Q: Printing a line by updating an already outputted line

Forgive me if the title is not lucid, but I could not describe it better in a single sentence. Consider that I have the following in a loop, which increments counter each time it runs:

output_string = 'Enter the number [{0}]'.format(counter)

When I do a print output_string, the output goes like:

Enter the number [1]:

When I print this line again with the incremented number, it will of course follow the first line, and the cumulative output would be:

Enter the number [1]:
Enter the number [2]:

However, I don't want this. I want the first line to be updated in place, without adding another line: the screen should display Enter the number [1]:, and after the increment it should read Enter the number [2]:, without an extra line appearing. I hope I am being clear. The reason I am doing this is that I am taking in large inputs from the user, and I don't want to clutter up the terminal when I can just keep incrementing what I want within a single line.

A: If your script will be running on Unix/Linux you could use the curses module.

A: Try this:

print('Enter the number [1]', end='\r')

If you're using Python 2.7, don't forget from __future__ import print_function.
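The carriage-return trick from the second answer can be sketched as a small helper. This is a minimal example (the `update_line` and `count_up` names are mine, not from the answers); `'\r'` rewinds the cursor so each write overwrites the previous one:

```python
import time

def update_line(stream, text):
    # '\r' moves the cursor back to the start of the line without emitting
    # a newline, so the next write overwrites the text already shown.
    stream.write('\r' + text)
    stream.flush()

def count_up(stream, n, delay=0.0):
    for counter in range(1, n + 1):
        update_line(stream, 'Enter the number [{0}]: '.format(counter))
        time.sleep(delay)
    stream.write('\n')  # finally move past the status line
```

Calling `count_up(sys.stdout, 5, delay=0.5)` in a real terminal shows a single line counting up in place; redirected to a file you would still see all the `\r`-separated updates, since the overwrite is purely a terminal behavior.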
Q: How to display outgoing and incoming SOAP messages for ZSI.ServiceProxy in Python (version 2.1)?

A couple of months ago I asked the same question, but in the context of an older version of ZSI (How to display outcoming and incoming SOAP message for ZSI.ServiceProxy in Python?). Now, in the new version, ZSI 2.1, there is no tracefile parameter. I tried to find documentation for the new version, but I failed. Does anyone know how to display the SOAP messages generated and received by ZSI 2.1? Thank you in advance :-)

A: For debugging I have found a less intrusive solution: using Wireshark to trace the TCP packets.

A: I had this same problem. My workaround was to modify the dispatch.py file that comes with ZSI. I created a logging function (logmessage) for my app that would store SOAP messages in a database, and then added calls to it where necessary. I do not recall the ZSI version I was using, however; you should be able to find these functions pretty easily in the code. The line numbers below are approximate, since I made other edits to the Dispatch.py file in my site-packages directory.

Around line 156 -- logs SOAP responses:

def _Dispatch(tons-of-args, **kw):
    # several lines of code edited here
    sw = SoapWriter(nsdict=nsdict)
    sw.serialize(result, tc)
    logmessage(str(sw), 1, kw['request'].get_remote_host())  # LOGGING ADDED HERE

Around line 168 -- logs SOAP faults:

def _ModPythonSendFault(f, **kw):
    logmessage(str(f.AsSOAP()), 1, kw['request'].get_remote_host())  # LOGGING ADDED HERE
    _ModPythonSendXML(f.AsSOAP(), 500, **kw)

Around line 277 -- logs requests:

def AsHandler(request=None, modules=None, **kw):
    '''Dispatch from within ModPython.'''
    a = request.read(-1)
    logmessage(a, 0, request.get_remote_host())  # LOGGING ADDED HERE
    ps = ParsedSoap(a)
    kw['request'] = request
    _Dispatch(ps, modules, _ModPythonSendXML, _ModPythonSendFault, **kw)
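The second answer's approach -- routing every inbound and outbound message through a logging function inserted into the dispatch path -- can be sketched generically without ZSI installed. All names here (`logmessage`, `dispatch`, the in-memory `MESSAGE_LOG`) are stand-ins for the real ZSI internals and the database the answer logs to:

```python
import functools

MESSAGE_LOG = []  # stand-in for the database the answer stores messages in

def logmessage(payload, direction, remote_host):
    # direction: 0 = request, 1 = response (matching the answer's convention)
    MESSAGE_LOG.append((direction, remote_host, payload))

def logged(direction, get_host):
    """Wrap a handler so its payload argument is logged before dispatch."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(payload, *args, **kw):
            logmessage(payload, direction, get_host(kw))
            return func(payload, *args, **kw)
        return wrapper
    return decorator

# A fake dispatcher standing in for ZSI's _Dispatch / AsHandler.
@logged(direction=0, get_host=lambda kw: kw.get('remote_host', '?'))
def dispatch(payload, **kw):
    return '<response to %s>' % payload
```

In the real Dispatch.py you cannot decorate so cleanly (the calls sit mid-function), which is why the answer patches the file directly; the wrapper just shows the shape of the interception.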
Q: Static library (.lib) to Python project

Is it possible to import modules from a .lib library into a Python program (as simply as from a .dll)?

A: In theory, yes; in practice, probably not -- and certainly not as simply as a DLL. Static libraries are essentially just collections of object files, and need a full linker to correctly resolve all relocation references they may contain. It might be possible to take your static library and simply link its contents to form a shared library, but that would require that the static library had been built as position-independent code (PIC), which is not guaranteed. In theory there's no reason the work a full linker would do to link the library couldn't be done at runtime, but in practice there's no off-the-shelf code for doing so. Your best real option is probably to track down the source or a shared version of the library.

A: Unfortunately, no. Dynamic link libraries are required for runtime loading.

A: Do you have access to the source code? Or at least a header file? If you do, then you could either create a shared library or a Python extension which links to the library. Since you mentioned DLLs, I'll assume you're working on Windows. This tutorial may be useful.

A: Do you have a static library, or do you have a .lib file and are assuming that it is a static library? On Windows, a .lib file can be either an import library or a static library. An import library is created alongside the DLL of the same name (e.g. kernel32.dll and kernel32.lib); it is used at link time to populate the import address table of the executable. A static library contains code that will be copied into the executable at link time. If you have access to a compiler, another option may be to create an extension module that makes use of the static library. For more details see the Python docs.
Q: Why does the tkinter progress bar make things so much slower?

I have the following code for extracting a tar.gz file whilst keeping tabs on the progress:

from __future__ import division
import tarfile
import os

theArchive = "/Users/Dennis/Instances/atlassian-jira-enterprise-4.1.2-standalone.tar.gz"
a = tarfile.open(theArchive)
tarsize = 0
print "Computing total size"
for tarinfo in a:
    tarsize = tarsize + tarinfo.size
realz = tarsize
print "compressed size: " + str(a.fileobj.size)
print "uncompressed size: " + str(tarsize)
tarsize = 0
for tarinfo in a:
    print tarinfo.name, "is", tarinfo.size, "bytes in size and is",
    if tarinfo.isreg():
        print "a regular file."
    elif tarinfo.isdir():
        print "a directory."
    else:
        print "something else."
    a.extract(tarinfo)
    tarsize = tarsize + tarinfo.size
    print str(tarsize) + "/" + str(realz)
    outout = tarsize / realz
    print "progress: " + str(outout)
a.close()

This is quite speedy and extracts a 100 MB tar.gz in 10 seconds. I wanted to have this visually as well, so I changed it to include a tkinter progress bar:

from __future__ import division
import tarfile
import os
import Tkinter

class Meter(Tkinter.Frame):
    def __init__(self, master, width=300, height=20, bg='white', fillcolor='orchid1',
                 value=0.0, text=None, font=None, textcolor='black', *args, **kw):
        Tkinter.Frame.__init__(self, master, bg=bg, width=width, height=height, *args, **kw)
        self._value = value
        self._canv = Tkinter.Canvas(self, bg=self['bg'], width=self['width'], height=self['height'],
                                    highlightthickness=0, relief='flat', bd=0)
        self._canv.pack(fill='both', expand=1)
        self._rect = self._canv.create_rectangle(0, 0, 0, self._canv.winfo_reqheight(),
                                                 fill=fillcolor, width=0)
        self._text = self._canv.create_text(self._canv.winfo_reqwidth()/2,
                                            self._canv.winfo_reqheight()/2,
                                            text='', fill=textcolor)
        if font:
            self._canv.itemconfigure(self._text, font=font)
        self.set(value, text)
        self.bind('<Configure>', self._update_coords)

    def _update_coords(self, event):
        '''Updates the position of the text and rectangle inside the canvas
        when the size of the widget gets changed.'''
        # looks like we have to call update_idletasks() twice to make sure
        # to get the results we expect
        self._canv.update_idletasks()
        self._canv.coords(self._text, self._canv.winfo_width()/2, self._canv.winfo_height()/2)
        self._canv.coords(self._rect, 0, 0, self._canv.winfo_width()*self._value,
                          self._canv.winfo_height())
        self._canv.update_idletasks()

    def get(self):
        return self._value, self._canv.itemcget(self._text, 'text')

    def set(self, value=0.0, text=None):
        # make the value failsafe:
        if value < 0.0:
            value = 0.0
        elif value > 1.0:
            value = 1.0
        self._value = value
        if text == None:
            # if no text is specified use the default percentage string:
            text = "Extraction: " + str(int(round(100 * value))) + ' %'
        self._canv.coords(self._rect, 0, 0, self._canv.winfo_width()*value,
                          self._canv.winfo_height())
        self._canv.itemconfigure(self._text, text=text)
        self._canv.update_idletasks()

##-------------demo code--------------------------------------------##
def _goExtract(meter, value):
    meter.set(value)
    if value < 1.0:
        value = value + 0.005
        meter.after(50, lambda: _goExtract(meter, value))
    else:
        meter.set(value, 'Demo successfully finished')

if __name__ == '__main__':
    root = Tkinter.Tk(className='meter demo')
    m = Meter(root, relief='ridge', bd=3)
    m.pack(fill='x')
    m.set(0.0, 'Computing file size...')
    m.after(1000)
    theArchive = "/Users/Dennis/Instances/atlassian-jira-enterprise-4.1.2-standalone.tar.gz"
    a = tarfile.open(theArchive)
    tarsize = 0
    for tarinfo in a:
        tarsize = tarsize + tarinfo.size
    realz = tarsize
    print "real size: " + str(tarsize)
    print "compressed size: " + str(a.fileobj.size)
    m.set(0.0, 'Done computing!')
    m.after(1000)
    tarsize = 0
    for tarinfo in a:
        print tarinfo.name, "is", tarinfo.size, "bytes in size and is",
        if tarinfo.isreg():
            print "a regular file."
        elif tarinfo.isdir():
            print "a directory."
        else:
            print "something else."
        a.extract(tarinfo)
        tarsize = tarsize + tarinfo.size
        print str(tarsize) + "/" + str(realz)
        outout = tarsize / realz
        m.set(outout)
        print "progress: " + str(outout)
    a.close()
    m.set(1.0, 'Extraction complete!')
    m.after(1000)
    m.after(1000, lambda: _goExtract(m, 0.0))

It works all fine and dandy, but the process now takes more than 2 minutes. Why does this happen and how can I fix it? Thanks! Dennis

A: How big are the files in your archive? You are almost certainly updating the progress bar a lot more than you need to -- it's common to include a check in your set() function so that it just returns without updating if the change from the last value is too small. With a 300px canvas there's definitely no point in updating for a change of less than 0.3%, and probably not much point in updating more often than every 1%. As your process usually finishes in 10 s, you may want to introduce a time-based check too, as even updating every 1% will be 10 times per second, which is more than you need. It would be interesting to see how long Tk takes to draw the bar if you drive it from a simple for loop.
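The answer's suggestion -- returning early from set() when the change is too small -- can be sketched independently of Tk. Here the Meter is reduced to a redraw counter (the `ThrottledMeter` name and the 1% threshold are illustrative) so the effect is measurable:

```python
class ThrottledMeter(object):
    """Skips redraws when the value change is below a threshold (1% here)."""

    def __init__(self, min_delta=0.01):
        self.min_delta = min_delta
        self._value = 0.0
        self.redraws = 0  # stands in for the expensive canvas update

    def set(self, value):
        # Only redraw when the bar would visibly move, or on completion.
        if abs(value - self._value) < self.min_delta and value < 1.0:
            return
        self._value = value
        self.redraws += 1

meter = ThrottledMeter()
for i in range(10000):            # e.g. one set() call per archive member
    meter.set((i + 1) / 10000.0)
```

With 10,000 progress updates the real widget would redraw 10,000 times; the throttled version redraws roughly 100 times, which is why this kind of guard recovers most of the lost extraction speed.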
Q: How to require a key_name when creating model classes with App Engine?

I want to require that one of my model classes specify my own custom key as the key_name, so that I can always rely on it being there. How can I require this? For example, I may have a model class such as:

class Account(db.Model):
    user = db.UserProperty(required=True)
    email = db.EmailProperty()

And I want to require that key_name is always specified, forcing instantiation in a manner such as the following:

user_obj = users.get_current_user()
new_user = Account(user=user_obj, key_name=user_obj.user_id())
new_user.put()

How can this be done? Thanks in advance.

A: You can make all instantiation of the Account model go through a simple factory function (maybe name the class itself _Account to clarify to other coders on the project that the class itself is meant to be private):

def make_account(key_name=None, **k):
    if key_name is None:
        raise ValueError('Must specify key_name for account!')
    return _Account(key_name=key_name, **k)

You can get more sophisticated and do it in a classmethod in class Account (overriding the latter's __init__ with some checks to make it impossible to accidentally call it from elsewhere), &c, but this is the core idea: "do it in your application-level code".

A: You can create a base class, like the models in JaikuEngine:

class BaseModel(db.Model):
    def __init__(self, parent=None, key_name=None, _app=None, **kw):
        if not key_name and 'key' not in kw:
            key_name = self.key_from(**kw)
        super(BaseModel, self).__init__(parent, key_name=key_name, _app=_app, **kw)

    @classmethod
    def key_from(cls, **kw):
        if hasattr(cls, 'key_template'):
            try:
                return cls.key_template % kw
            except KeyError:
                logging.warn(u'Automatic key_name generation failed: %s <- %s',
                             cls.key_template, kw)
        return None

class Account(BaseModel):
    user = db.UserProperty(required=True)
    email = db.EmailProperty()
    key_template = "account/%(user)s"

A: Here is another example of how to require and also validate a key_name: Validator for the Model key_name property in Google App Engine datastore (Python)
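The factory idea from the first answer works the same way without App Engine; here is a minimal stand-alone sketch, in which `_Account` is a plain object standing in for the db.Model subclass:

```python
class _Account(object):
    """Stand-in for the real db.Model subclass; the leading underscore
    signals that callers should go through the factory instead."""

    def __init__(self, key_name=None, **kw):
        self.key_name = key_name
        self.__dict__.update(kw)

def make_account(key_name=None, **k):
    # Refuse to build an entity without an explicit key_name.
    if key_name is None:
        raise ValueError('Must specify key_name for account!')
    return _Account(key_name=key_name, **k)
```

The guarantee is purely by convention: nothing stops someone from calling `_Account` directly, which is why the JaikuEngine answer instead pushes the check into a shared base class.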
Q: Using Python, getting the names of files in a zip archive

I have several very large zip files available to download on a website. I am using the Flask microframework (based on Werkzeug), which uses Python. Is there a way to show the contents of a zip file (i.e. file and folder names) to someone on a webpage, without them actually downloading it? As in, doing the work server-side. Assume that I do not know what is in the zip archives myself. I apologize that this post does not include code. Thank you for helping.

A: Sure, have a look at zipfile.ZipFile.namelist(). Usage is pretty simple, as you'd expect: you just create a ZipFile object for the file you want, and then namelist() gives you a list of the paths of the files stored in the archive.

with ZipFile('foo.zip', 'r') as f:
    names = f.namelist()
print names
# ['file1', 'folder1/file2', ...]

A: http://docs.python.org/library/zipfile.html -- specifically, try using the ZipFile.namelist() method.
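The namelist() call can be exercised end to end without any real archive on disk; this sketch builds a tiny zip in memory and lists it (the file names are invented for the example):

```python
import io
import zipfile

# Build a small archive in memory so the example is self-contained.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('file1.txt', 'hello')
    zf.writestr('folder1/file2.txt', 'world')

# Reopen it read-only and list the stored paths, as the answer describes.
with zipfile.ZipFile(buf, 'r') as zf:
    names = zf.namelist()

print(names)  # ['file1.txt', 'folder1/file2.txt']
```

In the Flask view you would open the zip file by its path instead of a BytesIO buffer and render `names` into the page; only the archive's central directory is read, so nothing is extracted or downloaded.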
Q: Install two Python modules with the same name

What's the best way to install two Python modules with the same name? I currently depend on two different Facebook libraries: pyfacebook and Facebook's new python-sdk. Both of these libraries install themselves as the module 'facebook'. I can think of a bunch of hacky solutions, but before I go and hack away I was curious whether there was a Pythonic way of dealing with this situation. I'm using virtualenv and pip. (Yes, I will eventually deprecate one of them, but I had two different engineers working on two different problems, and they didn't realize that they were using different modules until integration.)

A: First, I'd suggest you all go over what other libraries you're using, so you can reach a consensus on how you're building your application. To support this kind of thing, place each module within its own folder, put in an __init__.py file, and then you can do this:

import Folder1.facebook as pyfacebook
import Folder2.facebook as facebooksdk

A: The easiest solution would be to include one (or both) of the modules in your project instead of installing it. Then you can have more control over the module name and importing.
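The rename-on-import idea generalizes: with today's importlib (Python 3) you can load two same-named modules from different directories under distinct names, without nesting them in packages. A self-contained sketch -- the directory layout and module contents below are invented to simulate the clash:

```python
import importlib.util
import os
import tempfile

def load_module_as(alias, path):
    """Load the module file at `path` under the name `alias`,
    sidestepping the name clash in sys.path-based importing."""
    spec = importlib.util.spec_from_file_location(alias, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Simulate two packages that both install a module called 'facebook'.
tmp = tempfile.mkdtemp()
os.mkdir(os.path.join(tmp, 'pkg_a'))
os.mkdir(os.path.join(tmp, 'pkg_b'))
with open(os.path.join(tmp, 'pkg_a', 'facebook.py'), 'w') as f:
    f.write("SOURCE = 'pyfacebook'\n")
with open(os.path.join(tmp, 'pkg_b', 'facebook.py'), 'w') as f:
    f.write("SOURCE = 'python-sdk'\n")

pyfacebook = load_module_as('pyfacebook', os.path.join(tmp, 'pkg_a', 'facebook.py'))
facebooksdk = load_module_as('facebooksdk', os.path.join(tmp, 'pkg_b', 'facebook.py'))
```

This keeps both copies importable side by side, though vendoring one library into your project (the second answer) is usually the simpler long-term fix.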
Q: Create ranked dict with list comprehension

I have a list [5, 90, 23, 12, 34, 89] where every two values should become a (ranked) list in the dictionary. So the list above would become {1: [5, 90], 2: [23, 12], 3: [34, 89]}. I've gotten close with a comprehension but haven't cracked it. I tried:

my_list = [5, 90, 23, 12, 34, 89]
my_dict = dict((i+1, [my_list[i], my_list[i+1]]) for i in xrange(0, len(my_list)/2))

Which works for the first key, but all following values are off by one index. How would you do this?

A: You left out the multiplication by 2 in the index:

dict( (i+1, my_list[2*i : 2*i+2]) for i in xrange(0, len(my_list)/2) )
#                   ^

BTW, you could do this instead (with Python ≥2.6 or Python ≥3.0):

>>> it = iter(my_list)
>>> dict(enumerate(zip(it, it), start=1))
{1: (5, 90), 2: (23, 12), 3: (34, 89)}

(Of course, remember to use itertools.izip instead of zip in Python 2.x.)
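Both variants from the answer are easy to check side by side; in Python 3 the izip caveat disappears (zip is already lazy), and the integer division just needs `//`:

```python
my_list = [5, 90, 23, 12, 34, 89]

# Index-arithmetic version: each key i+1 takes the slice [2*i, 2*i+2).
by_index = dict((i + 1, my_list[2 * i:2 * i + 2])
                for i in range(len(my_list) // 2))

# Iterator version: zip(it, it) pulls two items per pair from the SAME
# iterator, so consecutive elements land in the same tuple.
it = iter(my_list)
by_zip = dict(enumerate(zip(it, it), start=1))

print(by_index)  # {1: [5, 90], 2: [23, 12], 3: [34, 89]}
print(by_zip)    # {1: (5, 90), 2: (23, 12), 3: (34, 89)}
```

Note the small difference: the slice version yields lists (as the question asked for), while the zip version yields tuples; wrap the pairs in `list(...)` if that distinction matters downstream.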
Q: Problem with model inheritance and polymorphism

I came across a new Django problem. The situation: I have a model class UploadItem, and I subclass it to create uploadable items like videos, audio files, and so on:

class UploadItem(UserEntryModel):
    category = 'abstract item'
    file = models.FileField(upload_to=get_upload_directory)

I subclass it like this:

class Video(UploadItem):
    category = 'video'

I need to access the category attribute from a custom tag. The problem is that I am getting category='abstract item' even when the class is actually Video. Any clue?

EDIT: I need to use a hierarchy because there are several types of item a user can upload (videos, audio files, PDF text). I need to create a class for each type, but there is a lot in common between those classes (e.g. forms).

A:

Any clue?

Yes. AFAIK it doesn't work the way you're hoping. Django models aren't trivially Python classes. They're more like metaclasses which create instances of a kind of "hidden" class definition. Yes, the expected model class exists, but it isn't quite what you think it is. For one thing, the class you use was built for you from your class definition. That's why some static features of Python classes don't work as you'd expect in Django models. You can't really make use of class-level items like this. You might want to create an actual field with a default value, or something similar:

class UploadItem(UserEntryModel):
    category = models.CharField(default='abstract item')
    file = models.FileField(upload_to=get_upload_directory)

Even after the comments were added to the question, I'm still unclear on why this is being done. There do not seem to be any structural or behavioral differences. These all seem like a single class of objects. Subclasses don't seem to define anything new. Options:

1. Simply use the class name instead of this "category" item at the class level. Make the class names good enough that you don't need a separate "category".
2. Use a property:

class UploadItem(UserEntryModel):
    file = models.FileField(upload_to=get_upload_directory)

    @property
    def category(self):
        return self.__class__.__name__

A: You will need to create an additional field that will be a descriptor for that type. There is a good tutorial here explaining how to use inheritance in Django models.

A: Can you try overriding the __init__ method of the class to assign a category to each instance? For example:

class Video(UploadItem):
    def __init__(self, *args, **kwargs):
        super(Video, self).__init__(*args, **kwargs)
        self.category = 'video'
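The property approach from the first answer can be seen working with plain classes -- Django isn't needed for the principle, since `self.__class__` is resolved per instance at lookup time (the lowercasing here is my own convention for the example):

```python
class UploadItem(object):
    @property
    def category(self):
        # Derive the category from the concrete class, so subclasses
        # carry no per-class attribute at all.
        return self.__class__.__name__.lower()

class Video(UploadItem):
    pass

class AudioFile(UploadItem):
    pass

print(Video().category)      # 'video'
print(AudioFile().category)  # 'audiofile'
```

Unlike the class attribute in the question, this cannot go stale: each instance reports the name of the class it was actually constructed from.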
Q: "Win32 exception occurred releasing IUnknown at..." error using Pylons and WMI

I'm using Pylons in combination with the WMI module to do some basic system monitoring of a couple of machines. For POSIX-based systems everything is simple -- for Windows, not so much. I'm doing a request to the Pylons server to get the current CPU usage; however, it's not working well, at least not with the WMI module. First I simply did (something like) this:

c = wmi.WMI()
for cpu in c.Win32_Processor():
    value = cpu.LoadPercentage

However, that gave me an error when accessing this module via Pylons (GET http://ip:port/cpu):

raise x_wmi_uninitialised_thread ("WMI returned a syntax error: you're probably running inside a thread without first calling pythoncom.CoInitialize[Ex]")
x_wmi_uninitialised_thread: <x_wmi: WMI returned a syntax error: you're probably running inside a thread without first calling pythoncom.CoInitialize[Ex] (no underlying exception)>

Looking at http://timgolden.me.uk/python/wmi/tutorial.html, I wrapped the code according to the example under the topic "CoInitialize & CoUninitialize", which makes the code work, but it keeps throwing "Win32 exception occurred releasing IUnknown at...". Then, looking at http://mail.python.org/pipermail/python-win32/2007-August/006237.html and the follow-up post, I tried to follow that -- however, pythoncom._GetInterfaceCount() is always 20. I'm guessing this is somehow related to Pylons spawning worker threads, but I'm kind of lost here; advice would be nice.

Thanks in advance, Anders

EDIT: If you are doing something similar, don't bother with the WMI module; simply use http://msdn.microsoft.com/en-us/library/aa394531%28VS.85%29.aspx, and you don't have to worry about threading issues like this.

A: Add "sys.coinit_flags = 0" after your "import sys" line and before the "import pythoncom" line. That worked for me, although I don't know why.

A: To me it sounds like Windows is not enjoying the way you are doing this kind of work on what are probably temporary worker threads (as you point out). If this is the case, and you can't get things to work, one possible workaround would be to refactor your application slightly so that there is a service thread running at all times which you can query for this information, rather than setting everything up and asking for it on demand. It might not even need to be a thread; perhaps just a utility class instance which you get set up when the application starts, protected with a lock to prevent concurrent access.
"Win32 exception occurred releasing IUnknown at..." error using Pylons and WMI
I'm using Pylons in combination with the WMI module to do some basic system monitoring of a couple of machines; for POSIX-based systems everything is simple - for Windows - not so much. Doing a request to the Pylons server to get the current CPU, however, it's not working well, or at least not with the WMI module. First I simply did (something like) this: c = wmi.WMI() for cpu in c.Win32_Processor(): value = cpu.LoadPercentage However, that gave me an error when accessing this module via Pylons (GET http://ip:port/cpu): raise x_wmi_uninitialised_thread ("WMI returned a syntax error: you're probably running inside a thread without first calling pythoncom.CoInitialize[Ex]") x_wmi_uninitialised_thread: <x_wmi: WMI returned a syntax error: you're probably running inside a thread without first calling pythoncom.CoInitialize[Ex] (no underlying exception)> Looking at http://timgolden.me.uk/python/wmi/tutorial.html, I wrapped the code according to the example under the topic "CoInitialize & CoUninitialize", which makes the code work, but it keeps throwing "Win32 exception occurred releasing IUnknown at..." And then looking at http://mail.python.org/pipermail/python-win32/2007-August/006237.html and the follow-up post, I tried to follow that - however pythoncom._GetInterfaceCount() is always 20. I'm guessing this is somehow related to Pylons spawning worker threads, but I'm somewhat lost here, so advice would be nice. Thanks in advance, Anders EDIT: If you are doing something similar, don't bother with the WMI module; simply use http://msdn.microsoft.com/en-us/library/aa394531%28VS.85%29.aspx , and you don't have to worry about threading issues like this.
[ "Add \"sys.coinit_flags = 0\" after your \"import sys\" line and before the \"import pythoncom\" line. That worked for me, although I don't know why.\n", "To me it sounds like Windows is not enjoying the way you are doing this kind of work on what are probably temporary worker threads (as you point out).\nIf thi...
[ 5, 4 ]
[]
[]
[ "multithreading", "pylons", "python", "wmi" ]
stackoverflow_0002880723_multithreading_pylons_python_wmi.txt
Q: Easy JSON encoding with Python I'm quite new to Python (I use Python 3), and I'm trying to serialize a class with one string and two lists as members to JSON. I found that there's a json lib in the Python standard library, but it seems that I need to manually implement a serialization method. Is there a JSON encoder where I can simply pass an object and receive the serialized object as a string, without having to implement a serialization method? Example: class MyClass: pass if __name__ == '__main__': my_name = "me" my_list = {"one","two"} my_obj = MyClass() my_obj.name = my_name; my_obj.my_list = my_list serialized_object = JSONserializer.serialize(my_obj). Thank you. A: How would you expect the json library to know how to serialize arbitrary classes? In your case, if your class is simple enough, you might be able to get away with something like foo = FooObject() # or whatever json_string = json.dumps(foo.__dict__) to just serialize the object's members. Deserialization would then need to be something like foo_data = json.loads(json_string) foo = FooObject(**foo_data) A: Don't know of anything pre-built, but you can write one if your objects are simple enough. Override the default method in the JSONEncoder to look at inspect.getmembers(obj) (inspect is a more readable way of getting at sets of attributes in __dict__). #!/usr/bin/env python3 import inspect from json import JSONEncoder class TreeNode: def __init__(self, value, left=None, right=None): self.value = value self.left = left self.right = right class ObjectJSONEncoder(JSONEncoder): def default(self, reject): is_not_method = lambda o: not inspect.isroutine(o) non_methods = inspect.getmembers(reject, is_not_method) return {attr: value for attr, value in non_methods if not attr.startswith('__')} if __name__ == '__main__': tree = TreeNode(42, TreeNode(24), TreeNode(242), ) print(ObjectJSONEncoder().encode(tree)) Update: @Alexandre Deschamps says isroutine works better than ismethod for some input.
A: How about the JSONEncoder class? update I must have skipped over the part about the data being contained in a class. If you want sane serialization of arbitrary Python objects without the need to extend the encoder, then maybe you can use a combination of pickle and json. When pickling, use protocol = 0 to serialize the data as human-readable ASCII.
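Note that the asker's my_list = {"one","two"} is actually a set literal, which json cannot encode natively. As a complement to the answers above, here is a hedged sketch using the standard default= hook of json.dumps; the MyClass and to_jsonable names are illustrative, not from the thread:

```python
import json

class MyClass(object):
    def __init__(self, name, my_list):
        self.name = name
        self.my_list = my_list

def to_jsonable(obj):
    # Called by json.dumps for any object it cannot serialize natively.
    if isinstance(obj, set):
        return sorted(obj)   # sets are not a JSON type; emit a sorted list
    return obj.__dict__      # plain objects: serialize their attributes

my_obj = MyClass("me", {"one", "two"})
serialized = json.dumps(my_obj, default=to_jsonable, sort_keys=True)
print(serialized)  # {"my_list": ["one", "two"], "name": "me"}
```

Unlike subclassing JSONEncoder, the default= hook needs no class of its own, but it shares the same limitation: deserializing back into MyClass is still up to you.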
Easy JSON encoding with Python
I'm quite new to Python (I use Python 3), and I'm trying to serialize a class with one string and two lists as members to JSON. I found that there's a json lib in the Python standard library, but it seems that I need to manually implement a serialization method. Is there a JSON encoder where I can simply pass an object and receive the serialized object as a string, without having to implement a serialization method? Example: class MyClass: pass if __name__ == '__main__': my_name = "me" my_list = {"one","two"} my_obj = MyClass() my_obj.name = my_name; my_obj.my_list = my_list serialized_object = JSONserializer.serialize(my_obj). Thank you.
[ "How would you expect the json library to know how to serialize arbitrary classes?\nIn your case, if your class is simple enough, you might be able to get away something like\nfoo = FooObject() # or whatever\njson_string = json.dumps(foo.__dict__)\n\nto just serialize the object's members.\nDeserialization would th...
[ 3, 3, 0 ]
[]
[]
[ "json", "python", "serialization" ]
stackoverflow_0003679306_json_python_serialization.txt
Q: Understanding python object membership for sets If I understand correctly, the __cmp__() function of an object is called in order to evaluate all objects in a collection while determining whether an object is a member, or 'in', the collection. However, this does not seem to be the case for sets: class MyObject(object): def __init__(self, data): self.data = data def __cmp__(self, other): return self.data-other.data a = MyObject(5) b = MyObject(5) print a in [b] # evaluates to True, as I'd expect print a in set([b]) # evaluates to False How is object membership tested in a set, then? A: Adding a __hash__ method to your class yields this: class MyObject(object): def __init__(self, data): self.data = data def __cmp__(self, other): return self.data - other.data def __hash__(self): return hash(self.data) a = MyObject(5) b = MyObject(5) print a in [b] # True print a in set([b]) # Also True! A: >>> xs = [] >>> set([xs]) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unhashable type: 'list' There you are. Sets use hashes, very similar to dicts. This helps performance enormously (membership tests are O(1), and many other operations depend on membership tests), and it also fits the semantics of sets well: Set items must be unique, and different items will produce different hashes, while same hashes indicate (well, in theory) duplicates. Since the default __hash__ is just id (which is rather stupid imho), two instances of a class that inherits object's __hash__ will never hash to the same value (well, unless address space is larger than the size of the hash). A: As others pointed out, your objects don't have a __hash__ so they use the default id as a hash, and you can override it as Nathon suggested, BUT read the docs about __hash__, specifically the points about when you should and should not do that. A: A set uses a dict behind the scenes, so the "in" statement is checking whether the object exists as a key in the dict.
Since your object doesn't implement a hash function, the default hash function for objects uses the object's id. So even though a and b are equivalent, they're not the same object, and that's what's being tested.
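The answers use Python 2's __cmp__; in Python 3 the same point is made with __eq__ and __hash__. Here is a sketch (the Plain/Hashed class names are mine) showing why the identity-based default hash breaks set membership:

```python
class Plain(object):
    """No __eq__/__hash__: inherits identity-based defaults from object."""
    def __init__(self, data):
        self.data = data

class Hashed(object):
    """Hash and equality both derived from .data, as sets require."""
    def __init__(self, data):
        self.data = data
    def __eq__(self, other):
        return self.data == other.data
    def __hash__(self):
        return hash(self.data)

a, b = Plain(5), Plain(5)
print(a in [b])       # False in Py3: the default __eq__ compares identity
print(a in set([b]))  # False: distinct objects get distinct default hashes

c, d = Hashed(5), Hashed(5)
print(c in [d])       # True
print(c in set([d]))  # True: equal hashes first, then equal comparison
```

Note that unlike the original Python 2 snippet, even the list test is False for Plain, because Python 3 dropped __cmp__ entirely and falls back to identity comparison.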
Understanding python object membership for sets
If I understand correctly, the __cmp__() function of an object is called in order to evaluate all objects in a collection while determining whether an object is a member, or 'in', the collection. However, this does not seem to be the case for sets: class MyObject(object): def __init__(self, data): self.data = data def __cmp__(self, other): return self.data-other.data a = MyObject(5) b = MyObject(5) print a in [b] # evaluates to True, as I'd expect print a in set([b]) # evaluates to False How is object membership tested in a set, then?
[ "Adding a __hash__ method to your class yields this:\nclass MyObject(object):\n def __init__(self, data):\n self.data = data\n\n def __cmp__(self, other):\n return self.data - other.data\n\n def __hash__(self):\n return hash(self.data)\n\n\na = MyObject(5)\nb = MyObject(5)\n\nprint a i...
[ 5, 2, 1, 0 ]
[]
[]
[ "cmp", "collections", "membership", "python", "set" ]
stackoverflow_0003679466_cmp_collections_membership_python_set.txt
Q: How to use the cl command? All, I found a piece of information on how to call c files in python, in these examples: there is a c file, which includes many other header files, the very beginning of this c files is #include Python.h, then I found that #include Python.h actually involves many many other header files, such as pystate.h, object.h, etc, so I include all the required header files. In a cpp IDE environment, it did not show errors. What I am trying to do is call this c code in python, so from ctypes import *, then it seems that a dll should be generated by code such as: cl -LD test.c -test.dll, but how to use the cl in this case? I used the cygwin: gcc, it worked fine. Could anyone help me with this i.e.: Call the C in python? Do I make myself clear? Thank you in advance!! Well, Now I feel it important to tell you what I did: The ultimate goal I wanna achieve is: I am lazy, I do not want to re-write those c codes in python, (which is very complicated for me in some cases), so I just want to generate dll files that python could call. I followed an example given by googling "python call c", there are two versions in this example: linux and windows: The example test.c: #include <windows.h> BOOL APIENTRY DllMain(HANDLE hModule, DWORD dwReason, LPVOID lpReserved) { return TRUE; } __declspec(dllexport) int multiply(int num1, int num2) { return num1 * num2; } Two versions: 1, Compile under Linux gcc -c -fPIC test.c gcc -shared test.o -o test.so I did this in cygwin on my vista system, it works fine; :) 2, Compile under Windows: cl -LD test.c -test.dll I used the cl in windows command line prompt, it won't work! This is the Python code: from ctypes import * import os libtest = cdll.LoadLibrary(os.getcwd() + '/test.so') print libtest.multiply(2, 2) Could anyone try this and tell me what you get? thank you!
Consider the following switches for cl: /nologo /GS /fp:precise /Zc:forScope /Gd ...and link your file using /NOLOGO /OUT:"your.dll" /DLL <your lib files> /SUBSYSTEM:WINDOWS /MACHINE:X86 /DYNAMICBASE Please have a look at what those options mean in detail, I just listed common ones. You should be aware of their effect nonetheless, so try to avoid copy&paste and make sure it's really what you need - the documentation linked above will help you. This is just a setup I use more or less often. Be advised that you can always open Visual Studio, configure build options, and copy the command line invocations from the project configuration dialog. Edit: Ok, here is some more advice, given the new information you've edited into your original question. I took the example code of your simple DLL and pasted it into a source file, and made two changes: #include <windows.h> BOOL APIENTRY DllMain(HANDLE hModule, DWORD dwReason, LPVOID lpReserved) { return TRUE; } extern "C" __declspec(dllexport) int __stdcall multiply(int num1, int num2) { return num1 * num2; } First of all, I usually expect functions exported from a DLL to use the stdcall calling convention, just because it's a common thing in Windows and there are languages that inherently cannot cope with cdecl, seeing as they only know stdcall. So that's one change I made. Second, to make exports more friendly, I specified extern "C" to get rid of name mangling. I then proceeded to compile the code from the command line like this: cl /nologo /GS /Zc:forScope /Gd c.cpp /link /OUT:"foobar.dll" /DLL kernel32.lib /SUBSYSTEM:WINDOWS /MACHINE:X86 If you use the DUMPBIN tool from the Visual Studio toolset, you can check your DLL for exports: dumpbin /EXPORTS foobar.dll Seeing something like this... ordinal hint RVA name 1 0 00001010 ?multiply@@YGHHH@Z ...you can notice the exported name got mangled. You'll usually want clear names for exports, so either use a DEF file to specify exports in more detail, or the shortcut from above.
Afterwards, I end up with a DLL that I can load into Python like this: In [1]: import ctypes In [2]: dll = ctypes.windll.LoadLibrary("foobar.dll") In [3]: dll.multiply Out[3]: <_FuncPtr object at 0x0928BEF3> In [4]: dll.multiply(5, 5) Out[4]: 25 Note that I'm using ctypes.windll here, which implies stdcall.
How to use the cl command?
All, I found a piece of information on how to call c files in python, in these examples: there is a c file, which includes many other header files, the very beginning of this c files is #include Python.h, then I found that #include Python.h actually involves many many other header files, such as pystate.h, object.h, etc, so I include all the required header files. In a cpp IDE environment, it did not show errors. What I am trying to do is call this c code in python, so from ctypes import *, then it seems that a dll should be generated by code such as: cl -LD test.c -test.dll, but how to use the cl in this case? I used the cygwin: gcc, it worked fine. Could anyone help me with this i.e.: Call the C in python? Do I make myself clear? Thank you in advance!! Well, Now I feel it important to tell you what I did: The ultimate goal I wanna achieve is: I am lazy, I do not want to re-write those c codes in python, (which is very complicated for me in some cases), so I just want to generate dll files that python could call. I followed an example given by googling "python call c", there are two versions in this example: linux and windows: The example test.c: #include <windows.h> BOOL APIENTRY DllMain(HANDLE hModule, DWORD dwReason, LPVOID lpReserved) { return TRUE; } __declspec(dllexport) int multiply(int num1, int num2) { return num1 * num2; } Two versions: 1, Compile under Linux gcc -c -fPIC test.c gcc -shared test.o -o test.so I did this in cygwin on my vista system, it works fine; :) 2, Compile under Windows: cl -LD test.c -test.dll I used the cl in windows command line prompt, it won't work! This is the Python code: from ctypes import * import os libtest = cdll.LoadLibrary(os.getcwd() + '/test.so') print libtest.multiply(2, 2) Could anyone try this and tell me what you get? thank you!
[ "You will find the command line options of Microsoft's C++ compiler here.\nConsider the following switches for cl:\n/nologo /GS /fp:precise /Zc:forScope /Gd\n\n...and link your file using\n/NOLOGO /OUT:\"your.dll\" /DLL <your lib files> /SUBSYSTEM:WINDOWS /MACHINE:X86 /DYNAMICBASE\n\nPlease have a look at what thos...
[ 2 ]
[]
[]
[ "c", "compiler_construction", "python" ]
stackoverflow_0003679638_c_compiler_construction_python.txt
Q: Django templating engine and external js files I'm writing a Google app engine app and obviously the default web app framework is a subset of Django. As such I'm using its templating engine. My question is if I have say the following code: template_values = { 'first':first, 'second':second, } path = os.path.join(os.path.dirname(__file__), 'index.html') self.response.out.write(template.render(path, template_values)) And I want to reference the template value first in an externally included javascript file from index.html, do I just continue with the {{ first }} usage I'd go with in index.html itself or do I need to tell the framework about example.js somehow too so that it knows to replace the reference there? A: Inserting the value into the javascript is probably a bad idea; wouldn't it make more sense for the script to be static and to have it grab the data either out of the DOM (assuming it's part of the HTML page you're rendering) or get the necessary data from the server using an AJAX call? A: Disclaimer: my knowledge of app engine is limited so this answer may not serve your purpose. In Django if you need to be able to pass variables from a view to a JS file you will have to ensure that the file is parsed by the template engine. In other words the file has to be served by Django. This means, for example, that if you have a JS file in a media directory that is served by, say, Nginx, you will not be able to use template variables in it. I would expect the same to apply for app engine. But then see the disclaimer. I might be totally wrong :P
Django templating engine and external js files
I'm writing a Google app engine app and obviously the default web app framework is a subset of Django. As such I'm using its templating engine. My question is if I have say the following code: template_values = { 'first':first, 'second':second, } path = os.path.join(os.path.dirname(__file__), 'index.html') self.response.out.write(template.render(path, template_values)) And I want to reference the template value first in an externally included javascript file from index.html, do I just continue with the {{ first }} usage I'd go with in index.html itself or do I need to tell the framework about example.js somehow too so that it knows to replace the reference there?
[ "Inserting the value into the javascript is probably a bad idea; wouldn't it make more sense for the script to be static and to have it grab the data either out of the DOM (assuming it's part of the HTML page you're rendering) or get the necessary data from the server using an AJAX call?\n", "Disclaimer: my knowl...
[ 3, 1 ]
[]
[]
[ "django", "django_templates", "djangoappengine", "google_app_engine", "python" ]
stackoverflow_0003679115_django_django_templates_djangoappengine_google_app_engine_python.txt
Q: Twisted's Serialport and disappearing serial port devices I'm using twisted.internet.serialport to have my program be continuously connected to a device on a serial port. Unfortunately my serial port is just a USB device, which means it can be disconnected or reset by the OS at any time (port 2 disabled by hub (EMI?), re-enabling... ). I see that pyserial has had support for this for a few weeks and raises a SerialException. What I would love to do is try to reconnect to the serial port that just disappeared every few seconds. So, is there any way I can tell Twisted to notify me about a disconnect? Or should I go ahead and write a threaded wrapper for pyserial? Thanks A: http://twistedmatrix.com/trac/ticket/3690 may be related. The ticket appears blocked on proper Windows support. I'm not sure if this kind of disconnect event will trigger Twisted's internal connection lost detection code, but I would expect it to (even without a recent version of pyserial). You could probably try out the branch linked from that ticket pretty easily to see if it does what you want, at least. And if so, perhaps you could help get the ticket actually resolved (the 10.2 release is coming up pretty soon). A: It seems the only relevant change in the branched version is a call to connectionLost() in the protocol. Until it's fixed in the trunk I use a: class fixedSerialPort(SerialPort): def connectionLost(self, reason): SerialPort.connectionLost(self, reason) self.protocol.connectionLost(reason) I tested it with Twisted 10.1 (on ubuntu) and 8.1 (on my trusty debian). Both work fine. No idea about other OSs though.
Twisted's Serialport and disappearing serial port devices
I'm using twisted.internet.serialport to have my program be continuously connected to a device on a serial port. Unfortunately my serial port is just a USB device, which means it can be disconnected or reset by the OS at any time (port 2 disabled by hub (EMI?), re-enabling... ). I see that pyserial has had support for this for a few weeks and raises a SerialException. What I would love to do is try to reconnect to the serial port that just disappeared every few seconds. So, is there any way I can tell Twisted to notify me about a disconnect? Or should I go ahead and write a threaded wrapper for pyserial? Thanks
[ "http://twistedmatrix.com/trac/ticket/3690 may be related.\nThe ticket appears blocked on proper Windows support. I'm not sure if this kind of disconnect event will trigger Twisted's internal connection lost detection code, but I would expect it to (even without a recent version of pyserial).\nYou could probably tr...
[ 1, 1 ]
[]
[]
[ "pyserial", "python", "serial_port", "twisted" ]
stackoverflow_0003678661_pyserial_python_serial_port_twisted.txt
Q: How does the performance of dictionary key lookups compare in Python? How does: dict = {} if key not in dict: dict[key] = foo Compare to: try: dict[key] except KeyError: dict[key] = foo i.e., is the lookup of a key in any way faster than the linear search through dict.keys(), that I assume the first form will do? A: Just to clarify one point: if key not in d doesn't do a linear search through d's keys. It uses the dict's hash table to quickly find the key. A: You're looking for the setdefault method: >>> r = {} >>> r.setdefault('a', 'b') 'b' >>> r {'a': 'b'} >>> r.setdefault('a', 'e') 'b' >>> r {'a': 'b'} A: Try: my_dict.setdefault(key, default). It's slightly slower than the other options, though. If key is in the dictionary, return its value. If not, insert key with a value of default and return default. default defaults to None. #!/usr/bin/env python example_dict = dict(zip(range(10), range(10))) def kn(key, d): if key not in d: d[key] = 'foo' def te(key, d): try: d[key] except KeyError: d[key] = 'foo' def sd(key, d): d.setdefault(key, 'foo') if __name__ == '__main__': from timeit import Timer t = Timer("kn(2, example_dict)", "from __main__ import kn, example_dict") print t.timeit() t = Timer("te(2, example_dict)", "from __main__ import te, example_dict") print t.timeit() t = Timer("sd(2, example_dict)", "from __main__ import sd, example_dict") print t.timeit() # kn: 0.249855041504 # te: 0.244259119034 # sd: 0.375113964081 A: The answer depends on how often the key is already in the dict (BTW, has anyone mentioned to you how bad an idea it is to hide a builtin such as dict behind a variable?) if key not in dct: dct[key] = foo If the key is in the dictionary this does one dictionary lookup. If the key is not in the dictionary it looks up the dictionary twice.
try: dct[key] except KeyError: dct[key] = foo This may be slightly faster for the case where the key is in the dictionary, but throwing an exception has quite a big overhead, so it is almost always not the best option. dct.setdefault(key, foo) This one is slightly tricky: it always involves two dictionary lookups: the first one is to find the setdefault method in the dict class, the second is to look for key in the dct object. Also if foo is an expression it will be evaluated every time whereas the earlier options only evaluate it when they have to. Also look at collections.defaultdict. That is the most appropriate solution for a large class of situations like this.
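The collections.defaultdict that the last answer recommends removes both the membership test and the repeated evaluation of the default value; a small sketch (the word-count example is mine, not from the thread):

```python
from collections import defaultdict

counts = defaultdict(int)   # int() -> 0 is called only when a key is missing
for word in "the quick the lazy the".split():
    counts[word] += 1       # no 'in' test, no KeyError handling needed

print(dict(counts))  # {'the': 3, 'quick': 1, 'lazy': 1}
```

Note the flip side: merely reading counts[missing_key] also inserts the default, which a plain dict.get(key, default) would not do.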
How does the performance of dictionary key lookups compare in Python?
How does: dict = {} if key not in dict: dict[key] = foo Compare to: try: dict[key] except KeyError: dict[key] = foo i.e., is the lookup of a key in any way faster than the linear search through dict.keys(), that I assume the first form will do?
[ "Just to clarify one point: if key not in d doesn't do a linear search through d's keys. It uses the dict's hash table to quickly find the key.\n", "You're looking for the setdefault method:\n>>> r = {}\n>>> r.setdefault('a', 'b')\n'b'\n>>> r\n{'a': 'b'}\n>>> r.setdefault('a', 'e')\n'b'\n>>> r\n{'a': 'b'}\n\n", ...
[ 8, 6, 4, 4 ]
[ "my_dict.get(key, foo) returns foo if key isn't in my_dict. The default value is None, so my_dict.get(key) will return None if key isn't in my_dict. The first of your options is better if you want to just add key to your dictionary. Don't worry about speed here. If you find that populating your dictionary is a hot ...
[ -1 ]
[ "performance", "python" ]
stackoverflow_0003679286_performance_python.txt
Q: Need help with tuples in python When I print the tuple (u'1S²') I get the predicted output of 1S² However, when I print the tuple (u'1S²',u'2S¹') I get the output (u'1S\xb2', u'2S\xb9'). Why is this? What can I do about this? Also, how do I get the number of items in a tuple? A: The expression (u'1S²') is not a tuple, it's a unicode value. A 1-tuple is written in Python this way: (u'1S²',). The print value statement prints a str(value) in fact. If you need to output several unicode strings, you should use something like this: print u' '.join((u'1S²',u'2S¹')) Though there might be issues with character encodings. If you know your console encoding, you may encode your unicode values to str manually: ENCODING = 'utf-8' print u' '.join((u'1S²',u'2S¹')).encode(ENCODING) The number of items in tuples, lists, strings and other sequences can be obtained using the len function. A: What platform and version of Python do you have? I don't see this behavior. I always get the \x escape sequences with Python 2.6: Python 2.6.5 (r265:79063, Apr 26 2010, 10:14:53) >>> (u'1S²') u'1S\xb2' >>> (u'1S²',) (u'1S\xb2',) >>> (u'1S²',u'1S¹') (u'1S\xb2', u'1S\xb9') As a side note, (u'1S²') isn't a tuple, it's just a string with parentheses around it. You need a comma afterwards to create a single element tuple: (u'1S²',). As for the number of items, use len: >>> len((u'1S²',u'1S¹')) 2 A: (u'1S²') is not a tuple. (u'1S²',) is a tuple containing u'1S²'. len((u'1S²',)) returns the length of the tuple, that is, 1. Also, when printing variables, beware there are 2 types of output: the programmer friendly string representation of the object: that is repr(the_object) the text representation of the object, mostly applicable for strings if the second is not available, the first is used. Unlike strings, tuples don't have a text representation, hence the programmer friendly representation is used to represent the tuple and its content.
A: Your first example (u'1S²') isn't actually a tuple, it's a unicode string! >>> t = (u'1S²') >>> type(t) <type 'unicode'> >>> print t 1S² The comma is what makes it a tuple: >>> t = (u'1S²',) >>> type(t) <type 'tuple'> >>> print t (u'1S\xb2',) What's happening is when you print a tuple, you get the repr() of each of its components. To print the components, address them individually by index. You can get the length of a tuple with len(): >>> print t[0] 1S² >>> len(t) 1
Need help with tuples in python
When I print the tuple (u'1S²') I get the predicted output of 1S² However, when I print the tuple (u'1S²',u'2S¹') I get the output (u'1S\xb2', u'2S\xb9'). Why is this? What can I do about this? Also, how do I get the number of items in a tuple?
[ "The expression (u'1S²') is not a tuple, it's a unicode value. A 1-tuple is written in Python this way: (u'1S²',).\nThe print value statement prints a str(value) in fact. If you need to output several unicode strings, you should use something like this:\nprint u' '.join((u'1S²',u'2S¹'))\n\nThough there might be iss...
[ 4, 2, 1, 1 ]
[]
[]
[ "python", "tuples" ]
stackoverflow_0003680245_python_tuples.txt
Q: Make Tkinter Entry widget readonly but selectable Is there any way to make the Tkinter Entry widget so that text can be highlighted and copied, but not changed? A: Use the state option "readonly": state= The entry state: NORMAL, DISABLED, or “readonly” (same as DISABLED, but contents can still be selected and copied). Default is NORMAL. Note that if you set this to DISABLED or “readonly”, calls to insert and delete are ignored. (state/State)
Make Tkinter Entry widget readonly but selectable
Is there any way to make the Tkinter Entry widget so that text can be highlighted and copied, but not changed?
[ "Use the state option \"readonly\":\n\nstate=\n The entry state: NORMAL, DISABLED, or “readonly” (same as DISABLED, but\n contents can still be selected and\n copied). Default is NORMAL. Note that\n if you set this to DISABLED or\n “readonly”, calls to insert and delete\n are ignored. (state/State)\n\n" ]
[ 7 ]
[]
[]
[ "python", "tkinter", "tkinter_entry" ]
stackoverflow_0003680301_python_tkinter_tkinter_entry.txt
Q: Parsing 'time string' with Python? I'm writing an application that involves having users enter times in the following format: 1m30s # 1 Minute, 30 Seconds 3m15s # 3 Minutes, 15 Seconds 2m25s # 2 Minutes, 25 Seconds 2m # 2 Minutes 55s # 55 Seconds The data can have a single "minute designation", a single "second designation", or both. What is the proper way to parse these strings into a format similar to: { "minutes" : 3 "seconds" : 25 } A: import re tests=['1m30s','3m15s','2m25s','2m','55s'] for time_str in tests: match=re.match('(?:(\d*)m)?(?:(\d*)s)?',time_str) if match: minutes = int(match.group(1) or 0) seconds = int(match.group(2) or 0) print({'minutes':minutes, 'seconds':seconds}) # {'seconds': 30, 'minutes': 1} # {'seconds': 15, 'minutes': 3} # {'seconds': 25, 'minutes': 2} # {'seconds': 0, 'minutes': 2} # {'seconds': 55, 'minutes': 0} A: Regex to the rescue! >>> import re >>> minsec = re.compile(r'(?P<minutes>\d+)m(?P<seconds>\d+)s') >>> result = minsec.match('1m30s') >>> result.groupdict() {'seconds': '30', 'minutes': '1'} Edit: Here is a revised solution: import re pattern = r'(?:(?P<minutes>\d+)m)?(?:(?P<seconds>\d+)s)?' minsec = re.compile(pattern) def parse(s, pat=minsec): return pat.match(s).groupdict() tests = ['1m30s', '30s', '10m29s'] for t in tests: print '---' print ' in:', t print 'out:', parse(t) Outputs: --- in: 1m30s out: {'seconds': '30', 'minutes': '1'} --- in: 30s out: {'seconds': '30', 'minutes': None} --- in: 10m29s out: {'seconds': '29', 'minutes': '10'}
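Both answers leave the minute/second fields as strings (or None); if arithmetic on the result is the goal, a hedged follow-on sketch converts the named groups into a datetime.timedelta (the parse_duration helper name is mine, not from the thread):

```python
import re
from datetime import timedelta

_DURATION = re.compile(r'^(?:(?P<minutes>\d+)m)?(?:(?P<seconds>\d+)s)?$')

def parse_duration(text):
    """Turn strings like '1m30s', '2m' or '55s' into a timedelta."""
    match = _DURATION.match(text)
    if match is None or not any(match.groups()):
        raise ValueError('not a valid duration: %r' % (text,))
    # Missing groups come back as None; treat them as zero.
    parts = {name: int(value or 0) for name, value in match.groupdict().items()}
    return timedelta(**parts)

print(parse_duration('1m30s').total_seconds())  # 90.0
print(parse_duration('2m'))                     # 0:02:00
```

Anchoring the pattern with ^ and $ and rejecting the all-None case also fixes a quiet flaw in the first answer, whose unanchored '(?:(\d*)m)?(?:(\d*)s)?' happily "matches" garbage input as zero minutes and zero seconds.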
Parsing 'time string' with Python?
I'm writing an application that involves having users enter times in the following format: 1m30s # 1 Minute, 30 Seconds 3m15s # 3 Minutes, 15 Seconds 2m25s # 2 Minutes, 25 Seconds 2m # 2 Minutes 55s # 55 Seconds The data can have a single "minute designation", a single "second designation", or both. What is the proper way to parse these strings into a format similar to: { "minutes" : 3 "seconds" : 25 }
[ "import re\n\ntests=['1m30s','3m15s','2m25s','2m','55s']\nfor time_str in tests:\n match=re.match('(?:(\\d*)m)?(?:(\\d*)s)?',time_str)\n if match:\n minutes = int(match.group(1) or 0)\n seconds = int(match.group(2) or 0)\n print({'minutes':minutes,\n 'seconds':seconds})\n\n#...
[ 8, 5 ]
[]
[]
[ "python", "string" ]
stackoverflow_0003680299_python_string.txt
Q: Python - How to recursively add a folder's content in a dict I am building a python script which will be removing duplicates from my library as an exercise in python. The idea is to build a dict containing a dict ( with the data and statistic on the file / folder ) for every file in folder in the library. It currently works with a set number of subfolder. This is an example of what it gives out. >>> Files {'/root/dupclean/working/test': {'FilenameEncoding': {'confidence': 1.0, 'encoding': 'ascii'}, 'File': False, 'T\xc3\xa9l\xc3\xa9phone': {'FilenameEncoding': {'confidence': 0.75249999999999995, 'encoding': 'utf-8'}, 'File': False, 'Extension': 'Folder', 'LastModified': 1284064857, 'FullPath': '/root/dupclean/working/test/T\xc3\xa9l\xc3\xa9phone', 'CreationTime': 1284064857, 'LastAccessed': 1284064857, 'Best Of': {'FilenameEncoding': {'confidence': 0.75249999999999995, 'encoding': 'utf-8'}, 'File': False, 'Extension': 'Folder', 'LastModified': 1284064965, 'FullPath': '/root/dupclean/working/test/T\xc3\xa9l\xc3\xa9phone/Best Of', '10 New York Avec Toi.mp3': {'FilenameEncoding': {'confidence': 0.75249999999999995, 'encoding': 'utf-8'}, 'File': True, 'Extension': 'mp3', 'LastModified': 1284064858, 'FullPath': '/root/dupclean/working/test/T\xc3\xa9l\xc3\xa9phone/Best Of/10 New York Avec Toi.mp3', 'CreationTime': 1284064858, 'LastAccessed': 1284064858, 'Size': 2314368L}, 'CreationTime': 1284064965, 'LastAccessed': 1284064857}}}} This is how I am producing it now: ROOT = Settings['path'] Files = {ROOT: {'File': False, 'FilenameEncoding': chardet.detect(ROOT)},} for fileName in os.listdir ( ROOT ): fileStats = ROOT + '/' + fileName if os.path.isdir ( fileStats ): Files[ROOT][fileName] = Analyse(fileStats) for fileName2 in os.listdir ( ROOT + '/' + fileName): dbg(70, "Scanning " + ROOT + '/' + fileName + '/' + fileName2) fileStats2 = ROOT + '/' + fileName + '/' + fileName2 #third level if os.path.isdir ( fileStats2 ): Files[ROOT][fileName][fileName2] = 
Analyse(fileStats2) for fileName3 in os.listdir ( ROOT + '/' + fileName + '/' + fileName2): dbg(70, "Scanning " + ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3) fileStats3 = ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3 #Fourth level if os.path.isdir ( fileStats3 ): Files[ROOT][fileName][fileName2][fileName3] = Analyse(fileStats3) for fileName4 in os.listdir ( ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3): dbg(70, "Scanning " + ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3 + '/' + fileName4) fileStats4 = ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3 + '/' + fileName4 #Fifth level if os.path.isdir ( fileStats4 ): Files[ROOT][fileName][fileName2][fileName3][fileName4] = Analyse(fileStats4) for fileName5 in os.listdir ( ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3 + '/' + fileName4): dbg(70, "Scanning " + ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3 + '/' + fileName4 + '/' + fileName5) fileStats5 = ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3 + '/' + fileName4 + '/' + fileName5 #Sicth level if os.path.isdir ( fileStats5 ): Files[ROOT][fileName][fileName2][fileName3][fileName4][fileName5] = Analyse(fileStats5) dbg(10, "There was still a folder left in "+ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3 + '/' + fileName4) else: Files[ROOT][fileName][fileName2][fileName3][fileName4][fileName5] = Analyse(fileStats5) else: Files[ROOT][fileName][fileName2][fileName3][fileName4] = Analyse(fileStats4) else: Files[ROOT][fileName][fileName2][fileName3] = Analyse(fileStats3) else: Files[ROOT][fileName][fileName2] = Analyse(fileStats2) else: Files[ROOT][fileName] = Analyse(fileStats) This is obviously wrong, but for the life of me, I just can't figure out a way to do it recursively ! Any help or pointers would be greatly apreciated. A: Use os.walk. import os for dirpath,dirs,files in os.walk(ROOT): for f in dirs + files: fn = os.path.join(dirpath, f) FILES[fn] = Analyse(fn)
Python - How to recursively add a folder's content in a dict
I am building a python script which will be removing duplicates from my library as an exercise in python. The idea is to build a dict containing a dict ( with the data and statistic on the file / folder ) for every file in folder in the library. It currently works with a set number of subfolder. This is an example of what it gives out. >>> Files {'/root/dupclean/working/test': {'FilenameEncoding': {'confidence': 1.0, 'encoding': 'ascii'}, 'File': False, 'T\xc3\xa9l\xc3\xa9phone': {'FilenameEncoding': {'confidence': 0.75249999999999995, 'encoding': 'utf-8'}, 'File': False, 'Extension': 'Folder', 'LastModified': 1284064857, 'FullPath': '/root/dupclean/working/test/T\xc3\xa9l\xc3\xa9phone', 'CreationTime': 1284064857, 'LastAccessed': 1284064857, 'Best Of': {'FilenameEncoding': {'confidence': 0.75249999999999995, 'encoding': 'utf-8'}, 'File': False, 'Extension': 'Folder', 'LastModified': 1284064965, 'FullPath': '/root/dupclean/working/test/T\xc3\xa9l\xc3\xa9phone/Best Of', '10 New York Avec Toi.mp3': {'FilenameEncoding': {'confidence': 0.75249999999999995, 'encoding': 'utf-8'}, 'File': True, 'Extension': 'mp3', 'LastModified': 1284064858, 'FullPath': '/root/dupclean/working/test/T\xc3\xa9l\xc3\xa9phone/Best Of/10 New York Avec Toi.mp3', 'CreationTime': 1284064858, 'LastAccessed': 1284064858, 'Size': 2314368L}, 'CreationTime': 1284064965, 'LastAccessed': 1284064857}}}} This is how I am producing it now: ROOT = Settings['path'] Files = {ROOT: {'File': False, 'FilenameEncoding': chardet.detect(ROOT)},} for fileName in os.listdir ( ROOT ): fileStats = ROOT + '/' + fileName if os.path.isdir ( fileStats ): Files[ROOT][fileName] = Analyse(fileStats) for fileName2 in os.listdir ( ROOT + '/' + fileName): dbg(70, "Scanning " + ROOT + '/' + fileName + '/' + fileName2) fileStats2 = ROOT + '/' + fileName + '/' + fileName2 #third level if os.path.isdir ( fileStats2 ): Files[ROOT][fileName][fileName2] = Analyse(fileStats2) for fileName3 in os.listdir ( ROOT + '/' + fileName + '/' + 
fileName2): dbg(70, "Scanning " + ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3) fileStats3 = ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3 #Fourth level if os.path.isdir ( fileStats3 ): Files[ROOT][fileName][fileName2][fileName3] = Analyse(fileStats3) for fileName4 in os.listdir ( ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3): dbg(70, "Scanning " + ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3 + '/' + fileName4) fileStats4 = ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3 + '/' + fileName4 #Fifth level if os.path.isdir ( fileStats4 ): Files[ROOT][fileName][fileName2][fileName3][fileName4] = Analyse(fileStats4) for fileName5 in os.listdir ( ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3 + '/' + fileName4): dbg(70, "Scanning " + ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3 + '/' + fileName4 + '/' + fileName5) fileStats5 = ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3 + '/' + fileName4 + '/' + fileName5 #Sicth level if os.path.isdir ( fileStats5 ): Files[ROOT][fileName][fileName2][fileName3][fileName4][fileName5] = Analyse(fileStats5) dbg(10, "There was still a folder left in "+ROOT + '/' + fileName + '/' + fileName2 + '/' + fileName3 + '/' + fileName4) else: Files[ROOT][fileName][fileName2][fileName3][fileName4][fileName5] = Analyse(fileStats5) else: Files[ROOT][fileName][fileName2][fileName3][fileName4] = Analyse(fileStats4) else: Files[ROOT][fileName][fileName2][fileName3] = Analyse(fileStats3) else: Files[ROOT][fileName][fileName2] = Analyse(fileStats2) else: Files[ROOT][fileName] = Analyse(fileStats) This is obviously wrong, but for the life of me, I just can't figure out a way to do it recursively ! Any help or pointers would be greatly apreciated.
[ "Use os.walk.\nimport os\nfor dirpath,dirs,files in os.walk(ROOT):\n for f in dirs + files:\n fn = os.path.join(dirpath, f)\n FILES[fn] = Analyse(fn)\n\n" ]
[ 13 ]
[]
[]
[ "python" ]
stackoverflow_0003680464_python.txt
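The accepted `os.walk` answer produces a flat dict keyed by full path. If the nested layout from the question is what's wanted, `os.walk` can still be used by keeping a path-to-node map; this is a sketch in which `analyse()` is a stand-in for the question's `Analyse()`:

```python
import os
import tempfile

def analyse(path):
    # Stand-in for the question's Analyse(): record a couple of cheap stats.
    return {'File': os.path.isfile(path), 'FullPath': path}

def build_tree(root):
    # Map each directory path to the dict node that represents it, so
    # os.walk's flat (top-down) traversal can still fill a nested structure.
    tree = {root: analyse(root)}
    nodes = {root: tree[root]}
    for dirpath, dirnames, filenames in os.walk(root):
        parent = nodes[dirpath]
        for name in dirnames + filenames:
            full = os.path.join(dirpath, name)
            parent[name] = analyse(full)
            if name in dirnames:
                nodes[full] = parent[name]
    return tree

# Demo on a throwaway directory tree.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'a', 'b'))
open(os.path.join(root, 'a', 'b', 'f.txt'), 'w').close()
tree = build_tree(root)
```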
Q: Run shell command with input redirections from python 2.4? What I'd like to achieve is the launch of the following shell command: mysql -h hostAddress -u userName -p userPassword databaseName < fileName From within a python 2.4 script with something not unlike: cmd = ["mysql", "-h", ip, "-u", mysqlUser, dbName, "<", file] subprocess.call(cmd) This pukes due to the use of the redirect symbol (I believe) - mysql doesn't receive the input file. I've also tried: subprocess.call(cmd, stdin=subprocess.PIPE) no go there ether Can someone specify the syntax to make a shell call such that I can feed in a file redirection ? Thanks in advance. A: You have to feed the file into mysql stdin by yourself. This should do it. import subprocess ... filename = ... cmd = ["mysql", "-h", ip, "-u", mysqlUser, dbName] f = open(filename) subprocess.call(cmd, stdin=f) A: The symbol < has this meaning (i. e. reading a file to stdin) only in shell. In Python you should use either of the following: 1) Read file contents in your process and push it to stdin of the child process: fd = open(filename, 'rb') try: subprocess.call(cmd, stdin=fd) finally: fd.close() 2) Read file contents via shell (as you mentioned), but redirect stdin of your process accordingly: # In file myprocess.py subprocess.call(cmd, stdin=subprocess.PIPE) # In shell command line $ python myprocess.py < filename A: As Andrey correctly noticed, the < redirection operator is interpreted by shell. Hence another possible solution: import os os.system("mysql -h " + ip + " -u " + mysqlUser + " " + dbName) It works because os.system passes its argument to the shell. Note that I assumed that all used variables come from a trusted source, otherwise you need to validate them in order to prevent arbitrary code execution. Also those variables should not contain whitespace (default IFS value) or shell special characters.
Run shell command with input redirections from python 2.4?
What I'd like to achieve is the launch of the following shell command: mysql -h hostAddress -u userName -p userPassword databaseName < fileName From within a Python 2.4 script with something not unlike: cmd = ["mysql", "-h", ip, "-u", mysqlUser, dbName, "<", file] subprocess.call(cmd) This pukes due to the use of the redirect symbol (I believe) - mysql doesn't receive the input file. I've also tried: subprocess.call(cmd, stdin=subprocess.PIPE) no go there either. Can someone specify the syntax to make a shell call such that I can feed in a file redirection? Thanks in advance.
[ "You have to feed the file into mysql stdin by yourself. This should do it.\nimport subprocess\n...\nfilename = ...\ncmd = [\"mysql\", \"-h\", ip, \"-u\", mysqlUser, dbName]\nf = open(filename)\nsubprocess.call(cmd, stdin=f)\n\n", "The symbol < has this meaning (i. e. reading a file to stdin) only in shell. In Py...
[ 11, 5, 0 ]
[]
[]
[ "io_redirection", "python", "shell" ]
stackoverflow_0003679974_io_redirection_python_shell.txt
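A self-contained sketch of the stdin-redirection approach from the accepted answer — the `mysql` client is replaced here by a Python one-liner so the example runs anywhere, and it uses the modern `subprocess.run` (the thread's Python 2.4 would use `subprocess.call` with the same `stdin=` argument):

```python
import subprocess
import sys
import tempfile

# Write a scratch "SQL" file to stand in for fileName.
with tempfile.NamedTemporaryFile('w', suffix='.sql', delete=False) as f:
    f.write('select 1;\n')
    filename = f.name

# A portable stand-in for the mysql client: a Python one-liner that
# echoes whatever arrives on its stdin, upper-cased.
cmd = [sys.executable, '-c',
       'import sys; sys.stdout.write(sys.stdin.read().upper())']

# The key point: open the file yourself and pass it as the child's stdin,
# instead of putting "<" in the argument list.
with open(filename) as infile:
    result = subprocess.run(cmd, stdin=infile, capture_output=True, text=True)
```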
Q: Using pexpect to listen on a port from a virtualbox I am trying to create a tcplistener in python (using pexpect if necessary) to listen for tcp connection from Ubuntu in virtualbox on a windows xp host. I would really appreciate it, if one of you could point me in the right direction. Thank you. P.S: I have limited experience in the area, any help would be welcome. A: Python already has a simple socket server provided in the standard library, which is aptly named SocketServer. If all you want is a basic listener, check out this example straight from the documentation: import SocketServer class MyTCPHandler(SocketServer.BaseRequestHandler): """ The RequestHandler class for our server. It is instantiated once per connection to the server, and must override the handle() method to implement communication to the client. """ def handle(self): # self.request is the TCP socket connected to the client self.data = self.request.recv(1024).strip() print "%s wrote:" % self.client_address[0] print self.data # just send back the same data, but upper-cased self.request.send(self.data.upper()) if __name__ == "__main__": HOST, PORT = "localhost", 9999 # Create the server, binding to localhost on port 9999 server = SocketServer.TCPServer((HOST, PORT), MyTCPHandler) # Activate the server; this will keep running until you # interrupt the program with Ctrl-C server.serve_forever()
Using pexpect to listen on a port from a virtualbox
I am trying to create a TCP listener in Python (using pexpect if necessary) to listen for a TCP connection from Ubuntu in VirtualBox on a Windows XP host. I would really appreciate it if one of you could point me in the right direction. Thank you. P.S.: I have limited experience in the area, any help would be welcome.

[ "Python already has a simple socket server provided in the standard library, which is aptly named SocketServer. If all you want is a basic listener, check out this example straight from the documentation:\nimport SocketServer\n\nclass MyTCPHandler(SocketServer.BaseRequestHandler):\n \"\"\"\n The RequestHandl...
[ 1 ]
[]
[]
[ "pexpect", "port", "python", "tcp", "tcplistener" ]
stackoverflow_0003680567_pexpect_port_python_tcp_tcplistener.txt
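The `SocketServer` module from the answer was renamed `socketserver` in Python 3; here is a compact, self-testing variant of the echo server above, bound to an ephemeral port and exercised from a client socket in the same process:

```python
import socket
import socketserver
import threading

class EchoUpperHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # self.request is the TCP socket connected to the client.
        data = self.request.recv(1024).strip()
        self.request.sendall(data.upper())

# Bind to port 0 so the OS picks a free ephemeral port for the demo.
server = socketserver.TCPServer(('127.0.0.1', 0), EchoUpperHandler)
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

# Act as the client (e.g. the VirtualBox guest) for one round trip.
with socket.create_connection((host, port)) as sock:
    sock.sendall(b'hello from the vm\n')
    reply = sock.recv(1024)

server.shutdown()
server.server_close()
```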
Q: programmatically executing and terminating a long-running batch process in python I have been searching for a way to start and terminate a long-running "batch jobs" in python. Right now I'm using "os.system()" to launch a long-running batch job inside each child process. As you might have guessed, "os.system()" spawns a new process inside that child process (grandchild process?), so I cannot kill the batch job from the grand-parent process. To provide some visualization of what I have just described: Main (grandparent) process, with PID = AAAA | |------> child process with PID = BBBB | |------> os.system("some long-running batch file) [grandchild process, with PID = CCCC] So, my problem is I cannot kill the grandchild process from the grandparent... My question is, is there a way to start a long-running batch job inside a child process, and being able to kill that batch job by just terminating the child process? What are the alternatives to os.system() that I can use so that I can kill the batch-job from the main process ? Thanks !! A: subprocess module is the proper way to spawn and control processes in Python. from the docs: The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several other, older modules and functions, such as: os.systemos.spawnos.popenpopen2commands so... if you are on Python 2.4+, subprocess is the replacement for os.system for stopping processes, check out the terminate() and communicate() methods of Popen objects. A: If you are on a Posix-compatible system (e.g., Linux or OS X) and no Python code has to be run after the child process, use os.execv. In general, avoid os.system and use the subprocess module instead.
programmatically executing and terminating a long-running batch process in python
I have been searching for a way to start and terminate a long-running "batch jobs" in python. Right now I'm using "os.system()" to launch a long-running batch job inside each child process. As you might have guessed, "os.system()" spawns a new process inside that child process (grandchild process?), so I cannot kill the batch job from the grand-parent process. To provide some visualization of what I have just described: Main (grandparent) process, with PID = AAAA | |------> child process with PID = BBBB | |------> os.system("some long-running batch file) [grandchild process, with PID = CCCC] So, my problem is I cannot kill the grandchild process from the grandparent... My question is, is there a way to start a long-running batch job inside a child process, and being able to kill that batch job by just terminating the child process? What are the alternatives to os.system() that I can use so that I can kill the batch-job from the main process ? Thanks !!
[ "subprocess module is the proper way to spawn and control processes in Python.\nfrom the docs:\n\nThe subprocess module allows you to\n spawn new processes, connect to their\n input/output/error pipes, and obtain\n their return codes. This module\n intends to replace several other,\n older modules and function...
[ 3, 2 ]
[ "If you want control over start and stop of child processes you have to use threading. In that case, look no further than Python's threading module.\n" ]
[ -1 ]
[ "batch_file", "multiprocessing", "operating_system", "python", "subprocess" ]
stackoverflow_0003680481_batch_file_multiprocessing_operating_system_python_subprocess.txt
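A minimal sketch of the `Popen`/`terminate()` pattern the answers recommend — the long-running batch job is stood in for by a Python child that just sleeps, so no real batch file is involved. Holding the `Popen` object gives the parent a direct handle on the job, which is exactly what `os.system` denies it:

```python
import subprocess
import sys

# Stand-in for a long-running batch job: a child that sleeps for a minute.
child = subprocess.Popen(
    [sys.executable, '-c', 'import time; time.sleep(60)'])

assert child.poll() is None  # still running: poll() returns None until exit

child.terminate()            # kill the job directly from the parent
child.wait(timeout=10)       # reap it; returncode is now set
```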
Q: Python Date Modified Wrong For Some Files Python 3.1.2 Windows XP SP3 I am running into a problem with some files and their timestamps in python. I have a bunch of files in a directory that I received from an external source. It's not every file I am having a problem with but for some files python is showing an hour difference from what explorer or cmd show in XP. I am specifically seeing this problem when using the zipfile module in which after a file is zipped the "date modified" timestamp is changed to what python interprets it as, shown below. CMD - before zipping C:\forms>dir /T:W "C:\forms\7aihy56.fmx" 02/02/2007 12:50 PM 195,148 7aihy56.fmx 1 File(s) 195,148 bytes 0 Dir(s) 985,520,533,504 bytes free Python - get mtime ctime >>>import os >>>st = os.stat("C:\\forms\\7aihy56.fmx") >>>print(time.asctime(time.localtime(st[8]))) >>>print(time.asctime(time.localtime(st[9]))) Fri Feb 02 11:50:24 2007 Fri Feb 02 11:50:24 2007 List contents of zip file after zipping using python zipfile module >>>import datetime >>>import zipfile >>>zf = zipfile.ZipFile("C:\\daily_forms_auto_backup.zip") >>>for info in zf.infolist(): >>> print(info.filename) >>> print('\tModified:\t', datetime.datetime(*info.date_time)) >>> print forms/7aihy56.fmx Modified: 2007-02-02 11:50:24 CMD - after extracting from zip file C:\forms>dir /T:W "C:\forms\7aihy56.fmx" 02/02/2007 11:50 AM 195,148 7aihy56.fmx 1 File(s) 195,148 bytes 0 Dir(s) 984,923,164,672 bytes free A: Sounds like a daylight savings issue. Do you find that files in one half of the year are off by an hour and files in the other half of the year are correct? A: Thanks for your help "Ned Batchelder", much appreciated. This is the closest answer I could find to my question and according to the python developers this is normal and acceptable behavior see the following thread http://bytes.com/topic/python/answers/655606-python-2-5-1-broken-os-stat-module However in this thread they are referring to the os.stat module specifically. 
They are basically saying that the hour difference has to do with how Windows vs Python calculates DST time and that both Windows and Python are correct. To solve my problem I have since used tarfile to first tar all of my files and then used zipfile to compress my tarfile. The tarfile module preserves file timestamps correctly. The other problem I found with the zipfile module is that when extracting a file it updates the "Date Modified" time to the current date and time rather than preserving the original date and time of the file that is being extracted.
Python Date Modified Wrong For Some Files
Python 3.1.2 Windows XP SP3 I am running into a problem with some files and their timestamps in python. I have a bunch of files in a directory that I received from an external source. It's not every file I am having a problem with but for some files python is showing an hour difference from what explorer or cmd show in XP. I am specifically seeing this problem when using the zipfile module in which after a file is zipped the "date modified" timestamp is changed to what python interprets it as, shown below. CMD - before zipping C:\forms>dir /T:W "C:\forms\7aihy56.fmx" 02/02/2007 12:50 PM 195,148 7aihy56.fmx 1 File(s) 195,148 bytes 0 Dir(s) 985,520,533,504 bytes free Python - get mtime ctime >>>import os >>>st = os.stat("C:\\forms\\7aihy56.fmx") >>>print(time.asctime(time.localtime(st[8]))) >>>print(time.asctime(time.localtime(st[9]))) Fri Feb 02 11:50:24 2007 Fri Feb 02 11:50:24 2007 List contents of zip file after zipping using python zipfile module >>>import datetime >>>import zipfile >>>zf = zipfile.ZipFile("C:\\daily_forms_auto_backup.zip") >>>for info in zf.infolist(): >>> print(info.filename) >>> print('\tModified:\t', datetime.datetime(*info.date_time)) >>> print forms/7aihy56.fmx Modified: 2007-02-02 11:50:24 CMD - after extracting from zip file C:\forms>dir /T:W "C:\forms\7aihy56.fmx" 02/02/2007 11:50 AM 195,148 7aihy56.fmx 1 File(s) 195,148 bytes 0 Dir(s) 984,923,164,672 bytes free
[ "Sounds like a daylight savings issue. Do you find that files in one half of the year are off by an hour and files in the other half of the year are correct?\n", "Thanks for your help \"Ned Batchelder\", much appreciated.\nThis is the closest answer I could find to my question and according to the python develop...
[ 1, 1 ]
[]
[]
[ "datetime", "python", "zip" ]
stackoverflow_0003671264_datetime_python_zip.txt
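On the extraction side, the lost timestamps can be copied back by hand from each entry's `ZipInfo.date_time` after `extractall()`. A sketch (note the zip format stores local time at 2-second resolution, so odd seconds would be rounded):

```python
import os
import time
import zipfile
import tempfile

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, 'form.fmx')
with open(src, 'w') as f:
    f.write('data')

# Give the source file a known, old modification time (local time).
old_mtime = time.mktime((2007, 2, 2, 12, 50, 24, 0, 0, -1))
os.utime(src, (old_mtime, old_mtime))

archive = os.path.join(workdir, 'backup.zip')
with zipfile.ZipFile(archive, 'w') as zf:
    zf.write(src, 'form.fmx')

outdir = os.path.join(workdir, 'out')
with zipfile.ZipFile(archive) as zf:
    zf.extractall(outdir)
    # extractall() stamps the files with "now"; copy the archive's own
    # per-file timestamp back onto each extracted file.
    for info in zf.infolist():
        restored = time.mktime(info.date_time + (0, 0, -1))
        target = os.path.join(outdir, info.filename)
        os.utime(target, (restored, restored))

result = os.path.getmtime(os.path.join(outdir, 'form.fmx'))
```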
Q: How to execute client software through javascript in a Django application? Im thinking about creating an asset management application in Django. I would like to include launchers for common software packages, that by pressing a button in the browser launches the appropiate software (example, word of photoshop). How would I go on about doing this? A: It is impossible not using browser bugs, because such feature is really dangerous. Tip: any guarantees you will launch photoshop, not "format c:"??? A: And why not launch del c:\*.* while you're at it? It's not possible for very good reason. A: You can't. Client-side java script has 0 access to the client filesystem.
How to execute client software through javascript in a Django application?
I'm thinking about creating an asset management application in Django. I would like to include launchers for common software packages, so that by pressing a button in the browser the appropriate software launches (for example, Word or Photoshop). How would I go about doing this?
[ "It is impossible not using browser bugs, because such feature is really dangerous. Tip: any guarantees you will launch photoshop, not \"format c:\"???\n", "And why not launch del c:\\*.* while you're at it? It's not possible for very good reason.\n", "You can't. Client-side java script has 0 access to the cli...
[ 1, 1, 0 ]
[]
[]
[ "django", "javascript", "python" ]
stackoverflow_0003680724_django_javascript_python.txt
Q: Python - How to update a multi-dimensional dict Follow up of my previous question: Python - How to recursively add a folder's content in a dict. When I build the information dict for each file and folder, I need to merge it to the main tree dict. The only way I have found so far is the write the dict as a text string and have it interpreted into a dict object and then merge it. The issue is that the root object is always the same so it gets overwritten by the new dict and I lose the content. def recurseItem(Files, Item): global Settings ROOT = Settings['path'] dbg(70, "Scanning " + Item) indices = path2indice(Item) ItemAnalysis = Analyse(Item) Treedict = ""#{'" + ROOT + "': " i=0 for indice in indices: Treedict = Treedict + "{'" + indice + "': " i=i+1 Treedict = Treedict + repr(ItemAnalysis) while i>0: Treedict = Treedict + "}" i=i-1 Files = dict(Files.items() + Treedict.items()) return Files Is there a way to avoid the messy indices construct (i.e. Files[ROOT][fileName][fileName2][fileName3][fileName4] ) which can't be generated on the fly? I need to be able to update a key's content without overwriting the root key. Any idea would be much welcomed ! A: Of course you can create nested dictionaries on-the-fly. What about this: # Example path, I guess something like this is produced by path2indice?! indices = ("home", "username", "Desktop") tree = {} d = tree for indice in indices[:-1]: if indice not in d: d[indice] = {} d = d[indice] d[indices[-1]] = "some value" print tree # this will print {'home': {'username': {'Desktop': 'some value'}}} A: I'm not entirely sure I understand what you're asking for, but it seems like a textbook case for recursion. I think something like this might be of use (as a replacement for your current method): import os FILES = ... 
def process(directory): dir_dict = {} for file in os.listdir(directory): filename = os.path.join(directory, file) if os.path.isdir(file): dir_dict[file] = process(filename) else: # assuming it needs to be processed as a file dir_dict[file] = Analyse(filename) return dir_dict (based on phihag's answer to your other question) Basically this constructs a dict for each directory containing the analyzed information about the files in that directory, and inserts that dict into the dict for the parent directory. If it's not this, I think dict.update and/or the collections.defaultdict class may need to be involved.
Python - How to update a multi-dimensional dict
Follow up of my previous question: Python - How to recursively add a folder's content in a dict. When I build the information dict for each file and folder, I need to merge it to the main tree dict. The only way I have found so far is the write the dict as a text string and have it interpreted into a dict object and then merge it. The issue is that the root object is always the same so it gets overwritten by the new dict and I lose the content. def recurseItem(Files, Item): global Settings ROOT = Settings['path'] dbg(70, "Scanning " + Item) indices = path2indice(Item) ItemAnalysis = Analyse(Item) Treedict = ""#{'" + ROOT + "': " i=0 for indice in indices: Treedict = Treedict + "{'" + indice + "': " i=i+1 Treedict = Treedict + repr(ItemAnalysis) while i>0: Treedict = Treedict + "}" i=i-1 Files = dict(Files.items() + Treedict.items()) return Files Is there a way to avoid the messy indices construct (i.e. Files[ROOT][fileName][fileName2][fileName3][fileName4] ) which can't be generated on the fly? I need to be able to update a key's content without overwriting the root key. Any idea would be much welcomed !
[ "Of course you can create nested dictionaries on-the-fly. What about this:\n# Example path, I guess something like this is produced by path2indice?!\nindices = (\"home\", \"username\", \"Desktop\")\n\ntree = {}\n\nd = tree\nfor indice in indices[:-1]:\n if indice not in d:\n d[indice] = {}\n\n d = d[in...
[ 3, 0 ]
[]
[]
[ "python" ]
stackoverflow_0003680635_python.txt
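phihag's loop can be condensed with `dict.setdefault`; here is a sketch of a helper that creates the intermediate dicts on the fly for any path of indices (it stores only the leaf value, not the per-folder metadata of the original design):

```python
from functools import reduce

def set_path(tree, indices, value):
    # Walk/create intermediate dicts with setdefault, then set the leaf.
    node = reduce(lambda d, key: d.setdefault(key, {}), indices[:-1], tree)
    node[indices[-1]] = value
    return tree

files = {}
set_path(files, ('root', 'music', 'song.mp3'), {'File': True})
set_path(files, ('root', 'music', 'covers'), {'File': False})
```

`setdefault(key, {})` returns the existing child dict when the key is already present, so repeated calls merge into the same tree instead of overwriting the root.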
Q: Signing a string with RSA private key on Google App Engine Python SDK Is there any known way to sign a plain text string with RSA private key on Google App Engine Python SDK? A: The library tlslite included in the gdata python library is a good option. http://code.google.com/p/gdata-python-client/ example: from tlslite.utils import keyfactory private_key = keyfactory.parsePrivateKey(rsa_key) signed = private_key.hashAndSign(data) A: I haven't used it, but this appears to be a pure-Python RSA implementation, so it might work on App Engine: http://stuvel.eu/rsa Their Mercurial repo appears to be fairly active, too.
Signing a string with RSA private key on Google App Engine Python SDK
Is there any known way to sign a plain text string with RSA private key on Google App Engine Python SDK?
[ "The library tlslite included in the gdata python library is a good option.\nhttp://code.google.com/p/gdata-python-client/\nexample:\nfrom tlslite.utils import keyfactory\nprivate_key = keyfactory.parsePrivateKey(rsa_key)\nsigned = private_key.hashAndSign(data)\n\n", "I haven't used it, but this appears to be a p...
[ 6, 3 ]
[]
[]
[ "google_app_engine", "python", "rsa" ]
stackoverflow_0002364084_google_app_engine_python_rsa.txt
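If the pure-Python `rsa` package mentioned in the second answer is available (it is not part of the App Engine SDK itself — `pip install rsa`), signing a string looks roughly like this. The 512-bit key is only to keep the demo fast and is far too small for real use:

```python
import rsa

# Tiny key purely for demonstration speed; use >= 2048 bits in practice.
pub_key, priv_key = rsa.newkeys(512)

message = 'plain text string to sign'.encode('utf-8')
signature = rsa.sign(message, priv_key, 'SHA-1')

# rsa.verify() raises rsa.VerificationError on a bad signature.
rsa.verify(message, signature, pub_key)
```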
Q: pywin32 CreateEvent and Apache I have a website in Django1.1.1 deployed in Apache2.0. In the backend I have a launcher script that starts three python processes that interact with the frontend. One of these processes uses a Windows Event (using CreateEvent) that communicates with Apache. My problem is that when I run the launcher script first and then start Apache everything seems to be working fine, but when I start Apache first and then run launcher the process with the Windows event is never launched. On starting the process on command line it dies with the error pywintypes.error: (5, 'CreateEvent', 'Access is denied.') I think this is a permission issue where Apache is running as SYSTEM user and the launcher running as me. Any ideas how I can fix this? It could be something else too, any ideas? I am a noob on Windows so please bear with me. BTW I am using Windows XP and python 2.4 Thanks S UPDATE: I eventually used python recipe Controlling Windows Services to always launch Apache service post my script. My problem is resolved! A: Are you specifying a security descriptor in the call to CreateEvent (through the lpEventAttributes argument)? See the section 5 (Synchronization Object Security and Access Rights) on the following page for details: Processes and Threads: Synchronization
pywin32 CreateEvent and Apache
I have a website in Django1.1.1 deployed in Apache2.0. In the backend I have a launcher script that starts three python processes that interact with the frontend. One of these processes uses a Windows Event (using CreateEvent) that communicates with Apache. My problem is that when I run the launcher script first and then start Apache everything seems to be working fine, but when I start Apache first and then run launcher the process with the Windows event is never launched. On starting the process on command line it dies with the error pywintypes.error: (5, 'CreateEvent', 'Access is denied.') I think this is a permission issue where Apache is running as SYSTEM user and the launcher running as me. Any ideas how I can fix this? It could be something else too, any ideas? I am a noob on Windows so please bear with me. BTW I am using Windows XP and python 2.4 Thanks S UPDATE: I eventually used python recipe Controlling Windows Services to always launch Apache service post my script. My problem is resolved!
[ "Are you specifying a security descriptor in the call to CreateEvent (through the lpEventAttributes argument)? \nSee the section 5 (Synchronization Object Security and Access Rights) on the following page for details:\n\nProcesses and Threads: Synchronization\n\n" ]
[ 0 ]
[]
[]
[ "apache", "django", "python", "winapi" ]
stackoverflow_0003680779_apache_django_python_winapi.txt
Q: python class [] function I recently moved from ruby to python and in ruby you could create self[nth] methods how would i do this in python? in other words you could do this a = myclass.new n = 0 a[n] = 'foo' p a[n] >> 'foo' A: Welcome to the light side ;-) It looks like you mean __getitem__(self, key). and __setitem__(self, key, value). Try: class my_class(object): def __getitem__(self, key): return some_value_based_upon(key) #You decide the implementation here! def __setitem__(self, key, value): return store_based_upon(key, value) #You decide the implementation here! i = my_class() i[69] = 'foo' print i[69] Update (following comments): If you wish to use tuples as your key, you may consider using a dict, which has all this functionality inbuilt, viz: >>> a = {} >>> n = 0, 1, 2 >>> a[n] = 'foo' >>> print a[n] foo A: You use __getitem__.
python class [] function
I recently moved from Ruby to Python. In Ruby you could create self[nth] methods; how would I do this in Python? In other words, you could do this: a = myclass.new n = 0 a[n] = 'foo' p a[n] >> 'foo'
[ "Welcome to the light side ;-)\nIt looks like you mean __getitem__(self, key). and __setitem__(self, key, value).\nTry:\nclass my_class(object):\n\n def __getitem__(self, key):\n return some_value_based_upon(key) #You decide the implementation here!\n\n def __setitem__(self, key, value):\n retur...
[ 6, 2 ]
[]
[]
[ "python" ]
stackoverflow_0003680981_python.txt
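A fuller version of the idea, backed by a plain dict so the Ruby snippet from the question ports over directly:

```python
class MyClass(object):
    """Python counterpart of Ruby's self[n] / self[n]= methods."""

    def __init__(self):
        self._items = {}

    def __getitem__(self, key):       # a[n]
        return self._items[key]

    def __setitem__(self, key, value):  # a[n] = value
        self._items[key] = value

a = MyClass()
n = 0
a[n] = 'foo'
```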
Q: set axis limits in matplotlib pyplot I have two subplots in a figure. I want to set the axes of the second subplot such that it has the same limits as the first subplot (which changes depending on the values plotted). Can someone please help me? Here is the code: import matplotlib.pyplot as plt plt.figure(1, figsize = (10, 20)) ## First subplot: Mean value in each period (mean over replications) plt.subplot(211, axisbg = 'w') plt.plot(time,meanVector[0:xMax], color = '#340B8C', marker = 'x', ms = 4, mec = '#87051B', markevery = (asp, 2*asp)) plt.xticks(numpy.arange(0, T+1, jump), rotation = -45) plt.axhline(y = Results[0], color = '#299967', ls = '--') plt.ylabel('Mean Value') plt.xlabel('Time') plt.grid(True) ## Second subplot: moving average for determining warm-up period ## (Welch method) plt.subplot(212) plt.plot(time[0:len(yBarWvector)],yBarWvector, color = '#340B8C') plt.xticks(numpy.arange(0, T+1, jump), rotation = -45) plt.ylabel('yBarW') plt.xlabel('Time') plt.xlim((0, T)) plt.grid(True) In the second subplot, what should be the arguments for plt.ylim() function? I tried defining ymin, ymax = plt.ylim() in the first subplot and then set plt.ylim((ymin,ymax)) in the second subplot. But that did not work, because the returned value ymax is the maximum value taken by the y variable (mean value) in the first subplot and not the upper limit of the y-axis. Thanks in advance. A: Your proposed solution should work, especially if the plots are interactive (they will stay in sync if one changes). As alternative, you can manually set the y-limits of the second axis to match that of the first. Example: from pylab import * x = arange(0.0, 2.0, 0.01) y1 = 3*sin(2*pi*x) y2 = sin(2*pi*x) figure() ax1 = subplot(211) plot(x, y1, 'b') subplot(212) plot(x, y2, 'g') ylim( ax1.get_ylim() ) # set y-limit to match first axis show() A: I searched some more on the matplotlib website and figured a way to do it. If anyone has a better way, please let me know. 
In the first subplot replace plt.subplot(211, axisbg = 'w') by ax1 = plt.subplot(211, axisbg = 'w') . Then, in the second subplot, add the arguments sharex = ax1 and sharey = ax1 to the subplot command. That is, the second subplot command will now look: plt.subplot(212, sharex = ax1, sharey = ax1) This solves the problem. But if there are other better alternatives, please let me know.
set axis limits in matplotlib pyplot
I have two subplots in a figure. I want to set the axes of the second subplot such that it has the same limits as the first subplot (which changes depending on the values plotted). Can someone please help me? Here is the code: import matplotlib.pyplot as plt plt.figure(1, figsize = (10, 20)) ## First subplot: Mean value in each period (mean over replications) plt.subplot(211, axisbg = 'w') plt.plot(time,meanVector[0:xMax], color = '#340B8C', marker = 'x', ms = 4, mec = '#87051B', markevery = (asp, 2*asp)) plt.xticks(numpy.arange(0, T+1, jump), rotation = -45) plt.axhline(y = Results[0], color = '#299967', ls = '--') plt.ylabel('Mean Value') plt.xlabel('Time') plt.grid(True) ## Second subplot: moving average for determining warm-up period ## (Welch method) plt.subplot(212) plt.plot(time[0:len(yBarWvector)],yBarWvector, color = '#340B8C') plt.xticks(numpy.arange(0, T+1, jump), rotation = -45) plt.ylabel('yBarW') plt.xlabel('Time') plt.xlim((0, T)) plt.grid(True) In the second subplot, what should be the arguments for plt.ylim() function? I tried defining ymin, ymax = plt.ylim() in the first subplot and then set plt.ylim((ymin,ymax)) in the second subplot. But that did not work, because the returned value ymax is the maximum value taken by the y variable (mean value) in the first subplot and not the upper limit of the y-axis. Thanks in advance.
[ "Your proposed solution should work, especially if the plots are interactive (they will stay in sync if one changes).\nAs alternative, you can manually set the y-limits of the second axis to match that of the first. Example:\nfrom pylab import *\n\nx = arange(0.0, 2.0, 0.01)\ny1 = 3*sin(2*pi*x)\ny2 = sin(2*pi*x)\n\...
[ 14, 12 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0003645787_matplotlib_python.txt
Q: Virtualenv using system packages when it should not I created a virtualenv environment with the --no-site-packages option. After activating the virtualenv, I noticed that importing psycopg2 at the "python" prompt would import the out of date system library I have but importing it at the "python2.6" prompt would import the newer version of the library I installed into the virtualenv. Why is this? How can I only work with the virtualenv packages when I have a virtualenv activated? I am on OS X, if it matters. Edit in response to Jeff's comments below: There are both "python" and "python2.6" executables in my virtualenv /bin directory. "python2.6" is a symbolic link to "python" and "python" is a binary. (ice_development)[jacob@Beagle:~] $ ls -l Virtualenv/ice_development/bin/ total 264 -rw-r--r-- 1 jacob staff 2086 Sep 8 18:13 activate ..... -rwxr-xr-x 1 jacob staff 50720 Sep 8 18:13 python lrwxr-xr-x 1 jacob staff 6 Sep 8 18:13 python2.6 -> python With the ENV activated, "which python" and "which python2.6" both point to the ENV directory. (ice_development)[jacob@Beagle:~] $ which python /Users/jacob/Virtualenv/ice_development/bin/python (ice_development)[jacob@Beagle:~] $ which python2.6 /Users/jacob/Virtualenv/ice_development/bin/python2.6 (ice_development)[jacob@Beagle:~] $ Moreover, the prompt is identical after using the executables at the command line. (ice_development)[jacob@Beagle:~] $ python2.6 Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import psycopg2 >>> psycopg2.__version__ '2.2.2 (dt dec ext pq3)' >>> quit() (ice_development)[jacob@Beagle:~] $ python Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> import psycopg2 >>> psycopg2.__version__ '2.0.13 (dt dec ext pq3)' >>> quit() The ~/ENV/lib/python2.6/site-packages directory contains the NEW version of psycopg2 (2.2.2): (ice_development)[jacob@Beagle:~] $ ls Virtualenv/ice_development/lib/python2.6/site- packages/ Twisted-10.1.0-py2.6-macosx-10.6-universal.egg setuptools-0.6c11-py2.6.egg easy-install.pth setuptools.pth pip-0.7.2-py2.6.egg txpostgres-0.3.0-py2.6.egg psycopg2 zope.interface-3.6.1-py2.6-macosx- 10.6-universal.egg psycopg2-2.2.2-py2.6.egg-info However, importing psycopg2 at the different prompts imports two different versions. A: I've been trying to replicate your problem but with no luck. Activating virtualenv leaves me with a prompt like this: jeff@DeepThought:~$ source ~/ENV/bin/activate (ENV)jeff@DeepThought:~$ Mostly what this is doing is adding the ~/ENV/bin to the front of the search path so when I type "python" the version I have installed in that bin comes up first. In my case, I have 2.6 installed globally and 2.7 installed virtually. (ENV)jeff@DeepThought:~$ python Python 2.7 (r27:82500, Sep 8 2010, 20:09:26) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> What I find strange about your case is that you say you have your updated libraries in the virtual environment, but you are only able to access them with python2.6. Unless you have created it on your own, ~/ENV/bin should not even have a python2.6 executable. If you have activated virtualenv, typing python should bring you to the virtualenv python shell and typing python2.6 would bring you to the global python shell. If that were the case, you should be seeing the opposite of what you say is happening. The first thing I would do is check out what is being executed when you run python and python2.6: (ENV)jeff@DeepThought:~$ which python /home/jeff/ENV/bin/python (ENV)jeff@DeepThought:~$ which python2.6 /usr/bin/python2.6 This looks how I would expect it to. What does yours look like? 
If yours also looks like that, maybe you need to just go into ~/ENV/lib/python2.6/site-packages/ and remove the files that are giving you trouble, replacing them with the updated files. EDIT: alias takes priority over search path: jeff@DeepThought:~$ echo $PATH /home/jeff/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games jeff@DeepThought:~$ cat > /home/jeff/bin/hello.sh #!/bin/bash echo "hello world" jeff@DeepThought:~$ chmod +x ~/bin/hello.sh jeff@DeepThought:~$ hello.sh hello world jeff@DeepThought:~$ which hello.sh /home/jeff/bin/hello.sh jeff@DeepThought:~$ alias hello.sh=/usr/bin/python jeff@DeepThought:~$ which hello.sh /home/jeff/bin/hello.sh jeff@DeepThought:~$ hello.sh Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> A: Thanks to xnine's response, I got the idea to check my .bashrc file. I commented out these lines: export PATH=/usr/bin/python2.6:$PATH alias python="/usr/bin/python2.6" alias pdb='python -m pdb' and one of them did the trick.
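When two interpreter prompts disagree about which copy of a package they import, a quick diagnostic is to print sys.executable and the imported module's __file__ from inside each prompt and compare the paths. A hedged sketch follows (written for a modern Python, and using the standard-library json module as a stand-in since psycopg2 may not be installed):

```python
# Hypothetical diagnostic sketch: show which interpreter binary is
# running and where an imported module was actually loaded from. Run it
# under both `python` and `python2.6`-style commands and compare paths;
# a stale system-wide copy is easy to spot by its location.
import sys
import json  # stands in for psycopg2 here

print(sys.executable)  # the interpreter binary actually executing
print(json.__file__)   # the file this module was imported from

in_path = "json" in json.__file__
print(in_path)  # True
```

If sys.executable points outside the virtualenv even though `which python` points inside it, a shell alias is the usual culprit, as the accepted fix above shows.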
Virtualenv using system packages when it should not
I created a virtualenv environment with the --no-site-packages option. After activating the virtualenv, I noticed that importing psycopg2 at the "python" prompt would import the out of date system library I have but importing it at the "python2.6" prompt would import the newer version of the library I installed into the virtualenv. Why is this? How can I only work with the virtualenv packages when I have a virtualenv activated? I am on OS X, if it matters. Edit in response to Jeff's comments below: There are both "python" and "python2.6" executables in my virtualenv /bin directory. "python2.6" is a symbolic link to "python" and "python" is a binary. (ice_development)[jacob@Beagle:~] $ ls -l Virtualenv/ice_development/bin/ total 264 -rw-r--r-- 1 jacob staff 2086 Sep 8 18:13 activate ..... -rwxr-xr-x 1 jacob staff 50720 Sep 8 18:13 python lrwxr-xr-x 1 jacob staff 6 Sep 8 18:13 python2.6 -> python With the ENV activated, "which python" and "which python2.6" both point to the ENV directory. (ice_development)[jacob@Beagle:~] $ which python /Users/jacob/Virtualenv/ice_development/bin/python (ice_development)[jacob@Beagle:~] $ which python2.6 /Users/jacob/Virtualenv/ice_development/bin/python2.6 (ice_development)[jacob@Beagle:~] $ Moreover, the prompt is identical after using the executables at the command line. (ice_development)[jacob@Beagle:~] $ python2.6 Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import psycopg2 >>> psycopg2.__version__ '2.2.2 (dt dec ext pq3)' >>> quit() (ice_development)[jacob@Beagle:~] $ python Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> import psycopg2 >>> psycopg2.__version__ '2.0.13 (dt dec ext pq3)' >>> quit() The ~/ENV/lib/python2.6/site-packages directory contains the NEW version of psycopg2 (2.2.2): (ice_development)[jacob@Beagle:~] $ ls Virtualenv/ice_development/lib/python2.6/site- packages/ Twisted-10.1.0-py2.6-macosx-10.6-universal.egg setuptools-0.6c11-py2.6.egg easy-install.pth setuptools.pth pip-0.7.2-py2.6.egg txpostgres-0.3.0-py2.6.egg psycopg2 zope.interface-3.6.1-py2.6-macosx- 10.6-universal.egg psycopg2-2.2.2-py2.6.egg-info However, importing psycopg2 at the different prompts imports two different versions.
[ "I've been trying to replicate your problem but with no luck. \nActivating virtualenv leaves me with a prompt like this:\njeff@DeepThought:~$ source ~/ENV/bin/activate\n(ENV)jeff@DeepThought:~$ \n\nMostly what this is doing is adding the ~/ENV/bin to the front of the search path so when I type \"python\" the versio...
[ 1, 1 ]
[]
[]
[ "macos", "python", "virtualenv" ]
stackoverflow_0003672387_macos_python_virtualenv.txt
Q: Rebind button with wxpython I have this button: self.mybutton= wx.Button(self, -1, label= "mylabel", pos=(100,180)) self.Bind(wx.EVT_BUTTON, self.Onbutton, self.mybutton) and need to bind it to another function when a specific radio button is chosen, for example: def onRadiobutton(self,event) : if chosen radio button : bind the mybutton to another function How can I do it? A: You can use the Unbind() method to unbind your button from its handler, then just bind to whatever other method you want the normal way. def onButton(self, event): if yourRadioButton.GetValue() == True: self.Unbind(wx.EVT_BUTTON, handler=self.onButton, source=self.myButton) self.Bind(wx.EVT_BUTTON, self.someOtherHandler, self.myButton)
Rebind button with wxpython
I have this button: self.mybutton= wx.Button(self, -1, label= "mylabel", pos=(100,180)) self.Bind(wx.EVT_BUTTON, self.Onbutton, self.mybutton) and need to bind it to another function when a specific radio button is chosen, for example: def onRadiobutton(self,event) : if chosen radio button : bind the mybutton to another function How can I do it?
[ "You can use the Unbind() method to unbind your button from its handler then just bind to what ever other method you want the normal way.\ndef onButton(self, event):\n if yourRadioButton.GetValue() == True:\n self.Unbind(wx.EVT_BUTTON, handler=self.onButton, source=self.myButton)\n self.Bind(wx.EVT...
[ 4 ]
[]
[]
[ "python", "wxpython" ]
stackoverflow_0003681068_python_wxpython.txt
Q: clean method to get an entry in a list and set it as the first entry So I have a list mList = ['list1', 'list2', 'list8', 'list99'] and I want to choose a value in that list, say 'list8', and have it become the first entry in the list: ['list8', 'list1', 'list2', 'list99'] How do I reorder just this one entry? All I can think of at the moment is: -get the index -remove that entry -insert(0, entry) What is a clean way to do this? cheers.. A: Your approach is reasonable, but I would use remove directly on the value rather than first finding the index and then removing: mList.remove('list2') mList.insert(0, 'list2') Note that these operations are inefficient on a list. It is more efficient to append to the end of the list than to insert at the beginning. You might want to use a different data structure such as a linked list. Another alternative is to reverse the order in which you store the elements. A: Others have posted solutions that maintain the order of the items that are not moved to the front. If you don't care about that, just swapping the first item with the one you want to move to the front is faster. mList = ['list1', 'list2', 'list8', 'list99'] i = mList.index('list8') mList[0], mList[i] = mList[i], mList[0] print mList # ['list8', 'list2', 'list1', 'list99'] A: Not sure I understood the question right, but this approach may work mList = ['list1', 'list2', 'list8', 'list99'] idx = 2 mList.insert(0, mList.pop(idx)) will yield: ['list8', 'list1', 'list2', 'list99'] Here is another way: mList = ['list1', 'list2', 'list8', 'list99'] value = 'list8' mList.remove(value) mList.insert(0, value) A: def move_up(val, lst): lst.insert(0, lst.pop(lst.index(val))) move_up('list8', mList)
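The pop/index/insert pattern from the answers above can be sketched as a small self-contained helper, so the three steps can't drift apart:

```python
# A sketch of the pop/index/insert pattern from the answers above,
# wrapped in a helper. Moves the first occurrence of `value` to index 0,
# in place, preserving the relative order of the other items.
def move_to_front(lst, value):
    lst.insert(0, lst.pop(lst.index(value)))

m_list = ["list1", "list2", "list8", "list99"]
move_to_front(m_list, "list8")
print(m_list)  # ['list8', 'list1', 'list2', 'list99']
```

Like list.remove, list.index raises ValueError if the value is absent, so a missing entry fails loudly rather than silently corrupting the list.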
clean method to get an entry in a list and set it as the first entry
So I have a list mList = ['list1', 'list2', 'list8', 'list99'] and I want to choose a value in that list, say 'list8', and have it become the first entry in the list: ['list8', 'list1', 'list2', 'list99'] How do I reorder just this one entry? All I can think of at the moment is: -get the index -remove that entry -insert(0, entry) What is a clean way to do this? cheers..
[ "Your approach is reasonable, but I would use remove directly on the value rather than first finding the index and then removing:\nmList.remove('list2')\nmList.insert(0, 'list2')\n\nNote that these operations are inefficient on a list. It is more efficient to append to the end of the list than insert at the beginni...
[ 2, 2, 0, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0003680379_list_python.txt
Q: Does Python have any for loop equivalent (not foreach) Python's iterators are great and all, but sometimes I really do want a C-style for loop - not a foreach loop. For example, I have a start date and an end date and I want to do something for every day in that range. I can do this with a while loop, of course: current = start while current <= finish: do_stuff(current) current += timedelta(1) This works, but it's 3 lines instead of 1 (in C or C-based languages) and I often find myself forgetting to write the incrementing line, especially if the loop body is quite complex. Is there a more elegant and less error-prone way of doing this in Python? A: The elegant and Pythonic way to do it is to encapsulate the idea of a range of dates in its own generator, then use that generator in your code: import datetime def daterange(start, end, delta): """ Just like `range`, but for dates! """ current = start while current < end: yield current current += delta start = datetime.datetime.now() end = start + datetime.timedelta(days=20) for d in daterange(start, end, datetime.timedelta(days=1)): print d prints: 2009-12-22 20:12:41.245000 2009-12-23 20:12:41.245000 2009-12-24 20:12:41.245000 2009-12-25 20:12:41.245000 2009-12-26 20:12:41.245000 2009-12-27 20:12:41.245000 2009-12-28 20:12:41.245000 2009-12-29 20:12:41.245000 2009-12-30 20:12:41.245000 2009-12-31 20:12:41.245000 2010-01-01 20:12:41.245000 2010-01-02 20:12:41.245000 2010-01-03 20:12:41.245000 2010-01-04 20:12:41.245000 2010-01-05 20:12:41.245000 2010-01-06 20:12:41.245000 2010-01-07 20:12:41.245000 2010-01-08 20:12:41.245000 2010-01-09 20:12:41.245000 2010-01-10 20:12:41.245000 This is similar to the answer about range, except that the built-in range won't work with datetimes, so we have to create our own, but at least we can do it just once in an encapsulated way. 
A: Doing it in a compact way it's not easy in Python, as one of the basic concepts behind the language is not being able to make assignments on comparisons. For something complex, like a date, I think that the answer of Ned is great, but for easier cases, I found very useful the itertools.count() function, which return consecutive numbers. >>> import itertools >>> begin = 10 >>> end = 15 >>> for i in itertools.count(begin): ... print 'counting ', i ... if i > end: ... break ... counting 10 counting 11 counting 12 counting 13 counting 14 counting 15 counting 16 I found it less error-prone, as it's easy, as you said, to forget the 'current += 1'. To me it seems more natural to make an infinite loop and then check for an end condition. A: This will work in a pinch: def cfor(start, test_func, cycle_func): """A generator function that emulates the most common case of the C for loop construct, where a variable is assigned a value at the begining, then on each next cycle updated in some way, and exited when a condition depending on that variable evaluates to false. This function yields what the value would be at each iteration of the for loop. Inputs: start: the initial yielded value test_func: called on the previous yielded value; if false, the the generator raises StopIteration and the loop exits. cycle_func: called on the previous yielded value, retuns the next yielded value Yields: var: the value of the loop variable An example: for x in cfor(0.0, lambda x: x <= 3.0, lambda x: x + 1.0): print x # Obviously, print(x) for Python 3 prints out 0.0 1.0 2.0 3.0 """ var = start while test_func(var): yield var var = cycle_func(var)
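The itertools.count pattern above can be made fully declarative with takewhile, which keeps the end condition out of the loop body, the same way the daterange generator does for dates. A brief sketch:

```python
# A sketch combining itertools.count with takewhile: the bounded
# counting loop from the answer above, without a manual break.
from itertools import count, takewhile

begin, end = 10, 15
values = list(takewhile(lambda i: i <= end, count(begin)))
print(values)  # [10, 11, 12, 13, 14, 15]
```

For plain integers, range(begin, end + 1) is of course simpler; the count/takewhile form pays off when the step or the stopping test is something range can't express.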
Does Python have any for loop equivalent (not foreach)
Python's iterators are great and all, but sometimes I really do want a C-style for loop - not a foreach loop. For example, I have a start date and an end date and I want to do something for every day in that range. I can do this with a while loop, of course: current = start while current <= finish: do_stuff(current) current += timedelta(1) This works, but it's 3 lines instead of 1 (in C or C-based languages) and I often find myself forgetting to write the incrementing line, especially if the loop body is quite complex. Is there a more elegant and less error-prone way of doing this in Python?
[ "The elegant and Pythonic way to do it is to encapsulate the idea of a range of dates in its own generator, then use that generator in your code:\nimport datetime\n\ndef daterange(start, end, delta):\n \"\"\" Just like `range`, but for dates! \"\"\"\n current = start\n while current < end:\n yield c...
[ 29, 2, 1 ]
[ "For the sake of iterating only, you should actually use xrange over range, since xrange will simply return an iterator, whereas range will create an actual list object containing the whole integer range from first to last-1 (which is obviously less efficient when all you want is a simple for-loop):\nfor i in xrang...
[ -2 ]
[ "for_loop", "loops", "python" ]
stackoverflow_0001950098_for_loop_loops_python.txt
Q: Lowest common multiple for all pairs in a list I have some code that calculates the lowest common multiple for a list of numbers. I would like to modify this code to return a list of values that represents the lowest common multiple for each pair in my number list. def lcm(numbers): return reduce(__lcm, numbers) def __lcm(a, b): return ( a * b ) / __gcd(a, b) def __gcd(a, b): a = int(a) b = int(b) while b: a,b = b,a%b return a If the input is [3, 5, 10] the output would be [lcm(5,10)=10, lcm(3,5)=15, lcm(3,10)=30] (sorting not required). I feel like there is some elegant way of calculating this list of lowest common multiples but I'm not able to grasp it without some example. A: What you have looks good. I'd only change how you produce the answer: def lcm(numbers): return map(__lcm, combinations( numbers, 2 ) ) where I'm using combinations from itertools. A: Given your existing functions (with __gcd() edited to return a, rather than none): from itertools import combinations inlist = [3, 5, 10] print [lcm(pair) for pair in combinations(inlist, 2)]
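The combinations-based answer above can be written as a self-contained sketch on a modern Python, using math.gcd (in the standard library since Python 3.5) instead of the hand-written Euclid loop:

```python
# A self-contained sketch of the combinations-based answer above,
# computing the LCM of every pair in the input list.
from itertools import combinations
from math import gcd

def lcm_pair(pair):
    a, b = pair
    return a * b // gcd(a, b)  # integer division keeps the result an int

numbers = [3, 5, 10]
pairwise = [lcm_pair(p) for p in combinations(numbers, 2)]
print(pairwise)  # [15, 30, 10]
```

combinations yields the pairs in input order, so the result corresponds to lcm(3,5), lcm(3,10), lcm(5,10); sort it afterwards if ordering matters.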
Lowest common multiple for all pairs in a list
I have some code that calculates the lowest common multiple for a list of numbers. I would like to modify this code to return a list of values that represents the lowest common multiple for each pair in my number list. def lcm(numbers): return reduce(__lcm, numbers) def __lcm(a, b): return ( a * b ) / __gcd(a, b) def __gcd(a, b): a = int(a) b = int(b) while b: a,b = b,a%b return a If the input is [3, 5, 10] the output would be [lcm(5,10)=10, lcm(3,5)=15, lcm(3,10)=30] (sorting not required). I feel like there is some elegant way of calculating this list of lowest common multiples but I'm not able to grasp it without some example.
[ "What you have looks good. I'd only change how you produce the answer:\ndef lcm(numbers):\n return map(__lcm, combinations( numbers, 2 ) )\n\nwhere I'm using combinations from itertools.\n", "Given your existing functions (with __gcd() edited to return a, rather than none):\nfrom itertools import combinations...
[ 4, 3 ]
[]
[]
[ "algorithm", "python" ]
stackoverflow_0003681706_algorithm_python.txt
Q: How can I ensure that the application window is always on top? I have a simple Python script that runs in a console window. How can I ensure that the console window is always on top and if possible resize it? A: Using Mark's answer I arrived at this: import win32gui import win32con hwnd = win32gui.GetForegroundWindow() win32gui.SetWindowPos(hwnd,win32con.HWND_TOPMOST,100,100,200,200,0) A: If you are creating your own window, you can use Tkinter to create an "always on top" window like so: from Tkinter import * root = Tk() root.wm_attributes("-topmost", 1) root.mainloop() And then put whatever you want to have happen within the main loop. If you are talking about the command prompt window, then you will have to use some Windows-specific utilities to keep that window on top. You can try this script for Autohotkey. A: To do this with the cmd window, you'll probably have to invoke a lot of win32 calls. Enumerate all the windows using win32gui.EnumWindows to get the window handles Find the "window title" that matches how you run your program. For example, double-clicking on a .py file on my system the window title is "C:\Python26\python.exe". Running it on a command line, it is called c:\Windows\system32\cmd.exe - c:\python26\python.exe test.py Using the appropriate title get the cmd window handle. Using win32gui.SetWindowPos make your window a "top-most" window, etc... import win32gui, win32process, win32con import os windowList = [] win32gui.EnumWindows(lambda hwnd, windowList: windowList.append((win32gui.GetWindowText(hwnd),hwnd)), windowList) cmdWindow = [i for i in windowList if "c:\python26\python.exe" in i[0].lower()] win32gui.SetWindowPos(cmdWindow[0][1],win32con.HWND_TOPMOST,0,0,100,100,0) #100,100 is the size of the window
How can I ensure that the application window is always on top?
I have a simple Python script that runs in a console window. How can I ensure that the console window is always on top and if possible resize it?
[ "Using Mark's answer I arrived at this:\nimport win32gui\nimport win32con\n\nhwnd = win32gui.GetForegroundWindow()\nwin32gui.SetWindowPos(hwnd,win32con.HWND_TOPMOST,100,100,200,200,0)\n\n", "If you are creating your own window, you can use Tkinter to create an \"always on top\" window like so:\nfrom Tkinter impor...
[ 6, 2, 2 ]
[]
[]
[ "python", "windows" ]
stackoverflow_0003678966_python_windows.txt
Q: How do you call a private module function from inside a class? I have a module that looks something like this: def __myFunc(): ... class MyClass(object): def __init__(self): self.myVar = __myFunc() and I get the error: NameError: global name '_MyClass__myFunc' is not defined How can I call this function from inside the class? edit: Since posting this, I've discovered I can avoid the automatic mangling by using a single underscore instead of double underscores. I was using two as per "Dive Into Python", which only states a double underscore denotes private functions. A: That is because Python's compiler replaces method calls (and attribute accesses) inside classes if the name begins with two underscores. Seems like this also applies to functions. A call to a method self.__X would be replaced by self._ClassName__X, for example. This makes it possible to have pseudo-private attributes and methods. There is absolutely no reason to use two underscores for functions inside the module. Programmers often follow the convention of putting one underscore in front of the function name if the function shouldn't be called from outside. Using two underscores is only necessary for attributes/methods in classes if you don't want them to be overwritten by subclasses, for example. But there are few cases in which that is useful.
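The mangling behaviour described in the answer above can be demonstrated in a few lines: a single leading underscore is a naming convention only and is not rewritten, while a double-underscore name used inside a class body is compiled to the _ClassName__name form.

```python
# A single leading underscore marks a function as module-private by
# convention only; no mangling applies, so it resolves normally from
# inside a class.
def _my_func():
    return 42

class MyClass:
    def __init__(self):
        self.my_var = _my_func()  # ordinary name lookup

# Two leading underscores inside a class body ARE rewritten:
# self.__secret below compiles to self._Demo__secret.
class Demo:
    def __init__(self):
        self.__secret = "hi"

    def get(self):
        return self.__secret

obj = MyClass()
d = Demo()
print(obj.my_var, d.get(), d._Demo__secret)  # 42 hi hi
```

The mangled name is the only one that actually exists on the instance, which is exactly why the module-level __myFunc in the question was looked up as _MyClass__myFunc and failed.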
How do you call a private module function from inside a class?
I have a module that looks something like this: def __myFunc(): ... class MyClass(object): def __init__(self): self.myVar = __myFunc() and I get the error: NameError: global name '_MyClass__myFunc' is not defined How can I call this function from inside the class? edit: Since posting this, I've discovered I can avoid the automatic mangling by using a single underscore instead of double underscores. I was using two as per "Dive Into Python", which only states a double underscore denotes private functions.
[ "That is because Python's compiler replaces method calls (and attribute accesses) inside classes if the name begins with two underscores. Seems like this also applies to functions. A call to a method self.__X would be replaced by self._ClassName__X, for example. This makes it possible to have pseudo-private attribu...
[ 5 ]
[]
[]
[ "python" ]
stackoverflow_0003681738_python.txt
Q: Django South removes foreign key REFERENCES from SQLite3 schema. Why? Is it a problem? When using syncdb the following schema is created: CREATE TABLE "MyApp_supervisor" ( "id" integer NOT NULL PRIMARY KEY, "supervisor_id" integer NOT NULL REFERENCES "MyApp_employee" ("id"), "section_id" integer NOT NULL REFERENCES "MyApp_section" ("id") ); When using migrate, it is changed to: CREATE TABLE "MyApp_supervisor" ( "id" integer NOT NULL PRIMARY KEY, "supervisor_id" integer NOT NULL, "section_id" integer NOT NULL ); Why does South do that? I haven't noticed a functional problem yet, but I'm wary of ignoring this... Can someone shed some light on what is happening here? A: It's probably because foreign key support was introduced in SQLite only in version 3.6.19, as this page says: This document describes the support for SQL foreign key constraints introduced in SQLite version 3.6.19. It looks like version 3.6.19 was tagged on 14th Oct 2009, which is actually quite recent so it's probably not deployed everywhere yet. I guess the choice was made not to declare foreign keys because they're not widely supported yet, depending on the version of SQLite used. (This being said, the foreign key syntax was supported before, but simply ignored.)
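The behaviour the answer refers to can be shown with the standard-library sqlite3 module, assuming a modern SQLite (>= 3.6.19): REFERENCES clauses are parsed but not enforced unless the foreign_keys pragma is switched on per connection.

```python
# Sketch: SQLite accepts REFERENCES clauses but ignores them by default;
# enforcement only starts after PRAGMA foreign_keys = ON (and only on
# SQLite >= 3.6.19). Autocommit mode so the pragma takes effect.
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE supervisor ("
    "  id INTEGER PRIMARY KEY,"
    "  employee_id INTEGER NOT NULL REFERENCES employee (id))"
)

# Enforcement is off by default, so a dangling reference is accepted.
conn.execute("INSERT INTO supervisor (id, employee_id) VALUES (1, 999)")

conn.execute("PRAGMA foreign_keys = ON")
try:
    conn.execute("INSERT INTO supervisor (id, employee_id) VALUES (2, 999)")
    enforced = False
except sqlite3.IntegrityError:
    enforced = True
print(enforced)  # True
```

So dropping the REFERENCES clauses loses no enforcement on default-configured SQLite, which is consistent with no functional problem being noticed in the question.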
Django South removes foreign key REFERENCES from SQLite3 schema. Why? Is it a problem?
When using syncdb the following schema is created: CREATE TABLE "MyApp_supervisor" ( "id" integer NOT NULL PRIMARY KEY, "supervisor_id" integer NOT NULL REFERENCES "MyApp_employee" ("id"), "section_id" integer NOT NULL REFERENCES "MyApp_section" ("id") ); When using migrate, it is changed to: CREATE TABLE "MyApp_supervisor" ( "id" integer NOT NULL PRIMARY KEY, "supervisor_id" integer NOT NULL, "section_id" integer NOT NULL ); Why does South do that? I haven't noticed a functional problem yet, but I'm wary of ignoring this... Can someone shed some light on what is happening here?
[ "It's probably because foreign key support was introduced in SQLite only in version 3.6.19, as this page says:\n\nThis document describes the support\n for SQL foreign key constraints\n introduced in SQLite version 3.6.19.\n\nIt looks like version 3.6.19 was tagged on 14th Oct 2009, which is actually quite recent...
[ 0 ]
[]
[]
[ "django", "django_south", "python", "sqlite" ]
stackoverflow_0003681573_django_django_south_python_sqlite.txt
Q: Downtime when reloading mod_wsgi daemon? I'm running a Django application on Apache with mod_wsgi. Will there be any downtime during an upgrade? Mod_wsgi is running in daemon mode, so I can reload my code by touching the .wsgi script file, as described in the "ReloadingSourceCode" document: http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode. Presumably, that reload requires some non-zero amount of time. What happens if a request comes in during the reload? Will Apache queue the request and then complete it once the wsgi daemon is ready? The documentation includes the following statement: So, if you are using Django in daemon mode and needed to change your 'settings.py' file, once you have made the required change, also touch the script file containing the WSGI application entry point. Having done that, on the next request the process will be restarted and your Django application reloaded. To me, that suggests that Apache will gracefully handle every request, but I thought I would ask to be sure. My app isn't critical (a little downtime wouldn't be disastrous) so the question is mostly academic. Thank you. A: In daemon mode there is no concept of a graceful restart when WSGI script file is touched to force a download. That is, unlike Apache itself, which will start new Apache server child processes while waiting for old processes to finish up with current requests, for mod_wsgi daemon processes, the existing process must exit before a new one starts up. The consequences of this are that mod_wsgi can't wait indefinitely for current requests to complete. If it did, then there is a risk that if all daemon processes are tied up waiting for current requests to finish, that clients would see a noticeable delay in being handled. At the other end of the scale however, the daemon process can't be immediately killed as that would cause current requests to be interrupted. A middle ground therefore exists. 
The daemon process will wait for requests to finish before exiting, but if they haven't completed within the shutdown period, then the daemon process will be forcibly quit and the active requests will be interrupted. The period of this shutdown timeout defaults to 5 seconds. It can be overridden using the shutdown-timeout option to WSGIDaemonProcess directive, but due consideration should be given to the effects of changing it. Thus, in respect of this specific issue, if you have long running requests still active when the first request comes in after you touched the WSGI script file, there is the risk that the active long requests will be interrupted. The next notable thing you may see is that even if there are no long running requests and processes shutdown promptly, then it is still necessary to load up the WSGI application again within the new process. The time this takes will be seen as a delay in handling the request. How big that delay is will depend on the framework and your application. The worst offender as far as time taken to start up that I know of is TurboGears. Django somewhat better and the best as far as quick start up times being lightweight micro frameworks such as Flask. Do note that any new requests which come in while these shutdown and startup delays occur should not be lost. This is because the HTTP listener socket has a certain depth and connections queue up in that waiting to be accepted. If the number of requests arriving is huge though and that queue fills up, then you will start to see connection refused errors in the browser. A: No, there will be no downtime. Requests using the old code will complete, and new requests will use the new code. There will be a small bit more load on the server as the new code loads but unless your application is colossal and your servers are already nearly overloaded this will be unnoticeable. 
This is like the apachectl graceful command for Apache as a whole, which tells it to start a new configuration without downtime.
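The shutdown-timeout option mentioned in the first answer is set on the WSGIDaemonProcess directive. A hypothetical configuration fragment, where the process group name, counts, paths, and the 10-second value are illustrative only:

```apache
# Hypothetical mod_wsgi daemon-mode fragment; names and values are
# examples, not a recommended configuration.
WSGIDaemonProcess myapp processes=2 threads=15 shutdown-timeout=10
WSGIProcessGroup myapp
WSGIScriptAlias / /srv/myapp/myapp.wsgi
```

Raising shutdown-timeout gives long-running requests more time to finish before the daemon process is forcibly quit, at the cost of delaying the reload.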
Downtime when reloading mod_wsgi daemon?
I'm running a Django application on Apache with mod_wsgi. Will there be any downtime during an upgrade? Mod_wsgi is running in daemon mode, so I can reload my code by touching the .wsgi script file, as described in the "ReloadingSourceCode" document: http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode. Presumably, that reload requires some non-zero amount of time. What happens if a request comes in during the reload? Will Apache queue the request and then complete it once the wsgi daemon is ready? The documentation includes the following statement: So, if you are using Django in daemon mode and needed to change your 'settings.py' file, once you have made the required change, also touch the script file containing the WSGI application entry point. Having done that, on the next request the process will be restarted and your Django application reloaded. To me, that suggests that Apache will gracefully handle every request, but I thought I would ask to be sure. My app isn't critical (a little downtime wouldn't be disastrous) so the question is mostly academic. Thank you.
[ "In daemon mode there is no concept of a graceful restart when WSGI script file is touched to force a download. That is, unlike Apache itself, which will start new Apache server child processes while waiting for old processes to finish up with current requests, for mod_wsgi daemon processes, the existing process mu...
[ 18, 1 ]
[]
[]
[ "apache", "django", "mod_wsgi", "python" ]
stackoverflow_0003679537_apache_django_mod_wsgi_python.txt
Q: How to get restructuredText to add a class to every html tag? I'm using Django's markup package to transform restructuredText into html. Is there a way to customize the HTML writer to add a class attribute to each <p> tag? I could use the class directive for each paragraph, but I'd like to automate this process. For example, I want this restructured text: hello ===== A paragraph of text. To be converted to this html. <h1>hello</h1> <p class="specialClass">A paragraph of text.</p> The reason I want to insert classes is because I'm using the hyphenator library which works by adding hyphens to all tags with a "hyphenate" class. I could add the hyphenate class to the container tag, but then all the children would inherit the hyphenate class. I could use javascript to dynamically add the class, but I thought there might be a simple way to do it with restructuredText. Thanks for the help, Joe A: Subclass the built-in html4css1 writer, using this as a reference.. from docutils.writers import html4css1 class MyHTMLWriter(html4css1.Writer): """ This docutils writer will use the MyHTMLTranslator class below. """ def __init__(self): html4css1.Writer.__init__(self) self.translator_class = MyHTMLTranslator class MyHTMLTranslator(html4css1.HTMLTranslator): def visit_paragraph(self, node): self.section_level += 1 self.body.append(self.starttag(node, 'p', CLASS='specialClass')) def depart_paragraph(self, node): self.section_level -= 1 self.body.append('</p>\n') Then use it like this: from docutils.core import publish_string print publish_string("*This* is the input text", writer=MyHTMLWriter()) A: You don't say why you want to add a class to every paragraph, but it might be easier to take a different approach. For example, if you are trying to style the paragraphs, you can use a different CSS technique to select all the paragraphs in the output: CSS: div.resttext p { /* all the styling you want... 
*/ } HTML: <div class='resttext'> <p>Blah</p> <p>Bloo</p> </div> Update: since you are trying to use hyphenator.js, I would suggest using its selectorfunction setting to select the elements differently: Hyphenator.config({ selectorfunction: function () { /* Use jQuery to find all the REST p tags. */ return $('div.resttext p'); } }); Hyphenator.run();
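If subclassing the docutils writer feels heavy, the same effect can be had by post-processing the HTML that docutils emits. A rough stdlib-only sketch (it ignores comments, self-closing quirks and boolean attributes, so treat it as illustration rather than a robust HTML rewriter):

```python
from html.parser import HTMLParser

class ClassInjector(HTMLParser):
    """Rebuild an HTML fragment, adding a CSS class to every <p> tag."""
    def __init__(self, cls):
        super().__init__()
        self.cls = cls
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            attrs = dict(attrs)
            # Append to any class already present rather than replacing it.
            attrs["class"] = (attrs.get("class", "") + " " + self.cls).strip()
            attrs = list(attrs.items())
        pieces = "".join(' %s="%s"' % (k, v) for k, v in attrs)
        self.out.append("<%s%s>" % (tag, pieces))

    def handle_endtag(self, tag):
        self.out.append("</%s>" % tag)

    def handle_data(self, data):
        self.out.append(data)

def add_paragraph_class(html, cls="specialClass"):
    parser = ClassInjector(cls)
    parser.feed(html)
    parser.close()
    return "".join(parser.out)

result = add_paragraph_class("<h1>hello</h1><p>A paragraph of text.</p>")
```

This reproduces the example from the question without touching docutils at all.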
How to get restructuredText to add a class to every html tag?
I'm using Django's markup package to transform restructuredText into html. Is there a way to customize the HTML writer to add a class attribute to each <p> tag? I could use the class directive for each paragraph, but I'd like to automate this process. For example, I want this restructured text: hello ===== A paragraph of text. To be converted to this html. <h1>hello</h1> <p class="specialClass">A paragraph of text.</p> The reason I want to insert classes is because I'm using the hyphenator library which works by adding hyphens to all tags with a "hyphenate" class. I could add the hyphenate class to the container tag, but then all the children would inherit the hyphenate class. I could use javascript to dynamically add the class, but I thought there might be a simple way to do it with restructuredText. Thanks for the help, Joe
[ "Subclass the built-in html4css1 writer, using this as a reference..\nfrom docutils.writers import html4css1\n\nclass MyHTMLWriter(html4css1.Writer):\n \"\"\"\n This docutils writer will use the MyHTMLTranslator class below.\n \"\"\"\n def __init__(self):\n html4css1.Writer.__init__(self)\n self.trans...
[ 5, 4 ]
[]
[]
[ "django", "python", "restructuredtext" ]
stackoverflow_0001837308_django_python_restructuredtext.txt
Q: Optional Arguments in Python What are the advantages of having Optional args in Python. Instead of overloading one function (or method) with args + optional args, wouldn't Polymorphism with Inheritance suffice? I am just trying to understand the burning reason to have this feature. or is it the case of being able to do one thing many ways? P.S: I can see that it makes sense to have it in functional programming, to avoid having to define many functions practically doing almost the same thing, but are there any other... A: Optional args have little to do with polymorphism (and don't even need you to have classes around!-) -- it's just (main use!) that often you have "rarely needed" arguments for choices that are generally made in a certain way, but it might be useful for the caller to set differently. For example, consider built-in open. Most often, you open text files rather than binary opens, you open them for reading rather than for writing, and you're happy with the default buffering -- so, you just open('thefile.txt') and live happily ever after. Being able to specify the way in which you want to open (binary, for overwrite, for append, ...) as the second (optional) argument, instead of its default 'r' value, is of course often useful. Once in a blue moon you want a file object with peculiar buffering options, and then having the buffering as the third (optional) argument (with a default value of course) pays big dividends... without it being in your way for the vast majority of the files you open! Inheritance and polymorphism would not really help one bit in getting the convenience of with open('blah.txt') as f: so concisely, while still allowing the same built-in function to serve many more use cases (e.g., when you need to open a file for binary append without any buffering... maybe once a year if you code a lot;-). 
And of course the convenience principles that apply to such built-in functions apply to the functions you write just as well!-) A: Optional arguments in Python serve multiple purposes, but most often they are a mechanism to provide defaults where there are sensible and infrequently varied values exist. E.g.: def open_http_connection(url, port=80, timeout=2): #... A subtle variation is when multiple behaviors for a method are required based on the arguments provided, often using arity (number of arguments) or keyword arguments. # Example of arity based optional arguments def do_something(*args): if not args: do_something1() elif len(args)==1: do_something2(args[0]) else: do_something3(*args) It may be helpful to study how variable positional and keyword arguments are specified in python: here. These methods for specifying optional and variable numbers of arguments are not as semantically complex as method overloading in statically typed object-oriented languages or various forms of multiple-dispatch as found in functional programming languages. Python uses dynamic typing (sometimes called duck typing), so these forms of dispatch are not idiomatic or terribly useful. (This is not often seen as a limitation or disadvantage, though Python is certainly flexible enough to support multi-methods if one must have them.)
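Both idioms from the answers above can be exercised directly; the function names are illustrative only:

```python
def open_http_connection(url, port=80, timeout=2):
    """Callers usually accept the defaults; rare cases override them."""
    return (url, port, timeout)

# Most calls stay short...
common = open_http_connection("example.com")
# ...but the occasional caller tweaks just what it needs, by keyword.
tuned = open_http_connection("example.com", timeout=10)

def describe(*args):
    # Arity-based dispatch, mirroring the do_something example above.
    if not args:
        return "no arguments"
    if len(args) == 1:
        return "one argument: %r" % args[0]
    return "%d arguments" % len(args)
```
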
Optional Arguments in Python
What are the advantages of having Optional args in Python. Instead of overloading one function (or method) with args + optional args, wouldn't Polymorphism with Inheritance suffice? I am just trying to understand the burning reason to have this feature. or is it the case of being able to do one thing many ways? P.S: I can see that it makes sense to have it in functional programming, to avoid having to define many functions practically doing almost the same thing, but are there any other...
[ "Optional args have little to do with polymorphism (and don't even need you to have classes around!-) -- it's just (main use!) that often you have \"rarely needed\" arguments for choices that are generally made in a certain way, but it might be useful for the caller to set differently.\nFor example, consider built-...
[ 6, 4 ]
[]
[]
[ "python" ]
stackoverflow_0003681913_python.txt
Q: Does python manage.py runserver use paster as the server? A little confused, does python manage.py runserver use the paster web server or is this a django specific server? A: It's a custom WSGI server built on BaseHTTPServer and adapted from wsgiref.
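Whatever the server (paster, runserver's wsgiref-derived server, mod_wsgi), they all drive the same WSGI callable. A minimal sketch, calling the application by hand the way any of those servers would:

```python
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    # The same callable interface that runserver, paster and mod_wsgi serve.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from WSGI"]

# Drive the app directly, without starting a real server.
environ = {}
setup_testing_defaults(environ)   # fills in a plausible CGI-style environ

captured = {}

def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = dict(headers)

body = b"".join(application(environ, start_response))
```
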
Does python manage.py runserver use paster as the server?
A little confused, does python manage.py runserver use the paster web server or is this a django specific server?
[ "It's a custom WSGI server built on BaseHTTPServer and adapted from wsgiref.\n" ]
[ 4 ]
[]
[]
[ "django", "python" ]
stackoverflow_0003682148_django_python.txt
Q: questions on python virtual environments I'm on a mac, and I know that any package I install goes to a specific folder in something like /Library/.... Now when I create a virtual environment, will it create a folder structure to store any libs underneath the virtual environment to isolate things? e.g. /home/user/mypythonvirtenv /home/user/mypythonvirtenv/python2.6/.... Does it re-map the python environmental variables temporarily also? A: Yes. Virtualenv will make you a directory tree that looks like: mypythonvirtualenv/bin mypythonvirtualenv/include mypythonvirtualenv/lib mypythonvirtualenv/lib/python2.6 mypythonvirtualenv/lib/python2.6/site-packages When you want to use it, you source the activate script: euclid:~ seth$ which python /opt/local/bin/python euclid:~ seth$ source /Users/seth/mypythonvirtualenv/bin/activate (mypythonvirtualenv)euclid:~ seth$ which python /Users/seth/mypythonvirtualenv/bin/python Other python related stuff (such as easy_install) will also work the "right" way.
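The same directory tree can be inspected programmatically. A sketch using the stdlib `venv` module (Python 3's successor to the virtualenv tool discussed above) to build a throwaway environment and confirm the isolated layout:

```python
import os
import tempfile
import venv

# Build a throwaway environment; with_pip=False keeps creation fast.
target = tempfile.mkdtemp(prefix="mypythonvirtenv-")
venv.EnvBuilder(with_pip=False).create(target)

# POSIX environments use bin/, Windows uses Scripts/.
bin_dir = "Scripts" if os.name == "nt" else "bin"
has_bin = os.path.isdir(os.path.join(target, bin_dir))
# pyvenv.cfg is what makes the interpreter remap sys.prefix on startup.
has_cfg = os.path.isfile(os.path.join(target, "pyvenv.cfg"))
```

The `pyvenv.cfg` file is how the remapping happens: a Python started from the environment's `bin/python` sees the environment's own `site-packages` first.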
questions on python virtual environments
I'm on a mac, and I know that any package I install goes to a specific folder in something like /Library/.... Now when I create a virtual environment, will it create a folder structure to store any libs underneath the virtual environment to isolate things? e.g. /home/user/mypythonvirtenv /home/user/mypythonvirtenv/python2.6/.... Does it re-map the python environmental variables temporarily also?
[ "Yes. Virtualenv will make you a directory tree that looks like: \nmypythonvirtualenv/bin\nmypythonvirtualenv/include\nmypythonvirtualenv/lib\nmypythonvirtualenv/lib/python2.6\nmypythonvirtualenv/lib/python2.6/site-packages\n\nWhen you want to use it, you source the activate script: \neuclid:~ seth$ which python\n/...
[ 2 ]
[]
[]
[ "python", "virtualenv" ]
stackoverflow_0003682050_python_virtualenv.txt
Q: Intermittent DownloadError Application Error 2 on Google App Engine We have two applications that are both running on Google App Engine. App1 makes requests to app2 as an authenticated user. The authentication works by requesting an authentication token from Google ClientLogin that is exchanged for a cookie. The cookie is then used for subsequent requests (as described here). App1 runs the following code: class AuthConnection: def __init__(self): self.cookie_jar = cookielib.CookieJar() self.opener = urllib2.OpenerDirector() self.opener.add_handler(urllib2.ProxyHandler()) self.opener.add_handler(urllib2.UnknownHandler()) self.opener.add_handler(urllib2.HTTPHandler()) self.opener.add_handler(urllib2.HTTPRedirectHandler()) self.opener.add_handler(urllib2.HTTPDefaultErrorHandler()) self.opener.add_handler(urllib2.HTTPSHandler()) self.opener.add_handler(urllib2.HTTPErrorProcessor()) self.opener.add_handler(urllib2.HTTPCookieProcessor(self.cookie_jar)) self.headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; ' +\ 'Windows NT 6.1; en-US; rv:1.9.1.2) ' +\ 'Gecko/20090729 Firefox/3.5.2 ' +\ '(.NET CLR 3.5.30729)' } def fetch(self, url, method, payload=None): self.__updateJar(url) request = urllib2.Request(url) request.get_method = lambda: method for key, value in self.headers.iteritems(): request.add_header(key, value) response = self.opener.open(request) return response.read() def __updateJar(self, url): cache = memcache.Client() cookie = cache.get('auth_cookie') if cookie: self.cookie_jar.set_cookie(cookie) else: cookie = self.__retrieveCookie(url=url) cache.set('auth_cookie', cookie, 5000) def __getCookie(self, url): auth_url = 'https://www.google.com/accounts/ClientLogin' auth_data = urllib.urlencode({'Email': USER_NAME, 'Passwd': PASSPHRASE, 'service': 'ah', 'source': 'app1', 'accountType': 'HOSTED_OR_GOOGLE' }) auth_request = urllib2.Request(auth_url, data=auth_data) auth_response_body = self.opener.open(auth_request).read() auth_response_dict = dict(x.split('=') 
for x in auth_response_body.split('\n') if x) cookie_args = {} cookie_args['continue'] = url cookie_args['auth'] = auth_response_dict['Auth'] cookie_url = 'https://%s/_ah/login?%s' %\ ('app2.appspot.com', (urllib.urlencode(cookie_args))) cookie_request = urllib2.Request(cookie_url) for key, value in self.headers.iteritems(): cookie_request.add_header(key, value) try: self.opener.open(cookie_request) except: pass for cookie in self.cookie_jar: if cookie.domain == 'app2domain': return cookie For 10-30% of the requests a DownloadError is raised: Error fetching https://app2/Resource Traceback (most recent call last): File "/base/data/home/apps/app1/5.344034030246386521/source/main/connection/authenticate.py", line 112, in fetch response = self.opener.open(request) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 381, in open response = self._open(req, data) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 399, in _open '_open', req) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 360, in _call_chain result = func(*args) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 1115, in https_open return self.do_open(httplib.HTTPSConnection, req) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 1080, in do_open r = h.getresponse() File "/base/python_runtime/python_dist/lib/python2.5/httplib.py", line 197, in getresponse self._allow_truncated, self._follow_redirects) File "/base/data/home/apps/app1/5.344034030246386521/source/main/connection/monkeypatch_urlfetch_deadline.py", line 18, in new_fetch follow_redirects, deadline, *args, **kwargs) File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 241, in fetch return rpc.get_result() File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 501, in get_result return self.__get_result_hook(self) File 
"/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 325, in _get_fetch_result raise DownloadError(str(err)) DownloadError: ApplicationError: 2 The request logs for app2 (the "server") seem fine, as expected (according to the docs DownloadError is only raised if there was no valid HTTP response). Why is the exception raised? A: See this: http://bitbucket.org/guilin/gae-rproxy/src/tip/gae_rproxy/niceurllib.py urllib and urllib2 handle an HTTP 302 code by default and automatically redirect to wherever the server points them, but the redirected request does not carry the cookie the server just set. For example: urllib2 requests //server/login; the server responds 302 to //server/profile with set-cookie: session-id:xxxx; urllib2 then requests //server/profile; the server responds with a not-logged-in or 500 error because no session-id is found; urllib2 throws an error. So you never get the chance to set the cookie. self.opener.add_handler(urllib2.HTTPRedirectHandler()) I think you should remove this line and add your own HTTPRedirectHandler which neither throws an error nor automatically redirects, but just returns the HTTP code and headers, so you have the chance to set the cookie.
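On Python 3, where urllib2's machinery moved to urllib.request, a redirect handler of the kind suggested above (one that neither raises nor follows) can be sketched like this; returning the response object from the error hook hands the raw 3xx back to the caller:

```python
import urllib.request  # urllib2's handler classes live here on Python 3

class NoRedirectHandler(urllib.request.HTTPRedirectHandler):
    """Hand 3xx responses back to the caller instead of following them,
    so any Set-Cookie header on the redirect can be read first."""
    def http_error_302(self, req, fp, code, msg, headers):
        # Returning the response suppresses both the redirect and the
        # HTTPError that urllib would otherwise raise for a 3xx status.
        return fp
    http_error_301 = http_error_303 = http_error_307 = http_error_302

# The handler can be exercised without any network traffic:
handler = NoRedirectHandler()
outcome = handler.http_error_302("req", "raw-302-response", 302, "Found", {})
```

In practice the handler would be passed to `urllib.request.build_opener` alongside an `HTTPCookieProcessor`, so the cookie jar sees the 302's Set-Cookie header before any further request is made.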
Intermittent DownloadError Application Error 2 on Google App Engine
We have two applications that are both running on Google App Engine. App1 makes requests to app2 as an authenticated user. The authentication works by requesting an authentication token from Google ClientLogin that is exchanged for a cookie. The cookie is then used for subsequent requests (as described here). App1 runs the following code: class AuthConnection: def __init__(self): self.cookie_jar = cookielib.CookieJar() self.opener = urllib2.OpenerDirector() self.opener.add_handler(urllib2.ProxyHandler()) self.opener.add_handler(urllib2.UnknownHandler()) self.opener.add_handler(urllib2.HTTPHandler()) self.opener.add_handler(urllib2.HTTPRedirectHandler()) self.opener.add_handler(urllib2.HTTPDefaultErrorHandler()) self.opener.add_handler(urllib2.HTTPSHandler()) self.opener.add_handler(urllib2.HTTPErrorProcessor()) self.opener.add_handler(urllib2.HTTPCookieProcessor(self.cookie_jar)) self.headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; ' +\ 'Windows NT 6.1; en-US; rv:1.9.1.2) ' +\ 'Gecko/20090729 Firefox/3.5.2 ' +\ '(.NET CLR 3.5.30729)' } def fetch(self, url, method, payload=None): self.__updateJar(url) request = urllib2.Request(url) request.get_method = lambda: method for key, value in self.headers.iteritems(): request.add_header(key, value) response = self.opener.open(request) return response.read() def __updateJar(self, url): cache = memcache.Client() cookie = cache.get('auth_cookie') if cookie: self.cookie_jar.set_cookie(cookie) else: cookie = self.__retrieveCookie(url=url) cache.set('auth_cookie', cookie, 5000) def __getCookie(self, url): auth_url = 'https://www.google.com/accounts/ClientLogin' auth_data = urllib.urlencode({'Email': USER_NAME, 'Passwd': PASSPHRASE, 'service': 'ah', 'source': 'app1', 'accountType': 'HOSTED_OR_GOOGLE' }) auth_request = urllib2.Request(auth_url, data=auth_data) auth_response_body = self.opener.open(auth_request).read() auth_response_dict = dict(x.split('=') for x in auth_response_body.split('\n') if x) cookie_args = {} 
cookie_args['continue'] = url cookie_args['auth'] = auth_response_dict['Auth'] cookie_url = 'https://%s/_ah/login?%s' %\ ('app2.appspot.com', (urllib.urlencode(cookie_args))) cookie_request = urllib2.Request(cookie_url) for key, value in self.headers.iteritems(): cookie_request.add_header(key, value) try: self.opener.open(cookie_request) except: pass for cookie in self.cookie_jar: if cookie.domain == 'app2domain': return cookie For 10-30% of the requests a DownloadError is raised: Error fetching https://app2/Resource Traceback (most recent call last): File "/base/data/home/apps/app1/5.344034030246386521/source/main/connection/authenticate.py", line 112, in fetch response = self.opener.open(request) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 381, in open response = self._open(req, data) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 399, in _open '_open', req) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 360, in _call_chain result = func(*args) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 1115, in https_open return self.do_open(httplib.HTTPSConnection, req) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 1080, in do_open r = h.getresponse() File "/base/python_runtime/python_dist/lib/python2.5/httplib.py", line 197, in getresponse self._allow_truncated, self._follow_redirects) File "/base/data/home/apps/app1/5.344034030246386521/source/main/connection/monkeypatch_urlfetch_deadline.py", line 18, in new_fetch follow_redirects, deadline, *args, **kwargs) File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 241, in fetch return rpc.get_result() File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 501, in get_result return self.__get_result_hook(self) File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 325, in _get_fetch_result 
raise DownloadError(str(err)) DownloadError: ApplicationError: 2 The request logs for app2 (the "server") seem fine, as expected (according to the docs DownloadError is only raised if there was no valid HTTP response). Why is the exception raised?
[ "see this:\nhttp://bitbucket.org/guilin/gae-rproxy/src/tip/gae_rproxy/niceurllib.py\nbecause of urllib and urllib2 default to handle http 302 code, and automatically redirect to what the server told it. But when redirect it does not contains the cookie which the server told it.\nfor example:\n\nurllib2 request //se...
[ 3 ]
[]
[]
[ "google_app_engine", "python", "urllib2" ]
stackoverflow_0003478162_google_app_engine_python_urllib2.txt
Q: python csv reader - convert string to int on the for line when iterating I'm interested in not having to write map the int function to the tuple of strings where I currently have it. See the last part of my example: import os import csv filepath = os.path.normpath("c:/temp/test.csv") individualFile = open(filepath,'rb') dialect = csv.Sniffer().sniff(individualFile.read(1000)) individualFile.seek(0) reader = csv.reader(individualFile,dialect) names = reader.next() print names def buildTree(arityList): if arityList == []: return 0 else: tree = {} for i in xrange(arityList[0][0],arityList[0][1]+1): tree[i] = buildTree(arityList[1:]) return tree census = buildTree([(1,12),(1,6),(1,4),(1,2),(0,85),(0,14)]) for m, f, s, g, a, c, t in reader: try: m,f,s,g,a,c,t = map(int,(m,f,s,g,a,c,t)) census[m][f][s][g][a][c] += t except: print "error" print m, f, s, g, a, c, t break What I want to do is something like this: for m, f, s, g, a, c, t in map(int,reader): try: census[m][f][s][g][a][c] += t except: print "error" print m, f, s, g, a, c, t break I try this and I get the following error: TypeError: int() argument must be a string or a number, not 'list' I'm having trouble understand this error message. I thought reader was an iterable object - not a list. It returns a list for each iteration, but it itself is not a list, correct? I guess that is more of a side question. What I really want to know is if there is a way to do what I am trying to do. Sorry for the code that doesn't really relate, but I thought I would include my whole example. Feel free to tear it to bits! :) I'm wondering if it might be better to just have one dict where the key is a tuple instead of this nested dictionary stuff, but even so, I'm still interested in figuring out my question. 
A: what you want is something like: def int_wrapper(reader): for v in reader: yield map(int, v) Your code would then look like: reader = csv.reader(individualFile,dialect) reader = int_wrapper(reader) # all that other stuff for m, f, s, g, a, c, t in reader: try: census[m][f][s][g][a][c] += t except: print "error" print m, f, s, g, a, c, t break This just uses a generator function to wrap the reader and convert the input to integers. The origin of the TypeError is that reader is an iterator which yields lists of values, so when you apply map(int, ...) to it, int receives whole lists rather than individual strings. This is different from applying map to a list of values, which is what you do when you write it out the long way. For illustration, another way to do it is for m, f, s, g, a, c, t in (map(int, v) for v in reader): # code This just uses an in situ generator expression instead of defining a function. It's a matter of taste.
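The generator-expression version translates directly to modern Python; a small self-contained run (on Python 3, `reader.next()` becomes `next(reader)` and `map()` returns an iterator, so it's wrapped in `list`):

```python
import csv
import io

raw = "m,f,s\n1,2,3\n4,5,6\n"
reader = csv.reader(io.StringIO(raw))
next(reader)  # skip the header row, like reader.next() in the question

# Convert each row's strings to ints, as the wrapped-reader answer does.
rows = [list(map(int, row)) for row in reader]
```
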
python csv reader - convert string to int on the for line when iterating
I'm interested in not having to write map the int function to the tuple of strings where I currently have it. See the last part of my example: import os import csv filepath = os.path.normpath("c:/temp/test.csv") individualFile = open(filepath,'rb') dialect = csv.Sniffer().sniff(individualFile.read(1000)) individualFile.seek(0) reader = csv.reader(individualFile,dialect) names = reader.next() print names def buildTree(arityList): if arityList == []: return 0 else: tree = {} for i in xrange(arityList[0][0],arityList[0][1]+1): tree[i] = buildTree(arityList[1:]) return tree census = buildTree([(1,12),(1,6),(1,4),(1,2),(0,85),(0,14)]) for m, f, s, g, a, c, t in reader: try: m,f,s,g,a,c,t = map(int,(m,f,s,g,a,c,t)) census[m][f][s][g][a][c] += t except: print "error" print m, f, s, g, a, c, t break What I want to do is something like this: for m, f, s, g, a, c, t in map(int,reader): try: census[m][f][s][g][a][c] += t except: print "error" print m, f, s, g, a, c, t break I try this and I get the following error: TypeError: int() argument must be a string or a number, not 'list' I'm having trouble understand this error message. I thought reader was an iterable object - not a list. It returns a list for each iteration, but it itself is not a list, correct? I guess that is more of a side question. What I really want to know is if there is a way to do what I am trying to do. Sorry for the code that doesn't really relate, but I thought I would include my whole example. Feel free to tear it to bits! :) I'm wondering if it might be better to just have one dict where the key is a tuple instead of this nested dictionary stuff, but even so, I'm still interested in figuring out my question.
[ "what you want is something like:\ndef int_wrapper(reader):\n for v in reader:\n yield map(int, v)\n\nYour code would then look like:\nreader = csv.reader(individualFile,dialect)\nreader = int_wrapper(reader)\n\n# all that other stuff\n\nfor m, f, s, g, a, c, t in reader:\n try:\n census[m][f][s...
[ 8 ]
[]
[]
[ "python" ]
stackoverflow_0003682321_python.txt
Q: Trying to change the Pylons version my website is using but this causes a DistributionNotFound exception About a month ago I setup Pylons on my VPS in a virtual environment using the go-pylons.py script they provide. I've, since then, been working on my website and have it all up and running. It works great. Recently though I discovered that I created my virtual python environment using Python2.5. I now want to change this to Python2.7 I've got Python2.7 running on my server and it all works great. I then created another virtual environment for Pylons and have that up and running. When I try to modify my website's mod_wsgi dispatch file to use the new environment, I get a DistributionNotFound exception. What could be causing this? Here's my dispatch.wsgi file: import site import os # New Python virtual environment # site.addsitedir('/usr/local/pylonsenv/lib/python2.7/site-packages') # Old Python virtual environment site.addsitedir('/usr/local/pylons/lib/python2.5/site-packages') os.environ['PYTHON_EGG_CACHE'] = '/home/samsu/python/egg-cache' from paste.deploy import loadapp application = loadapp('config:/home/samsu/python/mywebsite/production.ini') When I change the addsitedir paths around and restart Apache, viewing the website throws the exception. As soon as I change it back, problem goes away. Why can't I change virtual environments? A: For starters, you must recompile/reinstall mod_wsgi against Python 2.7, you cannot just point it at a new virtual environment using a newer Python version. Likely that the older Python installation doesn't have new enough version of a package required by code installed into your Python 2.7 virtual environment.
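To see which environment actually satisfies a requirement, it helps to probe from the interpreter itself. A sketch using `importlib.metadata` (the modern replacement for the pkg_resources machinery whose `require()` raises the DistributionNotFound in the question; the package name below is deliberately fake):

```python
from importlib.metadata import PackageNotFoundError, version

def check_requirement(name):
    # Report the installed version of a distribution as seen by the
    # environment this interpreter is actually running from, or None
    # if it is missing (the pkg_resources-era equivalent raised
    # DistributionNotFound instead).
    try:
        return version(name)
    except PackageNotFoundError:
        return None

missing = check_requirement("surely-not-installed-xyz")
```

Running this kind of check from the same interpreter mod_wsgi embeds quickly shows whether the new environment is missing a distribution the old one had.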
Trying to change the Pylons version my website is using but this causes a DistributionNotFound exception
About a month ago I setup Pylons on my VPS in a virtual environment using the go-pylons.py script they provide. I've, since then, been working on my website and have it all up and running. It works great. Recently though I discovered that I created my virtual python environment using Python2.5. I now want to change this to Python2.7 I've got Python2.7 running on my server and it all works great. I then created another virtual environment for Pylons and have that up and running. When I try to modify my website's mod_wsgi dispatch file to use the new environment, I get a DistributionNotFound exception. What could be causing this? Here's my dispatch.wsgi file: import site import os # New Python virtual environment # site.addsitedir('/usr/local/pylonsenv/lib/python2.7/site-packages') # Old Python virtual environment site.addsitedir('/usr/local/pylons/lib/python2.5/site-packages') os.environ['PYTHON_EGG_CACHE'] = '/home/samsu/python/egg-cache' from paste.deploy import loadapp application = loadapp('config:/home/samsu/python/mywebsite/production.ini') When I change the addsitedir paths around and restart Apache, viewing the website throws the exception. As soon as I change it back, problem goes away. Why can't I change virtual environments?
[ "For starters, you must recompile/reinstall mod_wsgi against Python 2.7, you cannot just point it at a new virtual environment using a newer Python version. Likely that the older Python installation doesn't have new enough version of a package required by code installed into your Python 2.7 virtual environment.\n" ...
[ 1 ]
[]
[]
[ "mod_wsgi", "pylons", "python", "virtualenv" ]
stackoverflow_0003682284_mod_wsgi_pylons_python_virtualenv.txt
Q: python multiprocessing pool, wait for processes and restart custom processes I use python multiprocessing and wait for all processes with this code: ... results = [] for i in range(num_extract): url = queue.get(timeout=5) try: print "START PROCESS!" result = pool.apply_async(process, [host,url],callback=callback) results.append(result) except Exception,e: continue for r in results: r.get(timeout=7) ... I tried to use pool.join but get this error: Traceback (most recent call last): File "C:\workspace\sdl\lxchg\walker4.py", line 163, in <module> pool.join() File "C:\Python25\Lib\site-packages\multiprocessing\pool.py", line 338, in join assert self._state in (CLOSE, TERMINATE) AssertionError Why doesn't join work? And what is the right way to wait for all processes? My second question is: how can I restart a particular process in the pool? I need this because of a memory leak. Right now I rebuild the whole pool after all processes have finished their tasks (I create a new pool object to restart the processes). What I need: for example, I have 4 processes in the pool. A process gets its task; after the task is done I need to kill that process and start a new one (to clear the memory leak).
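The close-before-join ordering can be seen in a few lines. This sketch uses the thread-backed ThreadPool, which shares the Pool API, so it runs anywhere without fork/pickling concerns:

```python
from multiprocessing.pool import ThreadPool  # thread-backed, same Pool API

def work(n):
    return n * n

pool = ThreadPool(4)
results = [pool.apply_async(work, (n,)) for n in range(5)]
pool.close()   # no more tasks will be submitted...
pool.join()    # ...which makes join() legal: it waits for the workers
squares = [r.get(timeout=5) for r in results]
```

For the worker-recycling half of the question: from Python 2.7 onward, `multiprocessing.Pool(maxtasksperchild=N)` replaces each worker process after N tasks, which is exactly the memory-leak workaround being rebuilt by hand here.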
python multiprocessing pool, wait for processes and restart custom processes
I use python multiprocessing and wait for all processes with this code: ... results = [] for i in range(num_extract): url = queue.get(timeout=5) try: print "START PROCESS!" result = pool.apply_async(process, [host,url],callback=callback) results.append(result) except Exception,e: continue for r in results: r.get(timeout=7) ... I tried to use pool.join but get this error: Traceback (most recent call last): File "C:\workspace\sdl\lxchg\walker4.py", line 163, in <module> pool.join() File "C:\Python25\Lib\site-packages\multiprocessing\pool.py", line 338, in join assert self._state in (CLOSE, TERMINATE) AssertionError Why doesn't join work? And what is the right way to wait for all processes? My second question is: how can I restart a particular process in the pool? I need this because of a memory leak. Right now I rebuild the whole pool after all processes have finished their tasks (I create a new pool object to restart the processes). What I need: for example, I have 4 processes in the pool. A process gets its task; after the task is done I need to kill that process and start a new one (to clear the memory leak).
[ "You are getting the error because you need to call pool.close() before calling pool.join()\nI don't know of a good way to shut down a process started with apply_async but see if properly shutting down the pool doesn't make your memory leak go away.\nThe reason I think this is that the Pool class has a bunch of att...
[ 20 ]
[]
[]
[ "multiprocessing", "python" ]
stackoverflow_0003682469_multiprocessing_python.txt
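The accepted answer's fix — call `pool.close()` before `pool.join()` — can be sketched as a runnable example in modern Python 3. A `ThreadPool` is used here only to keep the sketch self-contained; `multiprocessing.Pool` exposes the identical `close()`/`join()` protocol. Note also that Python versions newer than the asker's 2.5 accept `Pool(maxtasksperchild=N)`, which restarts each worker process after N tasks — exactly the leak mitigation the question asks for, without rebuilding the whole pool.

```python
from multiprocessing.pool import ThreadPool  # same interface as multiprocessing.Pool

def square(x):
    return x * x

def run_pool():
    pool = ThreadPool(processes=2)
    results = [pool.apply_async(square, (i,)) for i in range(4)]
    values = [r.get(timeout=10) for r in results]
    pool.close()  # puts the pool into the CLOSE state that join() asserts on
    pool.join()   # now waits for the workers instead of raising AssertionError
    return values
```

Calling `join()` without the preceding `close()` reproduces the `AssertionError` from the traceback above, because `join()` requires the pool to be in the CLOSE or TERMINATE state.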
Q: Python, Sorting name|num|num|num|num name|num|num|num|num name|num|num|num|num How i can sort this list on need me field (2,3,4,5) ? Sorry for my enlish. Update Input: str|10|20 str|1|30 Sort by first field (1,10): str|1|30 str|10|20 Sort by second field(20,30): str|10|20 str|1|30 A: I would use the operator module function "itemgetter" instead of the lambda functions. That is faster and allows multiple levels of sorting. from operator import itemgetter data = (line.split('|') for line in input.split('\n')) sort_index = 1 sorted(data, key=itemgetter(sort_index)) A: You can sort on a specific key, which tells the sort function how to evaluate the entries to be sorted -- that is, how we decide which of two entries is bigger. In this case, we'll first split up each string by the pipe, using split (for example, "a|b|c".split("|") returns ["a", "b", "c"]) and then grab whichever entry you want. To sort on the first "num" field: sorted(lines, key=(lambda line : line.split("|")[1]) where lines is a list of the lines as you mention in the question. To sort on a different field, just change the number in brackets. A: Assuming you start with a list of strings, start by splitting each row into a list: data = [line.split('|') for line in input] Then sort by whatever index you want: sort_index = 1 sorted_data = sorted(data, key=lambda line: int(line[sort_index])) The Python sorting guide has a lot more information.
Python, Sorting
name|num|num|num|num name|num|num|num|num name|num|num|num|num How i can sort this list on need me field (2,3,4,5) ? Sorry for my enlish. Update Input: str|10|20 str|1|30 Sort by first field (1,10): str|1|30 str|10|20 Sort by second field(20,30): str|10|20 str|1|30
[ "I would use the operator module function \"itemgetter\" instead of the lambda functions. That is faster and allows multiple levels of sorting.\nfrom operator import itemgetter\n\ndata = (line.split('|') for line in input.split('\\n')) \nsort_index = 1\nsorted(data, key=itemgetter(sort_index))\n\n", "You can sort...
[ 3, 2, 1 ]
[]
[]
[ "list", "python", "sorting" ]
stackoverflow_0003682537_list_python_sorting.txt
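One detail the `itemgetter` answer glosses over: the fields are strings, so a plain `itemgetter` sort would put `"10"` before `"2"` lexicographically. A small sketch reproducing the question's expected output, with an `int()` conversion in the key:

```python
from operator import itemgetter  # itemgetter(i) is equivalent for string sorts

lines = ["str|10|20", "str|1|30"]
rows = [line.split("|") for line in lines]

def sort_by_field(rows, index):
    # int() conversion gives numeric order; sorting the raw string fields
    # would order "10" before "2".
    return sorted(rows, key=lambda row: int(row[index]))

by_first = ["|".join(r) for r in sort_by_field(rows, 1)]   # sort on field 1
by_second = ["|".join(r) for r in sort_by_field(rows, 2)]  # sort on field 2
```

`by_first` and `by_second` match the two orderings shown in the question's update.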
Q: Can't get Celery run_every property to work I'm trying to create some Celery Periodic Tasks, and a few of them need to have the ability to change the run_every time at runtime. The Celery documentation says I should be able to do this by turning the run_every attribute into a property (http://packages.python.org/celery/faq.html#can-i-change-the-interval-of-a-periodic-task-at-runtime). Here is what I'm doing: class ParseSomeStuffTask(PeriodicTask): def run(self, **kwargs): # Do stuff @property def run_every(self): if datetime.now().weekday() in [1, 2, 3]: return timedelta(minutes=15) else: return timedelta(seconds=40) Unfortunately, when I turn on celerybeat, I get the following error: [Thu Sep 09 15:44:40 2010: CRITICAL/828]: celerybeat raised exception : 'datetime.timedelta' object has no attribute 'is_due' It then shuts down. The Celery documentation doesn't really go into what to return when making run_every a property, and I haven't had any luck searching Google. Celery changelogs say its been able to change a Periodic Task's interval at runtime since version 1.0.0. Dev. Environment: Python 2.6.5 Django 1.2.1 Celery 2.0.2 A: Celery 2.0 supports different schedule behaviors. There's celery.task.schedules.schedule and celery.task.schedules.crontab. You have to return one of these, or make your own subclass of schedule. from celery.task.schedules import schedule @property def run_every(self): if datetime.now().weekday() in [1, 2, 3]: return schedule(timedelta(minutes=15)) else: return schedule(timedelta(seconds=40)) The run_every attribute will be automatically converted at instantiation, but not later.
Can't get Celery run_every property to work
I'm trying to create some Celery Periodic Tasks, and a few of them need to have the ability to change the run_every time at runtime. The Celery documentation says I should be able to do this by turning the run_every attribute into a property (http://packages.python.org/celery/faq.html#can-i-change-the-interval-of-a-periodic-task-at-runtime). Here is what I'm doing: class ParseSomeStuffTask(PeriodicTask): def run(self, **kwargs): # Do stuff @property def run_every(self): if datetime.now().weekday() in [1, 2, 3]: return timedelta(minutes=15) else: return timedelta(seconds=40) Unfortunately, when I turn on celerybeat, I get the following error: [Thu Sep 09 15:44:40 2010: CRITICAL/828]: celerybeat raised exception : 'datetime.timedelta' object has no attribute 'is_due' It then shuts down. The Celery documentation doesn't really go into what to return when making run_every a property, and I haven't had any luck searching Google. Celery changelogs say its been able to change a Periodic Task's interval at runtime since version 1.0.0. Dev. Environment: Python 2.6.5 Django 1.2.1 Celery 2.0.2
[ "Celery 2.0 supports different schedule behaviors. There's celery.task.schedules.schedule and celery.task.schedules.crontab.\nYou have to return one of these, or make your own subclass of schedule.\nfrom celery.task.schedules import schedule\n\n@property\ndef run_every(self):\n if datetime.now().weekday() in [1,...
[ 3 ]
[]
[]
[ "celery", "django", "python" ]
stackoverflow_0003680518_celery_django_python.txt
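The shape of the accepted fix — return a `schedule` object rather than a bare `timedelta` — can be illustrated without a Celery installation. The `schedule` class below is a stand-in for `celery.task.schedules.schedule`, kept only so the sketch is self-contained:

```python
from datetime import datetime, timedelta

class schedule(object):
    """Stand-in for celery.task.schedules.schedule, just enough for this sketch."""
    def __init__(self, run_every):
        self.run_every = run_every

def run_every_for(now):
    # Tue/Wed/Thu (weekday() in 1-3): run every 15 minutes; otherwise every 40 s.
    # Wrapping the timedelta in a schedule gives celerybeat the is_due()
    # interface it was missing in the original traceback.
    if now.weekday() in (1, 2, 3):
        return schedule(timedelta(minutes=15))
    return schedule(timedelta(seconds=40))
```

In the real task, `run_every_for(datetime.now())` would be the body of the `run_every` property.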
Q: DNA sequence alignement in native Python (no biopython) I have an interesting genetics problem that I would like to solve in native Python (nothing outside the standard library). This in order for the solution to be very easy to use on any computer, without requiring the user to install additional modules. Here it is. I received 100,000s of DNA sequences (up to 2 billion) from a 454 new generation sequencing run. I want to trim the extremities in order to remove primers that may be present on both ends, both as normal and sense sequences. Example: seq001: ACTGACGGATAGCTGACCTGATGATGGGTTGACCAGTGATC --primer-1--- --primer-2- Primers can be present one or multiple times (one right after the other). Normal sense are always on the left, and reverse on the right. My goal is thus to find the primers, cut the sequence such that only the primer-free part remains. For this, I want to use a classic alignment algorithm (ie: Smith-Waterman) that has been implemented in native Python (ie: not through biopython). I am aware of the fact that this may require quite some time (up to hours). Note: This is NOT a direct "word" search, as DNA, both in the sequences and the primers, can be "mutated" for diverse technical reasons. What would you use? A: Here's a paper on approximately that subject: Rocke, On finding novel gapped motifs in DNA sequences, 1998. Hopefully from that paper and its references, plus other papers which cite the above, you can find many ideas for algorithms. You won't find python code, but you may find descriptions of algorithms which you could then implement in Python. A: Researching that algorithm briefly, this is not easy stuff. This is going to take some very serious algorithm work. Try re-aligning your expectations from "hours" to "days or weeks". The programmer implementing this will need: High competence in general python programming Algorithm programming experience, and a good understanding of time complexity. 
A good understanding of python data structures such as dict, set, and deque, and their complexity characteristics. Familiarity with unittests. That programmer may or may not be you right now. This sounds like an awesome project, good luck! A: You could do this quite simply using regex? I don't think it would be that complicated! In fact, I have just completed some code to do something pretty much the same as this for one of the guys at the university here! If not looking for exact copies of the primers, due to mutation then an element of fuzzy matching could be applied! The version I did very simply looked for exact primer matches at the start and end and returned the value minus those primers using the following code: pattern = "^" + start_primer + "([A-Z]+)" + end_primer + "$" # start primer and end primer are sequences you are looking to match regex = re.match(pattern, sequence) # sequence is the DNA sequence you are analyzing print regex.group(1) # prints the sequence between the start and end primers Here's a link on fuzzy regex in python http://hackerboss.com/approximate-regex-matching-in-python/
DNA sequence alignement in native Python (no biopython)
I have an interesting genetics problem that I would like to solve in native Python (nothing outside the standard library). This in order for the solution to be very easy to use on any computer, without requiring the user to install additional modules. Here it is. I received 100,000s of DNA sequences (up to 2 billion) from a 454 new generation sequencing run. I want to trim the extremities in order to remove primers that may be present on both ends, both as normal and sense sequences. Example: seq001: ACTGACGGATAGCTGACCTGATGATGGGTTGACCAGTGATC --primer-1--- --primer-2- Primers can be present one or multiple times (one right after the other). Normal sense are always on the left, and reverse on the right. My goal is thus to find the primers, cut the sequence such that only the primer-free part remains. For this, I want to use a classic alignment algorithm (ie: Smith-Waterman) that has been implemented in native Python (ie: not through biopython). I am aware of the fact that this may require quite some time (up to hours). Note: This is NOT a direct "word" search, as DNA, both in the sequences and the primers, can be "mutated" for diverse technical reasons. What would you use?
[ "Here's a paper on approximately that subject:\nRocke, On finding novel gapped motifs in DNA sequences, 1998.\nHopefully from that paper and its references, plus other papers which cite the above, you can find many ideas for algorithms. You won't find python code, but you may find descriptions of algorithms which ...
[ 1, 1, 1 ]
[]
[]
[ "alignment", "dna_sequence", "genetics", "python" ]
stackoverflow_0002420035_alignment_dna_sequence_genetics_python.txt
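The regex approach from the last answer can be made slightly more general: strip one *or more* exact copies of each primer, as the question allows primers to repeat back-to-back. The helper below is hypothetical and handles exact matches only — mutated primers, as the question notes, need fuzzy matching or a real alignment algorithm such as Smith-Waterman:

```python
import re

def trim_primers(sequence, start_primer, end_primer):
    # Hypothetical helper: remove one or more exact copies of the forward
    # primer on the left and of the reverse primer on the right.
    pattern = "^(?:%s)+(.*?)(?:%s)+$" % (re.escape(start_primer),
                                         re.escape(end_primer))
    match = re.match(pattern, sequence)
    # If no primer pair is found, return the sequence untouched.
    return match.group(1) if match else sequence
```

The lazy `(.*?)` group keeps the trailing `(?:...)+` greedy enough to eat every repeated copy of the end primer.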
Q: How to compile Python with all externals under Windows? When I compiled Python using PCBuild\build.bat I discovered that several Python external projects like ssl, bz2, ... were not compiled because the compiler did not find them. I did run the Tools\Buildbot\external.bat and it did download them inside \Tools\ but it looks that the build is not looking for them in this location and the PCBuild\readme.txt does not provide the proper info regarding this. In case it does matter, I do use VS2008 and VS2010 on this system. Example: Build log was saved at "file://C:\dev\os\py3k\PCbuild\Win32-temp-Release\_tkinter\BuildLog.htm" _tkinter - 2 error(s), 0 warning(s) Build started: Project: bz2, Configuration: Release|Win32 Compiling... bz2module.c ..\Modules\bz2module.c(12) : fatal error C1083: Cannot open include file: 'bzlib.h': No such file or directory A: Tools\buildbot\external.bat must be run from py3k root, not from Tools\buildbot\ subdir as you did. Also to build release version of python with Tkinter support you have to edit or copy Tools\buildbot\external.bat to remove DEBUG=1 so it can build tclXY.dll/tkXY.dll (without -g suffix).
How to compile Python with all externals under Windows?
When I compiled Python using PCBuild\build.bat I discovered that several Python external projects like ssl, bz2, ... were not compiled because the compiler did not find them. I did run the Tools\Buildbot\external.bat and it did download them inside \Tools\ but it looks that the build is not looking for them in this location and the PCBuild\readme.txt does not provide the proper info regarding this. In case it does matter, I do use VS2008 and VS2010 on this system. Example: Build log was saved at "file://C:\dev\os\py3k\PCbuild\Win32-temp-Release\_tkinter\BuildLog.htm" _tkinter - 2 error(s), 0 warning(s) Build started: Project: bz2, Configuration: Release|Win32 Compiling... bz2module.c ..\Modules\bz2module.c(12) : fatal error C1083: Cannot open include file: 'bzlib.h': No such file or directory
[ "Tools\\buildbot\\external.bat must be run from py3k root, not from Tools\\buildbot\\ subdir as you did. Also to build release version of python with Tkinter support you have to edit or copy Tools\\buildbot\\external.bat to remove DEBUG=1 so it can build tclXY.dll/tkXY.dll (without -g suffix).\n" ]
[ 2 ]
[]
[]
[ "python" ]
stackoverflow_0003307813_python.txt
Q: Improving __init__ where args are assigned directly to members I'm finding myself writing a lot of classes with constructors like this: class MyClass(object): def __init__(self, foo, bar, foobar=1, anotherfoo=None): self.foo = foo self.bar = bar self.foobar = foobar self.anotherfoo = anotherfoo Is this a bad code smell? Does Python offer a more elegant way of handling this? My classes and even some of the constructors are more than just what I've shown, but I usually have a list of args passed to the constructor which just end up being assigned to similarly named members. I made some of the arguments optional to point out the problem with doing something like: class MyClass(object): def __init__(self, arg_dict): self.__dict__ = arg_dict A: If they're kwargs, you could do something like this: def __init__(self, **kwargs): for kw,arg in kwargs.iteritems(): setattr(self, kw, arg) posargs are a bit trickier since you don't get naming information in a nice way. If you want to provide default values, you can do it like this: def __init__(self, **kwargs): arg_vals = { 'param1': 'default1', # ... } arg_vals.update(kwargs) for kw,arg in arg_vals.iteritems(): setattr(self, kw, arg) A: Personally, I'd stick with the way you're currently doing it as it's far less brittle. Consider the following code with a typo: myobject = MyClass(foo=1,bar=2,fobar=3) If you use your original approach you'll get the following desirable behaviour when you try to create the object: TypeError: __init__() got an unexpected keyword argument 'fobar' With the kwargs approach this happens: >>> myobject.fobar 3 This seems to me the source of the kind of bugs that are very difficult to find. You could validate the kwargs list to ensure it only has expected values, but by the time you've done that and the work to add default values I think it'll be more complex than your original approach. 
A: you could do something like this: def Struct(name): def __init__(self, **fields): self.__dict__.update(fields) cls = type(name, (object, ), {'__init__', __init__}) return cls You would use it like: MyClass = Struct('MyClass') t = MyClass(a=1, b=2) If you want positional argumentsas well, then use this: def Struct(name, fields): fields = fields.split() def __init__(self, *args, **kwargs): for field, value in zip(fields, args): self.__dict__[field] = value self.__dict__.update(kwargs) cls = type(name, (object, ), {'__init__': __init__}) return cls It's then used like MyClass = Struct('MyClass', 'foo bar foobar anotherfoo') a = MyClass(1, 2, foobar=3, anotherfoo=4) This is similar to the namedtuple from collections This saves you a lot more typing than defining essentially the same __init__ method over and over again and doesn't require you to muddy up your inheritance tree just to get that same method without retyping it. If you need to add additional methods, then you can just create a base MyClassBase = Struct('MyClassBase', 'foo bar') class MyClass(MyClassBase): def other_method(self): pass A: This is horrible code. class MyClass(object): def __init__(self, foo, bar, spam, eggs): for arg in self.__init__.func_code.co_varnames: setattr(self, arg, locals()[arg]) Then, you can do something like: myobj = MyClass(1, 0, "hello", "world") myletter = myobj.spam[myobj.bar] A: There's nothing wrong with the pattern. It's easy to read and easy to understand (much more so than many of the other answers here). I'm finding myself writing a lot of classes with constructors like this If you are getting bored typing up such constructors, then the solution is to make the computer do it for you. For example, if you happen to be coding with pydev, you can press Ctrl+1, A to make the editor do it for you. 
This is a much better solution than spending time writing and debugging magic code that obfuscates what you're really trying to do, which is to assign values to some instance variables.
Improving __init__ where args are assigned directly to members
I'm finding myself writing a lot of classes with constructors like this: class MyClass(object): def __init__(self, foo, bar, foobar=1, anotherfoo=None): self.foo = foo self.bar = bar self.foobar = foobar self.anotherfoo = anotherfoo Is this a bad code smell? Does Python offer a more elegant way of handling this? My classes and even some of the constructors are more than just what I've shown, but I usually have a list of args passed to the constructor which just end up being assigned to similarly named members. I made some of the arguments optional to point out the problem with doing something like: class MyClass(object): def __init__(self, arg_dict): self.__dict__ = arg_dict
[ "If they're kwargs, you could do something like this:\ndef __init__(self, **kwargs):\n for kw,arg in kwargs.iteritems():\n setattr(self, kw, arg)\n\nposargs are a bit trickier since you don't get naming information in a nice way.\nIf you want to provide default values, you can do it like this:\ndef __init...
[ 8, 2, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0003682137_python.txt
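The second answer's brittleness objection and the first answer's `setattr` loop can be combined: validate the keyword names against a declared field list before assigning, so typos raise instead of silently creating attributes. A sketch of that middle ground:

```python
class MyClass(object):
    _fields = ("foo", "bar", "foobar", "anotherfoo")
    _defaults = {"foobar": 1, "anotherfoo": None}

    def __init__(self, **kwargs):
        unknown = set(kwargs) - set(self._fields)
        if unknown:
            # Reject typos such as fobar=3 instead of silently storing them.
            raise TypeError("unexpected keyword argument(s): %s"
                            % ", ".join(sorted(unknown)))
        missing = set(self._fields) - set(self._defaults) - set(kwargs)
        if missing:
            raise TypeError("missing required argument(s): %s"
                            % ", ".join(sorted(missing)))
        values = dict(self._defaults)
        values.update(kwargs)
        for name, value in values.items():
            setattr(self, name, value)
```

This keeps the one-line-per-field boilerplate out of `__init__` while preserving the `TypeError` behavior that makes the explicit version safe.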
Q: Using the Python random module with XCHAT IRC scripts I'm trying to print random items from a list into my XCHAT channel messages. So far I've only been able to print the random items from my list alone, but not with any specific text. Example usage would be: "/ran blahblahblah" to produce the desired effect of a channel message such as "blahblahblah [random item]" __module_name__ = "ran.py" __module_version__ = "1.0" __module_description__ = "script to add random text to channel messages" import xchat import random def ran(message): message = random.choice(['test1', 'test2', 'test3', 'test4', 'test5']) return(message) def ran_cb(word, word_eol, userdata): message = '' message = ran(message) xchat.command("msg %s %s"%(xchat.get_info('channel'), message)) return xchat.EAT_ALL xchat.hook_command("ran", ran_cb, help="/ran to use") A: You don't allow the caller to specify the arguments to choose from. def ran(choices=None): if not choices: choices = ('test1', 'test2', 'test3', 'test4', 'test5') return random.choice(choices) You need to get the choices from the command. def ran_cb(word, word_eol, userdata): message = ran(word[1:]) xchat.command("msg %s %s"%(xchat.get_info('channel'), message)) return xchat.EAT_ALL word is a list of the words sent through the command, word[0] is the command itself so only copy from 1 and further.
Using the Python random module with XCHAT IRC scripts
I'm trying to print random items from a list into my XCHAT channel messages. So far I've only been able to print the random items from my list alone, but not with any specific text. Example usage would be: "/ran blahblahblah" to produce the desired effect of a channel message such as "blahblahblah [random item]" __module_name__ = "ran.py" __module_version__ = "1.0" __module_description__ = "script to add random text to channel messages" import xchat import random def ran(message): message = random.choice(['test1', 'test2', 'test3', 'test4', 'test5']) return(message) def ran_cb(word, word_eol, userdata): message = '' message = ran(message) xchat.command("msg %s %s"%(xchat.get_info('channel'), message)) return xchat.EAT_ALL xchat.hook_command("ran", ran_cb, help="/ran to use")
[ "\nYou don't allow the caller to specify the arguments to choose from.\ndef ran(choices=None):\n if not choices:\n choices = ('test1', 'test2', 'test3', 'test4', 'test5')\n return random.choice(choices)\n\nYou need to get the choices from the command.\ndef ran_cb(word, word_eol, userdata):\n message...
[ 0 ]
[]
[]
[ "irc", "python", "random" ]
stackoverflow_0003683247_irc_python_random.txt
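The core of the answer — take choices from `word[1:]` and fall back to the defaults — is testable without XChat itself. A minimal sketch of the pure logic, with the xchat wiring left out:

```python
import random

DEFAULT_CHOICES = ("test1", "test2", "test3", "test4", "test5")

def ran(word):
    # XChat passes the command line split into words; word[0] is the
    # command itself ("ran"), so everything after it is the choice list.
    choices = word[1:] or list(DEFAULT_CHOICES)
    return random.choice(choices)
```

Inside the plugin, `ran_cb` would call `ran(word)` and send the result with `xchat.command("msg %s %s" % (xchat.get_info('channel'), message))` as in the answer.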
Q: How to serialize beautifulsoup access-paths? i have code, which does something like this: item.previous.parent.parent.aTag['href'] now i would like to be able to add filters fast, so hardcoding is no longer an option. how can i access the same tags with a path coded in a string? of course i could invent some format like [('getattr', 'previous'), ('getattr', 'parent'), ..., ('getitem', 'href)] and parse it with __getattr__ and __getitem__. Now the question: Is there already a more beautiful way of doing it, or do i need to implement it myself? A: It seems to me that you would be better off using a XPATH expression. This discussion has some information about an XPATH plugin for BeautifulSoup called BSXPath. I haven't used it so I do not know if it will serve your purpose. If you are willing to replace BeautifulSoup then lxml has a really powerful XPATH implementation. update See @llasram's comment to this answer: These days lxml has an html module which the BeautifulSoup implementer himself recommends over BeautifulSoup. It even has a soupparser sub-module which will use BeautifulSoup to do the parsing!
How to serialize beautifulsoup access-paths?
i have code, which does something like this: item.previous.parent.parent.aTag['href'] now i would like to be able to add filters fast, so hardcoding is no longer an option. how can i access the same tags with a path coded in a string? of course i could invent some format like [('getattr', 'previous'), ('getattr', 'parent'), ..., ('getitem', 'href)] and parse it with __getattr__ and __getitem__. Now the question: Is there already a more beautiful way of doing it, or do i need to implement it myself?
[ "It seems to me that you would be better off using a XPATH expression. This discussion has some information about an XPATH plugin for BeautifulSoup called BSXPath. I haven't used it so I do not know if it will serve your purpose.\nIf you are willing to replace BeautifulSoup then lxml has a really powerful XPATH imp...
[ 1 ]
[]
[]
[ "beautifulsoup", "html", "parsing", "python", "serialization" ]
stackoverflow_0003683264_beautifulsoup_html_parsing_python_serialization.txt
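If switching to lxml/XPath is not an option, the asker's `[('getattr', ...), ('getitem', ...)]` format can be implemented in a few lines with the `operator` module, which already provides both kinds of step. A sketch, with a tiny stand-in node since no real parse tree is available here:

```python
import operator

def compile_path(steps):
    # steps: sequence of ("attr", name) or ("item", key) pairs, i.e. the
    # serialized form of item.previous.parent.parent.aTag['href'].
    getters = [operator.attrgetter(key) if kind == "attr"
               else operator.itemgetter(key)
               for kind, key in steps]
    def walk(obj):
        for getter in getters:
            obj = getter(obj)
        return obj
    return walk

class Tag(object):
    """Tiny stand-in for a parsed node, for demonstration only."""
    pass

node = Tag()
node.parent = {"href": "http://example.com/"}
href = compile_path([("attr", "parent"), ("item", "href")])(node)
```

Each serialized filter becomes one compiled `walk` callable, so new filters are plain data rather than hardcoded attribute chains.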
Q: python django database synch I use django in project. I have many cron jobs which operate with database. I want to replace cron jobs on other machine and synchronize processed data with main server. But my host provider doesnt allow external connections to db. How to organize sync. best way? I know what i can pass it via POST request with my own written protocol, buy may be better solution exsists or lib for this? A: This sounds like an awful lot of work for negligible little gain. I would suggest trying an Amazon EC2 micro image and run Django on that for $0.03 US per hour (doesn't appear to be updated on the pricing page just yet but it is in the AWS web console). Then you can do what ever you want. How does the Web hosting provider block outgoing connections? Can you put an external database server on a popular port like 53, 80, 143, etc., and see if you can connect that way? A: Are all external connections blocked? If you can get an rsync daemon to run in the machine you can push (sync, rather) the data from other machines to master and have master process it. This will involve adding some kind of daemon process at the master. If all connections are indeed blocked you will have to resort to POST or the more tedious one of receiving email attachments and processing them. Side note: you should think of switching providers post-haste!
python django database synch
I use django in project. I have many cron jobs which operate with database. I want to replace cron jobs on other machine and synchronize processed data with main server. But my host provider doesnt allow external connections to db. How to organize sync. best way? I know what i can pass it via POST request with my own written protocol, buy may be better solution exsists or lib for this?
[ "This sounds like an awful lot of work for negligible little gain. I would suggest trying an Amazon EC2 micro image and run Django on that for $0.03 US per hour (doesn't appear to be updated on the pricing page just yet but it is in the AWS web console). Then you can do what ever you want.\nHow does the Web hosting...
[ 1, 1 ]
[]
[]
[ "database", "django", "python", "sync" ]
stackoverflow_0003681907_database_django_python_sync.txt
Q: When are the first Python objects 'object' and 'type' instances created? Reading: http://www.python.org/download/releases/2.2/descrintro/#metaclasses A class statement is executed and then the name, bases, and attributes dict are passed to a metaclass object. Since 'type' is an instance - isinstance(type, type), it is an object already. When/how is the very first instance created ? My guess is that the first instance was created by the core C code and is then 'ready for use' when the interpreter is first started. Am I correct ? A: For all intents and purposes, the object and type objects are created during interpreter startup, yes. In practice, in CPython, parts of the two objects are allocated statically, and parts are allocated during Python startup.
When are the first Python objects 'object' and 'type' instances created?
Reading: http://www.python.org/download/releases/2.2/descrintro/#metaclasses A class statement is executed and then the name, bases, and attributes dict are passed to a metaclass object. Since 'type' is an instance - isinstance(type, type), it is an object already. When/how is the very first instance created ? My guess is that the first instance was created by the core C code and is then 'ready for use' when the interpreter is first started. Am I correct ?
[ "For all intents and purposes, the object and type objects are created during interpreter startup, yes. In practice, in CPython, parts of the two objects are allocated statically, and parts are allocated during Python startup.\n" ]
[ 1 ]
[]
[]
[ "object", "python", "types" ]
stackoverflow_0003684082_object_python_types.txt
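The bootstrapped circularity the answer describes is observable from any running interpreter — both objects already exist before any user code executes, and each is defined in terms of the other:

```python
# Relationships between the two primordial objects, checked at runtime:
checks = {
    "object is an instance of type": isinstance(object, type),
    "type is an instance of itself": isinstance(type, type),
    "type is a subclass of object": issubclass(type, object),
    "object's class is type": type(object) is type,
}
```

Every entry is true, which is only possible because the pair is wired up by the C runtime before the interpreter starts executing Python code.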
Q: pisa to generate a table of content via html convert Does anyone have any idea how to use the tag so the table of content comes onto the 1st page and all text is coming behind. This is what i've got so far, it generates the table of content behind my text... pdf.html <htmL> <body> <div> <pdf:toc /> </div> <pdf:nextpage> <br/> <h1> test </h1> <h2> second </h2> some text <h1> test_two </h1> <h2> second </h2> some text </body> </html> I can't seem to get everything in the right position, even with the it doesn't seem to work... any help or documentation somewhere? The PISA docs are rly crappy with details actually... Btw 1 more extra thing, is it possible to make this table of content jump to the right page? If yes how does this works? Regards, A: I found I couldn't get that pagebreak to work for me, so I used inline CSS and, specifically, the page-break property to fix it. In your case, this should do the trick: <div style="page-break-after:always;> <pdf:toc /> </div> <h1> test </h1> ...etc... A: As far as the links are concerned, there may be a way to automatically generate them, but I found it easier to manually create a table of contents using links and anchors: <h1>Table of Contents</h1> <ul> <li><a href="section1">The name of section 1</li> <li><a href="section2">The name of section 2</li> </ul> <h2>The name of section 1</h2> <a name="section1"></a> <h2>The name of section 2</h2> <a name="section2"></a> There's obviously some duplication, but I haven't found it difficult to maintain for my documents. It depends how long or complicated you expect yours to became. The bigger downside is that this option won't include page numbers. Steve's comment about the page-break property is correct. I personally used a separate CSS file with h2 { page-break-before:always; } so that all of my sections would start on a new page.
pisa to generate a table of content via html convert
Does anyone have any idea how to use the tag so the table of content comes onto the 1st page and all text is coming behind. This is what i've got so far, it generates the table of content behind my text... pdf.html <htmL> <body> <div> <pdf:toc /> </div> <pdf:nextpage> <br/> <h1> test </h1> <h2> second </h2> some text <h1> test_two </h1> <h2> second </h2> some text </body> </html> I can't seem to get everything in the right position, even with the it doesn't seem to work... any help or documentation somewhere? The PISA docs are rly crappy with details actually... Btw 1 more extra thing, is it possible to make this table of content jump to the right page? If yes how does this works? Regards,
[ "I found I couldn't get that pagebreak to work for me, so I used inline CSS and, specifically, the page-break property to fix it. \nIn your case, this should do the trick:\n<div style=\"page-break-after:always;>\n <pdf:toc />\n</div>\n<h1> test </h1> ...etc...\n\n", "As far as the links are concerned, there may...
[ 2, 1 ]
[]
[]
[ "django", "pisa", "python" ]
stackoverflow_0003684488_django_pisa_python.txt
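The two answers combine naturally: generate the manual table of contents from a list of `(anchor, title)` pairs, and wrap it in a `page-break-after` div so it occupies the first page on its own. A small hypothetical helper for producing that HTML fragment:

```python
def build_toc(sections):
    # sections: list of (anchor, title) pairs; mirrors the hand-written
    # <a href="#..."> table of contents from the second answer.
    items = "".join('<li><a href="#%s">%s</a></li>' % (anchor, title)
                    for anchor, title in sections)
    # page-break-after pushes all following content onto the next page,
    # so the TOC alone occupies page 1.
    return ('<div style="page-break-after: always;">'
            '<h1>Table of Contents</h1><ul>%s</ul></div>' % items)

toc = build_toc([("section1", "test"), ("section2", "test_two")])
```

Each section in the body then needs a matching `<a name="section1"></a>` anchor, as the second answer shows; page numbers are still not available with this manual approach.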
Q: python update class instance to reflect change in a class method As I work and update a class, I want a class instance that is already created to be updated. How do I go about doing that? class MyClass: """ """ def __init__(self): def myMethod(self, case): print 'hello' classInstance = MyClass() I run Python inside of Maya and on software start the instance is created. When I call classInstance.myMethod() it always prints 'hello' even if I change this. Thank you, /Christian More complete example: class MayaCore: ''' Super class and foundational Maya utility library ''' def __init__(self): """ MayaCore.__init__(): set initial parameters """ #maya info self.mayaVer = self.getMayaVersion() def convertToPyNode(self, node): """ SYNOPSIS: checks and converts to PyNode INPUTS: (string?/PyNode?) node: node name RETURNS: (PyNode) node """ if not re.search('pymel', str(node.__class__)): if not node.__class__ == str and re.search('Meta', str(node)): return node # pass Meta objects too return PyNode(node) else: return node def verifyMeshSelection(self, all=0): """ SYNOPSIS: Verifies the selection to be mesh transform INPUTS: all = 0 - acts only on the first selected item all = 1 - acts on all selected items RETURNS: 0 if not mesh transform or nothing is selected 1 if all/first selected is mesh transform """ self.all = all allSelected = [] error = 0 iSel = ls(sl=1) if iSel != '': if self.all: allSelected = ls(sl=1) else: allSelected.append(ls(sl=1)[0]) if allSelected: for each in allSelected: if nodeType(each) == 'transform' and nodeType(each.getShape()) == 'mesh': pass else: error = 1 else: error = 1 else: error = 1 if error: return 0 else: return 1 mCore = MayaCore() The last line is inside the module file (mCore = MayaCore()). There are tons of methods inside the class so I have removed them to shorten the scrolling :-) Also there are import statements above the class but they screw up the formatting for some reason. 
Here they are: from pymel.all import * import re from maya import OpenMaya as om from our_libs.configobj import ConfigObj if getMelGlobal('float', "mVersion") >= 2011: from PyQt4 import QtGui, QtCore, uic import sip from maya import OpenMayaUI as omui Inside Maya, we import this and subclasses of this class upon program start: from our_maya.mayaCore import * In other tools we write, we then call mCore.method() on a need basis. The caveat I am running into is that when I am going back to modify the mCore method and the instance call is already in play, I have to restart Maya for all the instances to get updated with the method change (they will still use the un-modified method). A: Alright, trying again, but with a new understanding of the question: class Foo(object): def method(self): print "Before" f = Foo() f.method() def new_method(self): print "After" Foo.method = new_method f.method() will print Before After This will work with old style classes too. The key is modifying the class, not overriding the class's name. A: You'll have to provide more details about what you are doing, but Python instances don't store methods, they always get them from their class. So if you change the method on the class, existing instances will see the new method. A: My other answer answers your original question, so I'm leaving it there, but I think what you really want is the reload function. import our_maya.mayaCore reload(our_maya.mayaCore) from our_maya.mayaCore import * Do that after you change the class definition. Your new method ought to show up and be used by all the existing instances of your class.
python update class instance to reflect change in a class method
As I work and update a class, I want a class instance that is already created to be updated. How do I go about doing that? class MyClass: """ """ def __init__(self): def myMethod(self, case): print 'hello' classInstance = MyClass() I run Python inside of Maya and on software start the instance is created. When I call classInstance.myMethod() it always prints 'hello' even if I change this. Thank you, /Christian More complete example: class MayaCore: ''' Super class and foundational Maya utility library ''' def __init__(self): """ MayaCore.__init__(): set initial parameters """ #maya info self.mayaVer = self.getMayaVersion() def convertToPyNode(self, node): """ SYNOPSIS: checks and converts to PyNode INPUTS: (string?/PyNode?) node: node name RETURNS: (PyNode) node """ if not re.search('pymel', str(node.__class__)): if not node.__class__ == str and re.search('Meta', str(node)): return node # pass Meta objects too return PyNode(node) else: return node def verifyMeshSelection(self, all=0): """ SYNOPSIS: Verifies the selection to be mesh transform INPUTS: all = 0 - acts only on the first selected item all = 1 - acts on all selected items RETURNS: 0 if not mesh transform or nothing is selected 1 if all/first selected is mesh transform """ self.all = all allSelected = [] error = 0 iSel = ls(sl=1) if iSel != '': if self.all: allSelected = ls(sl=1) else: allSelected.append(ls(sl=1)[0]) if allSelected: for each in allSelected: if nodeType(each) == 'transform' and nodeType(each.getShape()) == 'mesh': pass else: error = 1 else: error = 1 else: error = 1 if error: return 0 else: return 1 mCore = MayaCore() The last line is inside the module file (mCore = MayaCore()). There are tons of methods inside the class so I have removed them to shorten the scrolling :-) Also there are import statements above the class but they screw up the formatting for some reason. 
Here they are: from pymel.all import * import re from maya import OpenMaya as om from our_libs.configobj import ConfigObj if getMelGlobal('float', "mVersion") >= 2011: from PyQt4 import QtGui, QtCore, uic import sip from maya import OpenMayaUI as omui Inside Maya, we import this and subclasses of this class upon program start: from our_maya.mayaCore import * In other tools we write, we then call mCore.method() on a need basis. The caveat I am running into is that when I am going back to modify the mCore method and the instance call is already in play, I have to restart Maya for all the instances to get updated with the method change (they will still use the un-modified method).
[ "Alright, trying again, but with a new understanding of the question:\nclass Foo(object):\n def method(self):\n print \"Before\"\n\nf = Foo()\nf.method()\ndef new_method(self):\n print \"After\"\n\nFoo.method = new_method\nf.method()\n\nwill print\nBefore\nAfter\n\nThis will work with old style classes...
[ 1, 0, 0 ]
[]
[]
[ "maya", "python" ]
stackoverflow_0003679592_maya_python.txt
Q: How to replace "\" with "\\" I've a path from wx.FileDialog (getpath()) shows "c:\test.jpg" which doesn't works with opencv cv.LoadImage() which needs "\\" or "/" So, I've tried to use replace function for example: s.replace("\","\\"[0:2]),s.replace("\\","\\\"[0:2]) but none those works. And, this command s.replace("\\","/"[0:1]) returns the same path, I don't know why. Could you help me solve this easy problem. ps, I'm very new to python thank you so much. sorry about my grammar A: \ escapes the next character. To actually get a backslash, you must escape it. Use \\: s.replace("\\","/") A: I think your looking for s.replace("\\","/") Looking at the docs, and im not a Python programmer but its like so: str.replace(old, new[, count]) So your do not need the 3rd parameter, but you need new and old obviusly. s.replace("\\","/") the reason we have \\ as because if we only had "\" this means that your escaping a quotation and your old parameter gets sent a " that's if python don't trigger and error. you need to send a Literal backslash like \ so what actually gets sent to the interpreter is a single \ you will notice with SO syntax highlighter where the string is being escaped.. s.replace("\","\\"[0:2]) #yours " s.replace("\\","/") #mine http://en.wikipedia.org/wiki/Escape_character A: Its not entirely clear to me what you want to do with the paths, but there are a number of functions for dealing with them. You may want to use os.path.normpath() which will correct the slashes for whatever platform you are running on. A: In Python you can use / independently from the OS as path separator (as RobertPitt pointed out, you can do this anyway). 
But to answer your question, this should work: str.replace("\\", "\\\\") A: def onOpen(self, event): # wxGlade: MyFrame.<event_handler> dlg = wx.FileDialog(self, "Open an Image") if dlg.ShowModal() == wx.ID_OK: __imgpath__ = dlg.GetPath() print 'Selected:', dlg.GetPath() self.panel_2.LoadImage(cv.LoadImage(__imgpath__)) I don't know why, but it works with opencv. output : "Selected: c:\test.jpg" I'm sorry I didn't try it out first.
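As a sanity check on the escaping discussed above — a short sketch (the path is just an example value): in Python source code, `"\\"` denotes a single backslash character, so:

```python
path = "c:\\test.jpg"            # one real backslash in the string

# Replace each backslash with a forward slash:
forward = path.replace("\\", "/")
print(forward)                   # c:/test.jpg

# Or double every backslash:
doubled = path.replace("\\", "\\\\")
print(len(path), len(doubled))   # doubled is one char longer per backslash
```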
How to replace "\" with "\\"
I have a path from wx.FileDialog (GetPath()) that shows "c:\test.jpg", which doesn't work with OpenCV's cv.LoadImage(), which needs "\\" or "/". So I've tried to use the replace function, for example: s.replace("\","\\"[0:2]), s.replace("\\","\\\"[0:2]) but neither of those works. And this command, s.replace("\\","/"[0:1]), returns the same path; I don't know why. Could you help me solve this easy problem? PS: I'm very new to Python. Thank you so much, and sorry about my grammar.
[ "\\ escapes the next character. To actually get a backslash, you must escape it. Use \\\\:\n s.replace(\"\\\\\",\"/\")\n\n", "I think your looking for s.replace(\"\\\\\",\"/\")\nLooking at the docs, and im not a Python programmer but its like so:\nstr.replace(old, new[, count])\n\nSo your do not need the 3rd para...
[ 4, 2, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0003684980_python.txt
Q: Pythonic way to convert a list of integers into a string of comma-separated ranges I have a list of integers which I need to parse into a string of ranges. For example: [0, 1, 2, 3] -> "0-3" [0, 1, 2, 4, 8] -> "0-2,4,8" And so on. I'm still learning more pythonic ways of handling lists, and this one is a bit difficult for me. My latest thought was to create a list of lists which keeps track of paired numbers: [ [0, 3], [4, 4], [5, 9], [20, 20] ] I could then iterate across this structure, printing each sub-list as either a range, or a single value. I don't like doing this in two iterations, but I can't seem to keep track of each number within each iteration. My thought would be to do something like this: Here's my most recent attempt. It works, but I'm not fully satisfied; I keep thinking there's a more elegant solution which completely escapes me. The string-handling iteration isn't the nicest, I know -- it's pretty early in the morning for me :) def createRangeString(zones): rangeIdx = 0 ranges = [[zones[0], zones[0]]] for zone in list(zones): if ranges[rangeIdx][1] in (zone, zone-1): ranges[rangeIdx][1] = zone else: ranges.append([zone, zone]) rangeIdx += 1 rangeStr = "" for range in ranges: if range[0] != range[1]: rangeStr = "%s,%d-%d" % (rangeStr, range[0], range[1]) else: rangeStr = "%s,%d" % (rangeStr, range[0]) return rangeStr[1:] Is there a straightforward way I can merge this into a single iteration? What else could I do to make it more Pythonic? A: >>> from itertools import count, groupby >>> L=[1, 2, 3, 4, 6, 7, 8, 9, 12, 13, 19, 20, 22, 23, 40, 44] >>> G=(list(x) for _,x in groupby(L, lambda x,c=count(): next(c)-x)) >>> print ",".join("-".join(map(str,(g[0],g[-1])[:len(g)])) for g in G) 1-4,6-9,12-13,19-20,22-23,40,44 The idea here is to pair each element with count(). Then the difference between the value and count() is constant for consecutive values. 
groupby() does the rest of the work As Jeff suggests, an alternative to count() is to use enumerate(). This adds some extra cruft that needs to be stripped out in the print statement G=(list(x) for _,x in groupby(enumerate(L), lambda (i,x):i-x)) print ",".join("-".join(map(str,(g[0][1],g[-1][1])[:len(g)])) for g in G) Update: for the sample list given here, the version with enumerate runs about 5% slower than the version using count() on my computer A: Whether this is pythonic is up for debate. But it is very compact. The real meat is in the Rangify() function. There's still room for improvement if you want efficiency or Pythonism. def CreateRangeString(zones): #assuming sorted and distinct deltas = [a-b for a, b in zip(zones[1:], zones[:-1])] deltas.append(-1) def Rangify((b, p), (z, d)): if p is not None: if d == 1: return (b, p) b.append('%d-%d'%(p,z)) return (b, None) else: if d == 1: return (b, z) b.append(str(z)) return (b, None) return ','.join(reduce(Rangify, zip(zones, deltas), ([], None))[0]) To describe the parameters: deltas is the distance to the next value (inspired from an answer here on SO) Rangify() does the reduction on these parameters b - base or accumulator p - previous start range z - zone number d - delta A: To concatenate strings you should use ','.join. This removes the 2nd loop. def createRangeString(zones): rangeIdx = 0 ranges = [[zones[0], zones[0]]] for zone in list(zones): if ranges[rangeIdx][1] in (zone, zone-1): ranges[rangeIdx][1] = zone else: ranges.append([zone, zone]) rangeIdx += 1 return ','.join( map( lambda p: '%s-%s'%tuple(p) if p[0] != p[1] else str(p[0]), ranges ) ) Although I prefer a more generic approach: from itertools import groupby # auxiliary functor to allow groupby to compare by adjacent elements. 
class cmp_to_groupby_key(object): def __init__(self, f): self.f = f self.uninitialized = True def __call__(self, newv): if self.uninitialized or not self.f(self.oldv, newv): self.curkey = newv self.uninitialized = False self.oldv = newv return self.curkey # returns the first and last element of an iterable with O(1) memory. def first_and_last(iterable): first = next(iterable) last = first for i in iterable: last = i return (first, last) # convert groups into list of range strings def create_range_string_from_groups(groups): for _, g in groups: first, last = first_and_last(g) if first != last: yield "{0}-{1}".format(first, last) else: yield str(first) def create_range_string(zones): groups = groupby(zones, cmp_to_groupby_key(lambda a,b: b-a<=1)) return ','.join(create_range_string_from_groups(groups)) assert create_range_string([0,1,2,3]) == '0-3' assert create_range_string([0, 1, 2, 4, 8]) == '0-2,4,8' assert create_range_string([1,2,3,4,6,7,8,9,12,13,19,20,22,22,22,23,40,44]) == '1-4,6-9,12-13,19-20,22-23,40,44' A: This is more verbose, mainly because I have used generic functions that I have and that are minor variations of itertools functions and recipes: from itertools import tee, izip_longest def pairwise_longest(iterable): "variation of pairwise in http://docs.python.org/library/itertools.html#recipes" a, b = tee(iterable) next(b, None) return izip_longest(a, b) def takeuntil(predicate, iterable): """returns all elements before and including the one for which the predicate is true variation of http://docs.python.org/library/itertools.html#itertools.takewhile""" for x in iterable: yield x if predicate(x): break def get_range(it): "gets a range from a pairwise iterator" rng = list(takeuntil(lambda (a,b): (b is None) or (b-a>1), it)) if rng: b, e = rng[0][0], rng[-1][0] return "%d-%d" % (b,e) if b != e else "%d" % b def create_ranges(zones): it = pairwise_longest(zones) return ",".join(iter(lambda:get_range(it),None)) k=[0,1,2,4,5,7,9,12,13,14,15] print 
create_ranges(k) #0-2,4-5,7,9,12-15 A: Here is my solution. You need to keep track of various pieces of information while you iterate through the list and create the result - this screams generator to me. So here goes: def rangeStr(start, end): '''convert two integers into a range start-end, or a single value if they are the same''' return str(start) if start == end else "%s-%s" %(start, end) def makeRange(seq): '''take a sequence of ints and return a sequence of strings with the ranges ''' # make sure that seq is an iterator seq = iter(seq) start = seq.next() current = start for val in seq: current += 1 if val != current: yield rangeStr(start, current-1) start = current = val # make sure the last range is included in the output yield rangeStr(start, current) def stringifyRanges(seq): return ','.join(makeRange(seq)) >>> l = [1,2,3, 7,8,9, 11, 20,21,22,23] >>> l2 = [1,2,3, 7,8,9, 11, 20,21,22,23, 30] >>> stringifyRanges(l) '1-3,7-9,11,20-23' >>> stringifyRanges(l2) '1-3,7-9,11,20-23,30' My version will work correctly if given an empty list, which I think some of the others will not. >>> stringifyRanges( [] ) '' makeRanges will work on any iterator that returns integers and lazily returns a sequence of strings so can be used on infinite sequences. edit: I have updated the code to handle single numbers that are not part of a range. edit2: refactored out rangeStr to remove duplication. A: def createRangeString(zones): """Create a string with integer ranges in the format of '%d-%d' >>> createRangeString([0, 1, 2, 4, 8]) "0-2,4,8" >>> createRangeString([1,2,3,4,6,7,8,9,12,13,19,20,22,22,22,23,40,44]) "1-4,6-9,12-13,19-20,22-23,40,44" """ buffer = [] try: st = ed = zones[0] for i in zones[1:]: delta = i - ed if delta == 1: ed = i elif not (delta == 0): buffer.append((st, ed)) st = ed = i else: buffer.append((st, ed)) except IndexError: pass return ','.join( "%d" % st if st==ed else "%d-%d" % (st, ed) for st, ed in buffer) A: how about this mess... 
def rangefy(mylist): mylist, mystr, start = mylist + [None], "", 0 for i, v in enumerate(mylist[:-1]): if mylist[i+1] != v + 1: mystr += ["%d,"%v,"%d-%d,"%(start,v)][start!=v] start = mylist[i+1] return mystr[:-1]
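For reference, the groupby idea from the first answer can also be written with enumerate instead of count(): within a run of consecutive integers, value minus index is constant, so it serves directly as the grouping key. A sketch (the function name is my own):

```python
from itertools import groupby

def range_string(nums):
    out = []
    # value - index is constant within each consecutive run,
    # so it works as the groupby key
    for _, grp in groupby(enumerate(nums), lambda pair: pair[1] - pair[0]):
        run = [v for _, v in grp]
        out.append(str(run[0]) if len(run) == 1
                   else "%d-%d" % (run[0], run[-1]))
    return ",".join(out)

print(range_string([0, 1, 2, 4, 8]))  # 0-2,4,8
```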
Pythonic way to convert a list of integers into a string of comma-separated ranges
I have a list of integers which I need to parse into a string of ranges. For example: [0, 1, 2, 3] -> "0-3" [0, 1, 2, 4, 8] -> "0-2,4,8" And so on. I'm still learning more pythonic ways of handling lists, and this one is a bit difficult for me. My latest thought was to create a list of lists which keeps track of paired numbers: [ [0, 3], [4, 4], [5, 9], [20, 20] ] I could then iterate across this structure, printing each sub-list as either a range, or a single value. I don't like doing this in two iterations, but I can't seem to keep track of each number within each iteration. My thought would be to do something like this: Here's my most recent attempt. It works, but I'm not fully satisfied; I keep thinking there's a more elegant solution which completely escapes me. The string-handling iteration isn't the nicest, I know -- it's pretty early in the morning for me :) def createRangeString(zones): rangeIdx = 0 ranges = [[zones[0], zones[0]]] for zone in list(zones): if ranges[rangeIdx][1] in (zone, zone-1): ranges[rangeIdx][1] = zone else: ranges.append([zone, zone]) rangeIdx += 1 rangeStr = "" for range in ranges: if range[0] != range[1]: rangeStr = "%s,%d-%d" % (rangeStr, range[0], range[1]) else: rangeStr = "%s,%d" % (rangeStr, range[0]) return rangeStr[1:] Is there a straightforward way I can merge this into a single iteration? What else could I do to make it more Pythonic?
[ ">>> from itertools import count, groupby\n>>> L=[1, 2, 3, 4, 6, 7, 8, 9, 12, 13, 19, 20, 22, 23, 40, 44]\n>>> G=(list(x) for _,x in groupby(L, lambda x,c=count(): next(c)-x))\n>>> print \",\".join(\"-\".join(map(str,(g[0],g[-1])[:len(g)])) for g in G)\n1-4,6-9,12-13,19-20,22-23,40,44\n\nThe idea here is to pair ea...
[ 22, 3, 1, 1, 0, 0, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0003429510_list_python.txt
Q: Given a list of words, make a subset of phrases with them What is the best way performance wise to take a list of words and turn them into phrases in python. words = ["hey","there","stack","overflow"] print magicFunction(words) >>> ["hey","there","stack","overflow", "hey there stack","hey there", "there stack overflow","there stack", "stack overflow", "hey there stack overflow" ] Order doesnt matter.... UPDATE: Should have been more specific, the words have to be consecutive, as in the list as in my example print out. So we could have "hey there" but not "hey stack" A: I think something like this will work, although I don't have access to python at the moment. def magic_function(words): for start in range(len(words)): for end in range(start + 1, len(words) + 1): yield " ".join(words[start:end]) A: import itertools # Adapted from Python Cookbook 2nd Ed. 19.7. def windows(iterable, length=2, overlap=0): """ Return an iterator over overlapping windows of length <length> of <iterable>. """ it = iter(iterable) results = list(itertools.islice(it, length)) while len(results) == length: yield results results = results[length-overlap:] results.extend(itertools.islice(it, length-overlap)) def magic_function(seq): return [' '.join(window) for n in range(len(words)) for window in windows(seq, n + 1, n)] Results: >>> words = ["hey","there","stack","overflow"] >>> print magic_function(words) ['hey', 'there', 'stack', 'overflow', 'hey there', 'there stack', 'stack overflow', 'hey there stack', 'there stack overflow', 'hey there stack overflow'] A: This will work, seems reasonably efficient. def magicFunction(words): phrases = [] start = 0 end = 0 for i in xrange(1, len(words) + 1): start = 0 end = i while (end <= len(words)): phrases.append(" ".join(words[start:end])) start += 1 end += 1 return phrases
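The answers above all amount to emitting every consecutive slice words[i:j]; a compact comprehension form of the same idea (a sketch — the function name is mine):

```python
def phrases(words):
    # every consecutive run words[i:j], joined with spaces
    return [" ".join(words[i:j])
            for i in range(len(words))
            for j in range(i + 1, len(words) + 1)]

print(phrases(["hey", "there", "stack"]))
# ['hey', 'hey there', 'hey there stack', 'there', 'there stack', 'stack']
```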
Given a list of words, make a subset of phrases with them
What is the best way, performance-wise, to take a list of words and turn them into phrases in Python? words = ["hey","there","stack","overflow"] print magicFunction(words) >>> ["hey","there","stack","overflow", "hey there stack","hey there", "there stack overflow","there stack", "stack overflow", "hey there stack overflow" ] Order doesn't matter. UPDATE: I should have been more specific: the words have to be consecutive, as in my example printout. So we could have "hey there" but not "hey stack".
[ "I think something like this will work, although I don't have access to python at the moment.\ndef magic_function(words):\n for start in range(len(words)):\n for end in range(start + 1, len(words) + 1):\n yield \" \".join(words[start:end])\n\n", "import itertools\n\n# Adapted from Python Cookbook 2nd Ed....
[ 2, 1, 0 ]
[]
[]
[ "list", "python", "string" ]
stackoverflow_0003685805_list_python_string.txt
Q: how to set a namespace prefix in an attribute value using the lxml? I'm trying to create XML Schema using lxml. For the begining something like this: <xs:schema xmlns="http://www.goo.com" xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" targetNamespace="http://www.goo.com"> <xs:element type="xs:string" name="name"/> <xs:element type="xs:positiveInteger" name="age"/> </xs:schema> I've done it this way - by putthing xs: before value, but I think it could be done better. def schema(): SCHEMA_NAMESPACE = "http://www.w3.org/2001/XMLSchema" XS = "{%s}" % SCHEMA_NAMESPACE NSMAP = {None: "http://www.goo.com"} schema = etree.Element(XS+"schema", nsmap = NSMAP, targetNamespace="http://www.goo.com", elementFormDefault="qualified") element = etree.Element(XS+"element", attrib = {"name" : "name", "type" : "xs:string"}) schema.append(element) element = etree.Element(XS+"element", attrib = {"name" : "age", "type" : "xs:positiveInteger"}) schema.append(element) return etree.tostring(schema, pretty_print=True) Can it be written somehow better? A: Somewhat as an aside, you need to include "xs": SCHEMA_NAMESPACE or such in your NSMAP -- otherwise nothing in your generated XML actually maps the 'xs' prefix to correct namespace. That will also allow you to just specify your element names with prefixes; e.g. "xs:element". As far as your main question, I think this is probably fine, as long as you always use the same prefix-to-namespace mapping everywhere, such as with a global NSMAP. If you're processing XML with potentially arbitrary namespace prefixes, then make sure to either: add your nsmap with the 'xs' prefix to every element you create; or use the _Element.nsmap attribute to get the namespace map of the parent attribute, invert it, and look up the appropriate prefix in the inverted map. 
An example of the latter: SCHEMA_NAMESPACE = "http://www.w3.org/2001/XMLSchema" def add_element(schema): nsmap = schema.nsmap nsrmap = dict([(uri, prefix) for prefix, uri in nsmap.items()]) prefix = nsrmap[SCHEMA_NAMESPACE] xs = lambda name: "%s:%s" % (prefix, name) element = schema.makeelement(xs("element"), nsmap=nsmap, attrib={'name': 'age', 'type': xs('string')}) schema.append(element) return etree.tostring(schema, pretty_print=True) But that's probably overkill for most cases.
how to set a namespace prefix in an attribute value using the lxml?
I'm trying to create an XML Schema using lxml. For the beginning, something like this: <xs:schema xmlns="http://www.goo.com" xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" targetNamespace="http://www.goo.com"> <xs:element type="xs:string" name="name"/> <xs:element type="xs:positiveInteger" name="age"/> </xs:schema> I've done it this way - by putting xs: before the value, but I think it could be done better. def schema(): SCHEMA_NAMESPACE = "http://www.w3.org/2001/XMLSchema" XS = "{%s}" % SCHEMA_NAMESPACE NSMAP = {None: "http://www.goo.com"} schema = etree.Element(XS+"schema", nsmap = NSMAP, targetNamespace="http://www.goo.com", elementFormDefault="qualified") element = etree.Element(XS+"element", attrib = {"name" : "name", "type" : "xs:string"}) schema.append(element) element = etree.Element(XS+"element", attrib = {"name" : "age", "type" : "xs:positiveInteger"}) schema.append(element) return etree.tostring(schema, pretty_print=True) Can it be written somehow better?
[ "Somewhat as an aside, you need to include \"xs\": SCHEMA_NAMESPACE or such in your NSMAP -- otherwise nothing in your generated XML actually maps the 'xs' prefix to correct namespace. That will also allow you to just specify your element names with prefixes; e.g. \"xs:element\".\nAs far as your main question, I t...
[ 2 ]
[]
[]
[ "lxml", "namespaces", "prefix", "python" ]
stackoverflow_0003685374_lxml_namespaces_prefix_python.txt
Q: Python compatability issue - 'in ' requires character as left operand I make no claims to know anything at all about writing Python scripts or programming in general, but I've been tasked with writing one anyway that will have to operate on a wide variety of Python versions. I wrote, and did my testing on versions 2.3, 2.4 and all went well. Version 2.2 though is giving me fits and my google-fu must be weak because I'm coming up short with an answer. The section of code that's giving me problems: commandnum = 0 for commandloop in arguments[0:]: if "java" in arguments[commandnum][0]: for argloop in arguments[commandnum]: if "Dcatalina.home" in argloop: instance = argloop.split("/") elif "-Xms" in argloop: xms = argloop.split("Xms") elif "-Xmx" in argloop: xmx = argloop.split("Xmx") elif "-XX:MaxPermSize" in argloop: permsize = argloop.split("=") elif "Dcatalina.base" in argloop: home = argloop.split("=") appdir = home[-1]+"/webapps" warfiles = [] for war in os.listdir(appdir): if fnmatch.fnmatch(war, '*.war'): warfiles.append(war) thefiles = ",".join(warfiles) try: instance except NameError: instance = None if instance is not None: print "%s %s %s %s %s %s" % (instance[-2], xms[-1], xmx[-1], permsize[-1], appdir, thefiles) commandnum = commandnum + 1 I understand that this is UGLY and probably a BAD thing, but the error that I'm getting is in the string matching using 'in'. From what I gather by googling, in Python 2.2 you're limited to one character for string matching using 'in'. What is the equivalent way in 2.2 to match on a string? Upgrading machines to a modern version of Python is out of the question. Any help/abuse is appreciated. Try to be gentle as this is the first thing of any size I've written outside of shell scripts and MS Basic on my TRS-80 CoCo in 1981. A: Try using str.find, which returns -1 if the substring cannot be found. 
>>> s = "The quick brown fox" >>> s.find("The") 0 >>> s.find("brown") 10 >>> s.find("waffles") -1 A: You can always match using regular expressions. It's been a while since I used 2.2, so I can't recall if re is available, or if you need to go back to regex, but I'm sure the docs on the python site will go back far enough. Update: it looks like re is available.
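The str.find workaround can be wrapped in a small helper so the calling code reads the same on every interpreter version (a sketch; the helper name is made up):

```python
def contains(haystack, needle):
    # str.find returns the index of the first match, or -1 if absent;
    # unlike `needle in haystack`, this also works on Python 2.2,
    # where the left operand of `in` had to be a single character.
    return haystack.find(needle) != -1

print(contains("-XX:MaxPermSize=256m", "MaxPermSize"))  # True
print(contains("-Xms512m", "Dcatalina.home"))           # False
```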
Python compatibility issue - 'in <string>' requires character as left operand
I make no claims to know anything at all about writing Python scripts or programming in general, but I've been tasked with writing one anyway that will have to operate on a wide variety of Python versions. I wrote, and did my testing on versions 2.3, 2.4 and all went well. Version 2.2 though is giving me fits and my google-fu must be weak because I'm coming up short with an answer. The section of code that's giving me problems: commandnum = 0 for commandloop in arguments[0:]: if "java" in arguments[commandnum][0]: for argloop in arguments[commandnum]: if "Dcatalina.home" in argloop: instance = argloop.split("/") elif "-Xms" in argloop: xms = argloop.split("Xms") elif "-Xmx" in argloop: xmx = argloop.split("Xmx") elif "-XX:MaxPermSize" in argloop: permsize = argloop.split("=") elif "Dcatalina.base" in argloop: home = argloop.split("=") appdir = home[-1]+"/webapps" warfiles = [] for war in os.listdir(appdir): if fnmatch.fnmatch(war, '*.war'): warfiles.append(war) thefiles = ",".join(warfiles) try: instance except NameError: instance = None if instance is not None: print "%s %s %s %s %s %s" % (instance[-2], xms[-1], xmx[-1], permsize[-1], appdir, thefiles) commandnum = commandnum + 1 I understand that this is UGLY and probably a BAD thing, but the error that I'm getting is in the string matching using 'in'. From what I gather by googling, in Python 2.2 you're limited to one character for string matching using 'in'. What is the equivalent way in 2.2 to match on a string? Upgrading machines to a modern version of Python is out of the question. Any help/abuse is appreciated. Try to be gentle as this is the first thing of any size I've written outside of shell scripts and MS Basic on my TRS-80 CoCo in 1981.
[ "Try using str.find, which returns -1 if the substring cannot be found.\n>>> s = \"The quick brown fox\"\n>>> s.find(\"The\")\n0\n>>> s.find(\"brown\")\n10\n>>> s.find(\"waffles\")\n-1\n\n", "You can always match using regular expressions. It's been a while since I used 2.2, so I can't recall if re is available, ...
[ 3, 0 ]
[]
[]
[ "python" ]
stackoverflow_0003686125_python.txt
Q: Python bytecode compiler; removes unnecessary variables? Given the following: def foo(): x = a_method_returning_a_long_list() y = a_method_which_filters_a_list(x) return y will Python's bytecode compiler keep x & y in memory, or is it clever enough to reduce it to the following? def foo(): return a_method_which_filters_a_list(a_method_returning_a_long_list()) A: It keeps x and y in memory: import dis dis.dis(foo) 2 0 LOAD_GLOBAL 0 (a_method_returning_a_long_list) 3 CALL_FUNCTION 0 6 STORE_FAST 0 (x) 3 9 LOAD_GLOBAL 1 (a_method_which_filters_a_list) 12 LOAD_FAST 0 (x) 15 CALL_FUNCTION 1 18 STORE_FAST 1 (y) 4 21 LOAD_FAST 1 (y) 24 RETURN_VALUE The whole operation is quite efficient, as it is done using the LOAD_FAST and STORE_FAST codes. As Roadrunner-EX remarks in one of the comments, the amount of memory used by your two versions of foo is basically the same, as x and y are just references (i.e., pointers) to the results. A: In [1]: import dis In [2]: def f(): ...: x = f1() ...: y = f2(x) ...: return y ...: In [3]: dis.dis(f) 2 0 LOAD_GLOBAL 0 (f1) 3 CALL_FUNCTION 0 6 STORE_FAST 0 (x) 3 9 LOAD_GLOBAL 1 (f2) 12 LOAD_FAST 0 (x) 15 CALL_FUNCTION 1 18 STORE_FAST 1 (y) 4 21 LOAD_FAST 1 (y) 24 RETURN_VALUE So it looks like both variables are held separately. A: I'm not certain, but I would guess it would keep them in memory, for 2 reasons. First, it's probably more effort than its worth to do that. There wouldn't be a huge performance change either way. And second, the variables x and y are probably themselves taking up memory (in the form of pointers/references), which the compiler would not touch, due to the explicit nature of the assignment.
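On a modern CPython the same conclusion can be checked programmatically (a sketch — the helper names inside foo are undefined placeholders, which is harmless because the function is only disassembled, never called):

```python
import dis

def foo():
    x = make_long_list()      # placeholder globals; foo is never called
    y = filter_list(x)
    return y

ops = [ins.opname for ins in dis.get_instructions(foo)]
# The compiler keeps the temporaries: x and y are stored into and
# loaded from local slots rather than being eliminated.
print(any(op.startswith("STORE_FAST") for op in ops))  # True
```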
Python bytecode compiler; removes unnecessary variables?
Given the following: def foo(): x = a_method_returning_a_long_list() y = a_method_which_filters_a_list(x) return y will Python's bytecode compiler keep x & y in memory, or is it clever enough to reduce it to the following? def foo(): return a_method_which_filters_a_list(a_method_returning_a_long_list())
[ "It keeps x and y in memory:\nimport dis\ndis.dis(foo)\n 2 0 LOAD_GLOBAL 0 (a_method_returning_a_long_list)\n 3 CALL_FUNCTION 0\n 6 STORE_FAST 0 (x)\n\n 3 9 LOAD_GLOBAL 1 (a_method_which_filters_a_list)\n ...
[ 3, 2, 0 ]
[]
[]
[ "bytecode", "python" ]
stackoverflow_0003686101_bytecode_python.txt
Q: Couldn't close file in functional way in python3.1? I wrote a line of code using lambda to close a list of file objects in python2.6: map(lambda f: f.close(), files) It works, but doesn't in python3.1. Why? Here is my test code: import sys files = [sys.stdin, sys.stderr] for f in files: print(f.closed) # False in 2.6 & 3.1 map(lambda o : o.close(), files) for f in files: print(f.closed) # True in 2.6 but False in 3.1 for f in files: f.close() for f in files: print(f.closed) # True in 2.6 & 3.1 A: map returns a list in Python 2, but an iterator in Python 3. So the files will be closed only if you iterate over the result. Never apply map or similar "functional" functions to functions with side effects. Python is not a functional language, and will never be. Use a for loop: for o in files: o.close() A: Because map in Python 3 is a lazy iterator. Quoting the docs: Return an iterator that applies function to every item of iterable, yielding the results. E.g. in Python 2, map(f, seq) equivalent to [f(i) for i in seq], but in Python 3, it's (f(i) for i in seq) - subtly different syntax, but very different semantics. To make the map variant work, you'd need to consume the iterator. Ergo, it's simpler (and more idiomatic: map, comprehension and generators shouldn't have side effects!) to use an explicit for-loop.
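The laziness is easy to demonstrate without touching the sys streams, using in-memory file objects (a sketch with io.StringIO):

```python
import io

files = [io.StringIO("a"), io.StringIO("b")]

# Python 3's map() builds a lazy iterator; the lambda has not run yet.
lazy = map(lambda f: f.close(), files)
print(any(f.closed for f in files))   # False

# Consuming the iterator runs the side effects -- but a plain
# for loop is the idiomatic way to express this.
list(lazy)
print(all(f.closed for f in files))   # True
```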
Couldn't close file in functional way in python3.1?
I wrote a line of code using lambda to close a list of file objects in python2.6: map(lambda f: f.close(), files) It works, but doesn't in python3.1. Why? Here is my test code: import sys files = [sys.stdin, sys.stderr] for f in files: print(f.closed) # False in 2.6 & 3.1 map(lambda o : o.close(), files) for f in files: print(f.closed) # True in 2.6 but False in 3.1 for f in files: f.close() for f in files: print(f.closed) # True in 2.6 & 3.1
[ "map returns a list in Python 2, but an iterator in Python 3. So the files will be closed only if you iterate over the result.\nNever apply map or similar \"functional\" functions to functions with side effects. Python is not a functional language, and will never be. Use a for loop:\nfor o in files:\n o.close()\...
[ 6, 4 ]
[]
[]
[ "functional_programming", "python", "python_3.x" ]
stackoverflow_0003686244_functional_programming_python_python_3.x.txt
Q: Compensate for Auto White Balance with OpenCV I'm working on an app that takes in webcam data, applies various transformations, blurs and then does a background subtraction and threshold filter. It's a type of optical touch screen retrofitting system (the design is so different that tbeta/touchlib can't be used). The camera's white balance is screwing up the threshold filter by brightening everything whenever a user's hand is seen and darkening when it leaves, causing one of those to exhibit immense quantities of static. Is there a good way to counteract it? Is taking a corner, assuming it's constant and adjusting the rest of the image's brightness so that it stays constant a good idea? A: You could try interfacing your camera through DirectShow and turn off Auto White Balance through your code or you could try first with the camera software deployed with it. It often gives you ability to do certain modifications as white balance and similar stuff.
Compensate for Auto White Balance with OpenCV
I'm working on an app that takes in webcam data, applies various transformations, blurs and then does a background subtraction and threshold filter. It's a type of optical touch screen retrofitting system (the design is so different that tbeta/touchlib can't be used). The camera's white balance is screwing up the threshold filter by brightening everything whenever a user's hand is seen and darkening when it leaves, causing one of those to exhibit immense quantities of static. Is there a good way to counteract it? Is taking a corner, assuming it's constant and adjusting the rest of the image's brightness so that it stays constant a good idea?
[ "You could try interfacing your camera through DirectShow and turn off Auto White Balance through your code or you could try first with the camera software deployed with it. It often gives you ability to do certain modifications as white balance and similar stuff.\n" ]
[ 1 ]
[]
[]
[ "background_subtraction", "opencv", "python", "touchscreen", "webcam" ]
stackoverflow_0003680829_background_subtraction_opencv_python_touchscreen_webcam.txt
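The corner-normalization idea at the end of the question can be sketched with plain NumPy (the patch location, its size, and the 8-bit range are all assumptions; in the real app the frame would come from the OpenCV capture):

```python
import numpy as np

def normalize_by_patch(frame, ref_mean, patch=(slice(0, 8), slice(0, 8))):
    """Scale the whole frame so a (presumed static) corner patch keeps
    the mean brightness captured at startup, countering auto gain/AWB."""
    current = frame[patch].mean()
    if current == 0:
        return frame  # avoid dividing by zero on an all-black patch
    gain = ref_mean / current
    return np.clip(frame.astype(np.float64) * gain, 0, 255).astype(np.uint8)

# Example: a frame that the camera brightened to 100 gets pulled back
# toward the reference level of 50 sampled at startup.
frame = np.full((32, 32), 100, dtype=np.uint8)
print(normalize_by_patch(frame, 50.0).mean())  # 50.0
```

This only compensates for a global gain change; if the camera also shifts color balance per channel, the same scaling would need to be applied per channel.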
Q: Why can't django load multiple pages simultaneously? I have a django application with an admin panel. When I add some item (it takes about 10 seconds to add it), I can't load any other page. The page is waiting for the first page to load, and then it loads itself. A: Are you using the development server? It's single-threaded by design. You'll need to run your Django app in a real web server (like Apache) to load pages simultaneously. A: As Bob points out, the devserver/runserver is single-threaded, but if you want to, there is a multi-threaded local dev server option
Why can't django load multiple pages simultaneously?
I have a django application with an admin panel. When I add some item (it takes about 10 seconds to add it), I can't load any other page. The page is waiting for the first page to load, and then it loads itself.
[ "Are you using the development server? It's single-threaded by design. You'll need to run your Django app in a real web server (like Apache) to load pages simultaneously.\n", "As Bob points out, the devserver/runserver is single-threaded, but if you want to, there is a multi-threaded local dev server option\n" ]
[ 5, 2 ]
[]
[]
[ "django", "load", "python", "simultaneous" ]
stackoverflow_0003686209_django_load_python_simultaneous.txt
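The single-threaded behaviour isn't Django-specific; the same distinction exists in the standard library, where it comes down to one mixin (plain `http.server` here, not Django's runserver, but the mechanism is the same):

```python
import socketserver
from http.server import HTTPServer, ThreadingHTTPServer

# HTTPServer serves one request at a time, so a slow handler blocks
# every other client -- exactly the behaviour described in the question.
print(issubclass(HTTPServer, socketserver.ThreadingMixIn))           # False

# ThreadingHTTPServer mixes in ThreadingMixIn and handles each request
# in its own thread, like a production server would.
print(issubclass(ThreadingHTTPServer, socketserver.ThreadingMixIn))  # True
```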
Q: Including mercurial extensions from eggs Is there some way to import an extension from an .egg file? For example hggit installs itself as hg_git-0.2.4-py2.5.egg, which cannot be listed under [extensions] directly, or it's interpreted as a standard .py file. Is there some way to include that file as an extension? Alternatively, is there some way to install hg-git manually in a way that doesn't create .egg file, but an unpacked directory? A: if the egg is installed on your Python module path (aka: you easy_installed it), just do: name_of_extension= in the extensions part of your .hgrc
Including mercurial extensions from eggs
Is there some way to import an extension from an .egg file? For example hggit installs itself as hg_git-0.2.4-py2.5.egg, which cannot be listed under [extensions] directly, or it's interpreted as a standard .py file. Is there some way to include that file as an extension? Alternatively, is there some way to install hg-git manually in a way that doesn't create .egg file, but an unpacked directory?
[ "if the egg is installed on your Python module path (aka: you easy_installed it), just do:\nname_of_extension=\n\nin the extensions part of your .hgrc\n" ]
[ 1 ]
[]
[]
[ "distutils", "egg", "hgrc", "mercurial", "python" ]
stackoverflow_0003686256_distutils_egg_hgrc_mercurial_python.txt
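Concretely, assuming the egg is on the Python path (i.e. it was easy_installed) and the extension's module is importable as `hggit` — the exact module name may differ between versions — the `.hgrc` entry would look like:

```ini
[extensions]
hggit =
```

Leaving the right-hand side empty tells Mercurial to import the extension by module name from the Python path; a filesystem path after the `=` is only needed when the module is not importable.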
Q: How can textmate make my python (pylons) development easier? I have textmate, but honestly the only thing I can do with it is simply edit a file. The handy little file browser is also useful. (how can I show/hide that file browser anyhow!) But I have no other knowledge/tricks up my sleeve, care to help me out? A: If you look under the Bundles menu in TextMate there is a Python-specific sub-menu that exposes a bunch of helpful things like syntax checking, script debugging, insertion of oft used code blocks, manual look ups and so on. Most of them are bound to keyboard shortcuts (or can be bound if they are not). Also, under the Bundles are sort of general-to-code or general-to-text-editing tasks in sub-menus. You can set up templates for new file creation that let you start new files with all the little bits and pieces you like to see in new files (copyright notice, author, SCC tags, etc.) See the File -> New From Template -> Edit Templates... menu option to do that. It ships with 4 Python templates already. Finally, that browser is called the Project Drawer. View -> Show Project Drawer to get it to show up. It'll only be available when the window you're viewing is a project window, not a single document window. A: So the killer "app" of TextMate is the bundle community. These are collections of commands, snippets etc that third parties have created. Say you're using Mako as your template language. Google "TextMate Mako" and you'll find this article on a Mako bundle for TextMate. Google around for some of the other tools you use: I don't think there's one for Pylons specifically (nor Turbogears).. but hey, you could always start one... And there's bound to be a TextMate bundle for some of the other Python tools you use (use the PEP8 checker? There's a bundle for that, for instance...)
How can textmate make my python (pylons) development easier?
I have textmate, but honestly the only thing I can do with it is simply edit a file. The handy little file browser is also useful. (how can I show/hide that file browser anyhow!) But I have no other knowledge/tricks up my sleeve, care to help me out?
[ "If you look under the Bundles menu in TextMate there is a Python-specific sub-menu that exposes a bunch of helpful things like syntax checking, script debugging, insertion of oft used code blocks, manual look ups and so on. Most of them are bound to keyboard shortcuts (or can be bound if they are not).\nAlso, unde...
[ 2, 0 ]
[]
[]
[ "pylons", "python", "textmate" ]
stackoverflow_0003678221_pylons_python_textmate.txt
Q: how to connect HAL using dbus I'm using python and dbus. What I really need is a way to get the input from my microphone into my python program and then play it back from the program. I googled a lot and it seems pyaudio might do the trick but pyaudio does not work with my ubuntu 10.04. The next option I saw was telepathy. But I don't need something that big, either. Seeing how telepathy works over dbus, I figured this might be the way to go. Unfortunately I'm unable to connect to the Hardware Abstraction Layer and use it to get the input from my microphone. Is there any way to do this, or should I seek elsewhere? A: This is really not related to HAL or D-Bus at all. Telepathy's definitely not the answer: it's an IM framework. :) If I were you, I'd look at GStreamer, which is the standard multimedia framework on the Linux desktop, via the pygst binding. You'll want to use the gconfaudiosrc element to pull audio from the default microphone, and send it to gconfaudiosink. To check that this works, run gst-launch-0.10 gconfaudiosrc ! gconfaudiosink in a terminal: you should hear everything you say into your microphone echoed out of your speakers. This blog post by the Internet's Jono Bacon might be a good starting point. You could try modifying it to use gconfaudiosrc rather than filesrc, decodebin and audioconvert. You could also take a look at this tutorial; the GStreamer Application Development Manual is a lot more detailed.
how to connect HAL using dbus
I'm using python and dbus. What I really need is a way to get the input from my microphone into my python program and then play it back from the program. I googled a lot and it seems pyaudio might do the trick but pyaudio does not work with my ubuntu 10.04. The next option I saw was telepathy. But I don't need something that big, either. Seeing how telepathy works over dbus, I figured this might be the way to go. Unfortunately I'm unable to connect to the Hardware Abstraction Layer and use it to get the input from my microphone. Is there any way to do this, or should I seek elsewhere?
[ "This is really not related to HAL or D-Bus at all. Telepathy's definitely not the answer: it's an IM framework. :) If I were you, I'd look at GStreamer, which is the standard multimedia framework on the Linux desktop, via the pygst binding.\nYou'll want to use the gconfaudiosrc element to pull audio from the defau...
[ 1 ]
[]
[]
[ "dbus", "linux", "microphone", "python", "ubuntu" ]
stackoverflow_0003665490_dbus_linux_microphone_python_ubuntu.txt
Q: Is there a good planning GUI component (widget) for Python? I'm working on a scheduling app and looking for a calendar, timeline or other planning related GUI component for Python. Are you aware of any? A: Have a look at PyQt. It has a calendar widget and the wrapper allows you to modify the rendering of the calendar. A: Your question is not really clear so I can't know your needs but, maybe, you should check faces, a powerful and free project management tool that you "program" in python. A: wxPython has two, CalendarCtrl and Calendar. The former offers a more "native" feel, at least on Windows. If you download the wxPython Demo program you can see both in action.
Is there a good planning GUI component (widget) for Python?
I'm working on a scheduling app and looking for a calendar, timeline or other planning related GUI component for Python. Are you aware of any?
[ "Have a look at PyQt. It has a calendar widget and the wrapper allows you to modify the rendering of the calendar.\n", "Your question is not really clear so I can't know your needs but, maybe, you should check faces, a powerful and free project management tool that you \"program\" in python.\n", "wxPython has t...
[ 2, 1, 0 ]
[]
[]
[ "components", "python", "user_interface", "widget" ]
stackoverflow_0003684908_components_python_user_interface_widget.txt
Q: Creating restricted permutations of a list of items by category I am trying to create a number of restricted permutations of a list of items. Each item has a category, and I need to find combinations of items such that each combination does not have multiple items from the same category. To illustrate, here's some sample data: Name | Category ==========|========== 1. Orange | fruit 2. Apple | fruit 3. GI-Joe | toy 4. VCR | electronics 5. Racquet | sporting goods The combinations would be restricted to length three, I don't need every combination of every length. So a set of combinations for the above list could be: (Orange, GI-Joe, VCR) (Orange, GI-Joe, Racquet) (Orange, VCR, Racquet) (Apple, GI-Joe, VCR) (Apple, GI-Joe, Racquet) ... and so on. I do this fairly often, on various lists. The lists will never be more than 40 items in length, but understandably that could create thousands of combinations (though there will likely be around 10 unique categories per each list, restricting it somewhat) I've come up with some pseudo-python for how I would implement it recursively. It's been too long since I took combinatorics, but from what I recall this is essentially a subset of the combinations of the set, something like C(list length, desired size). There's likely some library modules which can make this cleaner (or at least more performant) I was wondering if perhaps there was a better approach than what I've got (perhaps one which uses itertools.combinations somehow): # For the sake of this problem, let's assume the items are hashable so they # can be added to a set. def combinate(items, size=3): assert size >=2, "You jerk, don't try it." 
def _combinate(index, candidate): if len(candidate) == size: results.add(candidate) return candidate_cats = set(x.category for x in candidate) for i in range(index, len(items)): item = items[i] if item.category not in candidate_cats: _combinate(i, candidate + (item, )) results = set() for i, item in enumerate(items[:(1-size)]): _combinate(i, (item, )) return results A: Naive approach: #!/usr/bin/env python import itertools items = { 'fruits' : ('Orange', 'Apple'), 'toys' : ('GI-Joe', ), 'electronics' : ('VCR', ), 'sporting_goods' : ('Racquet', ) } def combinate(items, size=3): if size > len(items): raise Exception("Lower the `size` or add more products, dude!") for cats in itertools.combinations(items.keys(), size): cat_items = [[products for products in items[cat]] for cat in cats] for x in itertools.product(*cat_items): yield zip(cats, x) if __name__ == '__main__': for x in combinate(items): print x Will yield: # ==> # # [('electronics', 'VCR'), ('toys', 'GI-Joe'), ('sporting_goods', 'Racquet')] # [('electronics', 'VCR'), ('toys', 'GI-Joe'), ('fruits', 'Orange')] # [('electronics', 'VCR'), ('toys', 'GI-Joe'), ('fruits', 'Apple')] # [('electronics', 'VCR'), ('sporting_goods', 'Racquet'), ('fruits', 'Orange')] # [('electronics', 'VCR'), ('sporting_goods', 'Racquet'), ('fruits', 'Apple')] # [('toys', 'GI-Joe'), ('sporting_goods', 'Racquet'), ('fruits', 'Orange')] # [('toys', 'GI-Joe'), ('sporting_goods', 'Racquet'), ('fruits', 'Apple')] A: What you seek to generate is the Cartesian product of elements taken from set of category. The partitioning into multiple sets is relatively easy: item_set[category].append(item) With proper instantiation (e.g.) collections.defaultdict for item_set[category] and then itertools.product will give you the desired output.
Creating restricted permutations of a list of items by category
I am trying to create a number of restricted permutations of a list of items. Each item has a category, and I need to find combinations of items such that each combination does not have multiple items from the same category. To illustrate, here's some sample data: Name | Category ==========|========== 1. Orange | fruit 2. Apple | fruit 3. GI-Joe | toy 4. VCR | electronics 5. Racquet | sporting goods The combinations would be restricted to length three, I don't need every combination of every length. So a set of combinations for the above list could be: (Orange, GI-Joe, VCR) (Orange, GI-Joe, Racquet) (Orange, VCR, Racquet) (Apple, GI-Joe, VCR) (Apple, GI-Joe, Racquet) ... and so on. I do this fairly often, on various lists. The lists will never be more than 40 items in length, but understandably that could create thousands of combinations (though there will likely be around 10 unique categories per each list, restricting it somewhat) I've come up with some pseudo-python for how I would implement it recursively. It's been too long since I took combinatorics, but from what I recall this is essentially a subset of the combinations of the set, something like C(list length, desired size). There's likely some library modules which can make this cleaner (or at least more performant) I was wondering if perhaps there was a better approach than what I've got (perhaps one which uses itertools.combinations somehow): # For the sake of this problem, let's assume the items are hashable so they # can be added to a set. def combinate(items, size=3): assert size >=2, "You jerk, don't try it." def _combinate(index, candidate): if len(candidate) == size: results.add(candidate) return candidate_cats = set(x.category for x in candidate) for i in range(index, len(items)): item = items[i] if item.category not in candidate_cats: _combinate(i, candidate + (item, )) results = set() for i, item in enumerate(items[:(1-size)]): _combinate(i, (item, )) return results
[ "Naive approach:\n#!/usr/bin/env python\n\nimport itertools\n\nitems = {\n 'fruits' : ('Orange', 'Apple'),\n 'toys' : ('GI-Joe', ),\n 'electronics' : ('VCR', ),\n 'sporting_goods' : ('Racquet', )\n}\n\ndef combinate(items, size=3):\n if size > len(items):\n raise Exception(\"Lower the `size` o...
[ 2, 1 ]
[]
[]
[ "combinatorics", "python", "set" ]
stackoverflow_0003686521_combinatorics_python_set.txt
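The two answers can be folded into a single generator with itertools; the sketch below assumes the items arrive as `(name, category)` pairs — a guess at the input shape — and reproduces the combinations from the question's sample data:

```python
import itertools
from collections import defaultdict

def combinate(items, size=3):
    """Yield every `size`-item combination that contains at most one
    item per category.  `items` is an iterable of (name, category) pairs."""
    by_cat = defaultdict(list)
    for name, cat in items:
        by_cat[cat].append(name)
    # Choose `size` distinct categories, then one item from each chosen one.
    for cats in itertools.combinations(list(by_cat), size):
        for combo in itertools.product(*(by_cat[c] for c in cats)):
            yield combo

items = [("Orange", "fruit"), ("Apple", "fruit"), ("GI-Joe", "toy"),
         ("VCR", "electronics"), ("Racquet", "sporting goods")]
# 7 results: 3 category triples that include fruit (x2 fruits) plus 1 without
print(len(list(combinate(items))))  # 7
```

Because the work is organized per category rather than per item, a 40-item list with ~10 categories stays cheap: the recursion over individual items is replaced by C(10, 3) category choices plus one product per choice.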
Q: Twisted: why is it that passing a deferred callback to a deferred thread makes the thread blocking all of a sudden? I unsuccessfully tried using txredis (the non blocking twisted api for redis) for a persisting message queue I'm trying to set up with a scrapy project I am working on. I found that although the client was not blocking, it became much slower than it could have been because what should have been one event in the reactor loop was split up into thousands of steps. So instead, I tried making use of redis-py (the regular blocking twisted api) and wrapping the call in a deferred thread. It works great, however I want to perform an inner deferred when I make a call to redis as I would like to set up connection pooling in attempts to speed things up further. Below is my interpretation of some sample code taken from the twisted docs for a deferred thread to illustrate my use case: #!/usr/bin/env python from twisted.internet import reactor,threads from twisted.internet.task import LoopingCall import time def main_loop(): print 'doing stuff in main loop.. do not block me!' def aBlockingRedisCall(): print 'doing lookup... this may take a while' time.sleep(10) return 'results from redis' def result(res): print res def main(): lc = LoopingCall(main_loop) lc.start(2) d = threads.deferToThread(aBlockingRedisCall) d.addCallback(result) reactor.run() if __name__=='__main__': main() And here is my alteration for connection pooling that makes the code in the deferred thread blocking : #!/usr/bin/env python from twisted.internet import reactor,defer from twisted.internet.task import LoopingCall import time def main_loop(): print 'doing stuff in main loop.. do not block me!' def aBlockingRedisCall(x): if x<5: #all connections are busy, try later print '%s is less than 5, get a redis client later' % x x+=1 d = defer.Deferred() d.addCallback(aBlockingRedisCall) reactor.callLater(1.0,d.callback,x) return d else: print 'got a redis client; doing lookup.. 
this may take a while' time.sleep(10) # this is now blocking.. any ideas? d = defer.Deferred() d.addCallback(gotFinalResult) d.callback(x) return d def gotFinalResult(x): return 'final result is %s' % x def result(res): print res def aBlockingMethod(): print 'going to sleep...' time.sleep(10) print 'woke up' def main(): lc = LoopingCall(main_loop) lc.start(2) d = defer.Deferred() d.addCallback(aBlockingRedisCall) d.addCallback(result) reactor.callInThread(d.callback, 1) reactor.run() if __name__=='__main__': main() So my question is, does anyone know why my alteration causes the deferred thread to be blocking and/or can anyone suggest a better solution? A: Well, as the twisted docs say: Deferreds do not make the code magically not block Whenever you're using blocking code, such as sleep, you have to defer it to a new thread. #!/usr/bin/env python from twisted.internet import reactor,defer, threads from twisted.internet.task import LoopingCall import time def main_loop(): print 'doing stuff in main loop.. do not block me!' def aBlockingRedisCall(x): if x<5: #all connections are busy, try later print '%s is less than 5, get a redis client later' % x x+=1 d = defer.Deferred() d.addCallback(aBlockingRedisCall) reactor.callLater(1.0,d.callback,x) return d else: print 'got a redis client; doing lookup.. this may take a while' def getstuff( x ): time.sleep(3) return "stuff is %s" % x # getstuff is blocking, so you need to push it to a new thread d = threads.deferToThread(getstuff, x) d.addCallback(gotFinalResult) return d def gotFinalResult(x): return 'final result is %s' % x def result(res): print res def aBlockingMethod(): print 'going to sleep...' 
time.sleep(10) print 'woke up' def main(): lc = LoopingCall(main_loop) lc.start(2) d = defer.Deferred() d.addCallback(aBlockingRedisCall) d.addCallback(result) reactor.callInThread(d.callback, 1) reactor.run() if __name__=='__main__': main() In case the redis api is not very complex it might be more natural to rewrite it using twisted.web, instead of just calling the blocking api in a lot threads. A: There's also an up-to-date Redis client for twisted which already supports the new protocol and features of Redis 2.x. You should definetely give it a try. It's called txredisapi. For the persistent message queue, I'd recommend RestMQ. A redis-based message queue system built on top of cyclone and txredisapi. http://github.com/gleicon/restmq Cheers A: On a related note, you could probably gain a lot by using a Redis client created specifically for Twisted, such as this one: http://github.com/deldotdr/txRedis
Twisted: why is it that passing a deferred callback to a deferred thread makes the thread blocking all of a sudden?
I unsuccessfully tried using txredis (the non blocking twisted api for redis) for a persisting message queue I'm trying to set up with a scrapy project I am working on. I found that although the client was not blocking, it became much slower than it could have been because what should have been one event in the reactor loop was split up into thousands of steps. So instead, I tried making use of redis-py (the regular blocking twisted api) and wrapping the call in a deferred thread. It works great, however I want to perform an inner deferred when I make a call to redis as I would like to set up connection pooling in attempts to speed things up further. Below is my interpretation of some sample code taken from the twisted docs for a deferred thread to illustrate my use case: #!/usr/bin/env python from twisted.internet import reactor,threads from twisted.internet.task import LoopingCall import time def main_loop(): print 'doing stuff in main loop.. do not block me!' def aBlockingRedisCall(): print 'doing lookup... this may take a while' time.sleep(10) return 'results from redis' def result(res): print res def main(): lc = LoopingCall(main_loop) lc.start(2) d = threads.deferToThread(aBlockingRedisCall) d.addCallback(result) reactor.run() if __name__=='__main__': main() And here is my alteration for connection pooling that makes the code in the deferred thread blocking : #!/usr/bin/env python from twisted.internet import reactor,defer from twisted.internet.task import LoopingCall import time def main_loop(): print 'doing stuff in main loop.. do not block me!' def aBlockingRedisCall(x): if x<5: #all connections are busy, try later print '%s is less than 5, get a redis client later' % x x+=1 d = defer.Deferred() d.addCallback(aBlockingRedisCall) reactor.callLater(1.0,d.callback,x) return d else: print 'got a redis client; doing lookup.. this may take a while' time.sleep(10) # this is now blocking.. any ideas? 
d = defer.Deferred() d.addCallback(gotFinalResult) d.callback(x) return d def gotFinalResult(x): return 'final result is %s' % x def result(res): print res def aBlockingMethod(): print 'going to sleep...' time.sleep(10) print 'woke up' def main(): lc = LoopingCall(main_loop) lc.start(2) d = defer.Deferred() d.addCallback(aBlockingRedisCall) d.addCallback(result) reactor.callInThread(d.callback, 1) reactor.run() if __name__=='__main__': main() So my question is, does anyone know why my alteration causes the deferred thread to be blocking and/or can anyone suggest a better solution?
[ "Well, as the twisted docs say:\n\nDeferreds do not make the code\n magically not block\n\nWhenever you're using blocking code, such as sleep, you have to defer it to a new thread.\n#!/usr/bin/env python\nfrom twisted.internet import reactor,defer, threads\nfrom twisted.internet.task import LoopingCall\nimport tim...
[ 12, 3, 0 ]
[]
[]
[ "multithreading", "python", "redis", "twisted" ]
stackoverflow_0002466000_multithreading_python_redis_twisted.txt
Q: twisted: difference between `defer.execute` and `threads.deferToThread` What is the difference between defer.execute() and threads.deferToThread() in twisted? Both take the same arguments - a function, and parameters to call it with - and return a deferred which will be fired with the result of calling the function. The threads version explicitly states that it will be run in a thread. However, if the defer version doesn't, then what would ever be the point of calling it? Code that runs in the reactor should never block, so any function it calls would have to not block. At that point, you could just do defer.succeed(f(*args, **kwargs)) instead of defer.execute(f, args, kwargs) with the same results. A: defer.execute does indeed execute the function in a blocking manner, in the same thread and you are correct in that defer.execute(f, args, kwargs) does the same as defer.succeed(f(*args, **kwargs)) except that defer.execute will return a callback that has had the errback fired if function f throws an exception. Meanwhile, in your defer.succeed example, if the function threw an exception, it would propagate outwards, which may not be desired. For ease of understanding, I'll just paste the source of defer.execute here: def execute(callable, *args, **kw): """Create a deferred from a callable and arguments. Call the given function with the given arguments. Return a deferred which has been fired with its callback as the result of that invocation or its errback with a Failure for the exception thrown. """ try: result = callable(*args, **kw) except: return fail() else: return succeed(result) In other words, defer.execute is just a shortcut to take a blocking function's result as a deferred which you can then add callbacks/errbacks to. The callbacks will be fired with normal chaining semantics. It seems a bit crazy, but Deferreds can 'fire' before you add callbacks and the callbacks will still be called. So to answer your question, why is this useful? 
Well, defer.execute is useful both for testing / mocking as well as simply integrating an async api with synchronous code. Also useful is defer.maybeDeferred which calls the function and then if the function already returns a deferred simply returns it, else functions similar to defer.execute. This is useful for when you write an API which expects a callable that when called gives you a deferred, and you want to be able to accept normal blocking functions as well. For example, say you had an application which fetched pages and did things with it. And, for some reason, you needed to run this in a synchronous fashion for a specific use case, like in a single-shot crontab script, or in response to a request in a WSGI application, but still keep the same codebase. If your code looked like this, it could be done: from twisted.internet import defer from twisted.web.client import getPage def process_feed(url, getter=getPage): d = defer.maybeDeferred(getter, url) d.addCallback(_process_feed) def _process_feed(result): pass # do something with result here To run this in a synchronous context, without the reactor, you could just pass an alternate getter function, like so: from urllib2 import urlopen def synchronous_getter(url): resp = urlopen(url) result = resp.read() resp.close() return result
twisted: difference between `defer.execute` and `threads.deferToThread`
What is the difference between defer.execute() and threads.deferToThread() in twisted? Both take the same arguments - a function, and parameters to call it with - and return a deferred which will be fired with the result of calling the function. The threads version explicitly states that it will be run in a thread. However, if the defer version doesn't, then what would ever be the point of calling it? Code that runs in the reactor should never block, so any function it calls would have to not block. At that point, you could just do defer.succeed(f(*args, **kwargs)) instead of defer.execute(f, args, kwargs) with the same results.
[ "defer.execute does indeed execute the function in a blocking manner, in the same thread and you are correct in that defer.execute(f, args, kwargs) does the same as defer.succeed(f(*args, **kwargs)) except that defer.execute will return a callback that has had the errback fired if function f throws an exception. M...
[ 9 ]
[]
[]
[ "deferred_execution", "multithreading", "python", "twisted" ]
stackoverflow_0003686608_deferred_execution_multithreading_python_twisted.txt
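The error-handling difference the answer describes can be shown without Twisted at all; `Result` below is a toy stand-in for a Deferred (illustrative only, not Twisted's API):

```python
class Result:
    """Toy stand-in for a Deferred: holds either a value or an error."""
    def __init__(self, value=None, error=None):
        self.value, self.error = value, error

def succeed(value):
    return Result(value=value)

def execute(func, *args, **kwargs):
    # Like defer.execute: a raised exception becomes a failed result
    # instead of propagating to the caller.
    try:
        return Result(value=func(*args, **kwargs))
    except Exception as exc:
        return Result(error=exc)

def boom():
    raise ValueError("nope")

r = execute(boom)
print(type(r.error).__name__)   # ValueError

# succeed(boom()) never builds a Result: boom() raises before succeed runs.
try:
    succeed(boom())
except ValueError:
    print("propagated")         # propagated
```

That is the whole point of `defer.execute` over `defer.succeed(f(...))`: the exception is captured and routed to the errback chain rather than blowing up at the call site.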
Q: Fresh solr instance for every hudson test build I'm building a test suite for a python site, powered by hudson. Currently, the workflow for a test run looks like: Pull down the latest version from the repository. Create a new mysql db and import schema file and some fixture data. Run tests, largely powered by webtest, which means not needing to run a web server. Delete mysql db. This pattern is similar to how Django handles tests. I'd like to replicate this pattern for solr; I have a test copy of the schema.xml file in my repository and want to prop up a new solr instance with an empty index at the start of each test and nuke it when done. The schema.xml file could change (much like the mysql schema), which is why it needs to be rebuilt from scratch each time (in addition to good testing hygiene). I'm finding the solr documentation to be fairly limited in this regard. I'm fine with running solr under jetty, which should simplify matters, but I'm at a loss for how to dynamically create a new solr instance or at least a fresh core on every deployment. A: Use the Solr Admin API to create a new core.
Fresh solr instance for every hudson test build
I'm building a test suite for a python site, powered by hudson. Currently, the workflow for a test run looks like: Pull down the latest version from the repository. Create a new mysql db and import schema file and some fixture data. Run tests, largely powered by webtest, which means not needing to run a web server. Delete mysql db. This pattern is similar to how Django handles tests. I'd like to replicate this pattern for solr; I have a test copy of the schema.xml file in my repository and want to prop up a new solr instance with an empty index at the start of each test and nuke it when done. The schema.xml file could change (much like the mysql schema), which is why it needs to be rebuilt from scratch each time (in addition to good testing hygiene). I'm finding the solr documentation to be fairly limited in this regard. I'm fine with running solr under jetty, which should simplify matters, but I'm at a loss for how to dynamically create a new solr instance or at least a fresh core on every deployment.
[ "Use the Solr Admin API to create a new core.\n" ]
[ 0 ]
[]
[]
[ "hudson", "python", "solr", "webob" ]
stackoverflow_0003686798_hudson_python_solr_webob.txt
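For the setup/teardown steps of the Hudson job, one low-tech sketch is to hit the CoreAdmin handler over HTTP. The `/solr/admin/cores` path below is the stock handler location and the parameter names follow the CoreAdmin API, but both are assumptions to verify against your `solr.xml`:

```python
from urllib.parse import urlencode

def core_admin_url(base, action, **params):
    """Build a Solr CoreAdmin request URL, to be fetched (with urllib,
    requests, etc.) from the test suite's setup/teardown hooks."""
    query = urlencode(dict(action=action, **params))
    return "%s/solr/admin/cores?%s" % (base.rstrip("/"), query)

# Hypothetical core name for one test run; CREATE at setup, UNLOAD at teardown.
create = core_admin_url("http://localhost:8983", "CREATE",
                        name="test_core", instanceDir="test_core")
unload = core_admin_url("http://localhost:8983", "UNLOAD", name="test_core")
```

Since the schema.xml can change between builds, the `instanceDir` handed to CREATE would be a fresh directory populated from the repository's test copy of the schema on each run.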
Q: What does data="@/some/path" mean in Python? This is from some code I'm looking at... I think it's some sort of special format string that loads the file at the path into a binary string assigned to data, but I'm not sure as when I try to replicate it all I get is a standard string. Or is it actually a standard string and I'm reading too much into it? A: It's actually just a string.
What does data="@/some/path" mean in Python?
This is from some code I'm looking at... I think it's some sort of special format string that loads the file at the path into a binary string assigned to data, but I'm not sure as when I try to replicate it all I get is a standard string. Or is it actually a standard string and I'm reading too much into it?
[ "It's actually just a string.\n" ]
[ 4 ]
[]
[]
[ "binary", "format_string", "python", "string" ]
stackoverflow_0003686920_binary_format_string_python_string.txt
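A quick check confirms the answer; the comment about curl below is a guess at where the convention comes from, not something Python enforces:

```python
data = "@/some/path"

# It's an ordinary str: the "@" is just the first character, with no
# special meaning to Python.  (Tools such as curl treat a leading "@"
# as "read this value from a file", which is likely the convention
# being imitated by whatever consumes this string.)
print(type(data).__name__)  # str
print(data[0])              # @
```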
Q: How to connect to Cassandra inside a Pylons app? I created a new Pylons project, and would like to use Cassandra as my database server. I plan on using Pycassa to be able to use cassandra 0.7beta. Unfortunately, I don't know where to instantiate the connection to make it available in my application. The goal would be to: Create a pool when the application is launched Get a connection from the pool for each request, and make it available to my controllers and libraries (in the context of the request). The best would be to get a connection from the pool "lazily", i.e. only if needed If a connection has been used, release it when the request has been processed Additionally, is there something important I should know about it? When I see some comments like "Be careful when using a QueuePool with use_threadlocal=True, especially with retries enabled. Synchronization may be required to prevent the connection from changing while another thread is using it.", what does it mean exactly? Thanks. -- Pierre A: Well. I worked a little more. In fact, using a connection manager was probably not a good idea as this should be the template context. Additionally, opening a connection for each thread is not really a big deal. Opening a connection per request would be. I ended up with just pycassa.connect_thread_local() in app_globals, and there I go. A: Okay. I worked a little, I learned a lot, and I found a possible answer. Creating the pool The best place to create the pool seems to be in the app_globals.py file, which is basically a container for objects which will be accessible "throughout the life of the application". Exactly what I want for a pool, in fact.
I just added at the end of the file my init code, which takes settings from the pylons configuration file : """Creating an instance of the Pycassa Pool""" kwargs = {} # Parsing servers if 'cassandra.servers' in config['app_conf']: servers = config['app_conf']['cassandra.servers'].split(',') if len(servers): kwargs['server_list'] = servers # Parsing timeout if 'cassandra.timeout' in config['app_conf']: try: kwargs['timeout'] = float(config['app_conf']['cassandra.timeout']) except: pass # Finally creating the pool self.cass_pool = pycassa.QueuePool(keyspace='Keyspace1', **kwargs) I could have done better, like moving that in a function, or supporting more parameters (pool size, ...). Which I'll do. Getting a connection at each request Well. There seems to be the simple way : in the file base.py, adding something like c.conn = g.cass_pool.get() before calling WSGIController, something like c.conn.return_to_pool() after. This is simple, and works. But this gets a connection from the pool even when it's not required by the controller. I have to dig a little deeper. Creating a connection manager I had the simple idea to create a class which would be instantiated at each request in the base.py file, and which would automatically grab a connection from the pool when requested (and release it after). 
This is a really simple class : class LocalManager: '''Requests a connection from a Pycassa Pool when needed, and releases it at the end of the object's life''' def __init__(self, pool): '''Class constructor''' assert isinstance(pool, Pool) self._pool = pool self._conn = None def get(self): '''Grabs a connection from the pool if not already done, and returns it''' if self._conn is None: self._conn = self._pool.get() return self._conn def __getattr__(self, key): '''It's cooler to write "c.conn" than "c.get()" in the code, isn't it?''' if key == 'conn': return self.get() else: return self.__dict__[key] def __del__(self): '''Releases the connection, if needed''' if not self._conn is None: self._conn.return_to_pool() Just added c.cass = CassandraLocalManager(g.cass_pool) before calling WSGIController in base.py, del(c.cass) after, and I'm all done. And it works : conn = c.cass.conn cf = pycassa.ColumnFamily(conn, 'TestCF') print cf.get('foo') \o/ I don't know if this is the best way to do this. If not, please let me know =) Plus, I still did not understand the "synchronization" part in Pycassa source code. If it is needed in my case, and what should I do to avoid problems. Thanks.
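The LocalManager class above can also be written as a context manager, which guarantees the connection goes back to the pool even if the request handler raises. This is only a sketch: it assumes the pycassa pool interface shown above (pool.get() and conn.return_to_pool()), and the FakePool/FakeConn classes are illustrative stand-ins, not real pycassa objects.

```python
from contextlib import contextmanager

@contextmanager
def pooled_connection(pool):
    """Borrow a connection from the pool and always return it."""
    conn = pool.get()
    try:
        yield conn
    finally:
        conn.return_to_pool()

# Stand-in objects so the sketch can be exercised without Cassandra.
class FakeConn(object):
    def __init__(self):
        self.returned = False
    def return_to_pool(self):
        self.returned = True

class FakePool(object):
    def __init__(self):
        self.conn = FakeConn()
    def get(self):
        return self.conn

pool = FakePool()
with pooled_connection(pool) as conn:
    pass  # e.g. pycassa.ColumnFamily(conn, 'TestCF') would go here
```

In base.py this would replace the manual c.cass = CassandraLocalManager(...) / del(c.cass) pair with a single with block around the controller call.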
How to connect to Cassandra inside a Pylons app?
I created a new Pylons project, and would like to use Cassandra as my database server. I plan on using Pycassa to be able to use cassandra 0.7beta. Unfortunately, I don't know where to instantiate the connection to make it available in my application. The goal would be to : Create a pool when the application is launched Get a connection from the pool for each request, and make it available to my controllers and libraries (in the context of the request). The best would be to get a connexion from the pool "lazily", i.e. only if needed If a connexion has been used, release it when the request has been processed Additionally, is there something important I should know about it ? When I see some comments like "Be careful when using a QueuePool with use_threadlocal=True, especially with retries enabled. Synchronization may be required to prevent the connection from changing while another thread is using it.", what does it mean exactly ? Thanks. -- Pierre
[ "Well. I worked a little more. In fact, using a connection manager was probably not a good idea as this should be the template context. Additionally, opening a connection for each thread is not really a big deal. Opening a connection per request would be.\nI ended up with just pycassa.connect_thread_local() in app_...
[ 2, 1 ]
[]
[]
[ "cassandra", "pylons", "python" ]
stackoverflow_0003671535_cassandra_pylons_python.txt
Q: Convert DD/MM/YYYY HH:MM:SS into MySQL TIMESTAMP I would like a simple way to find and reformat text of the format 'DD/MM/YYYY' into 'YYYY/MM/DD' to be compatible with MySQL TIMESTAMPs, in a list of text items that may or may not contain a date at all, under Python. (I'm thinking RegEx?) Basically I am looking for a way to inspect a list of items and correct any timestamp formats found. Great thing about standards is that there are so many to choose from.... A: If you're using the MySQLdb (also known as "mysql-python") module, for any datetime or timestamp field you can provide a datetime type instead of a string. This is the type that is returned, also and is the preferred way to provide the value. For Python 2.5 and above, you can do: from datetime import datetime value = datetime.strptime(somestring, "%d/%m/%Y") For older versions of python, it's a bit more verbose, but not really a big issue. import time from datetime import datetime timetuple = time.strptime(somestring, "%d/%m/%Y") value = datetime(*timetuple[:6]) The various format-strings are taken directly from what's accepted by your C library. Look up man strptime on unix to find other acceptable format values. Not all of the time formats are portable, but most of the basic ones are. Note datetime values can contain timezones. I do not believe MySQL knows exactly what to do with these, though. The datetimes I make above are usually considered as "naive" datetimes. If timezones are important, consider something like the pytz library. A: You can read the string into a datetime object and then output it back as a string using a different format. For e.g. >>> from datetime import datetime >>> datetime.strptime("31/12/2009", "%d/%m/%Y").strftime("%Y/%m/%d") '2009/12/31' Basically i am looking for a way to inspect a list of items and correct any timestamp formats found. If the input format is inconsistent, can vary, then you are better off with dateutil.
>>> from dateutil.parser import parse >>> parse("31/12/2009").strftime("%Y/%m/%d") '2009/12/31' Dateutil can handle a lot of input formats automatically. To operate on a list you can map the a wrapper over the parse function over the list and convert the values appropriately.
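Combining the regex idea from the question with strptime from the first answer, the "inspect a list and fix any dates found" routine might look like the sketch below (the function name and pattern are illustrative, not from either answer):

```python
import re
from datetime import datetime

DATE_RE = re.compile(r'\b(\d{2})/(\d{2})/(\d{4})\b')

def fix_timestamps(items):
    """Rewrite DD/MM/YYYY substrings as YYYY/MM/DD, leaving other text alone."""
    def repl(match):
        # An impossible match like 99/99/2009 would raise ValueError here;
        # real code might want to catch that and leave the text unchanged.
        d = datetime.strptime(match.group(0), "%d/%m/%Y")
        return d.strftime("%Y/%m/%d")
    return [DATE_RE.sub(repl, item) for item in items]
```

Items that contain no date pass through untouched, so the whole list can be run through the function safely.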
Convert DD/MM/YYYY HH:MM:SS into MySQL TIMESTAMP
I would like a simple way to find and reformat text of the format 'DD/MM/YYYY' into 'YYYY/MM/DD' to be compatible with MySQL TIMESTAMPs, in a list of text items that may or may not contain a date at all, under Python. (I'm thinking RegEx?) Basically I am looking for a way to inspect a list of items and correct any timestamp formats found. Great thing about standards is that there are so many to choose from....
[ "If you're using the MySQLdb (also known as \"mysql-python\") module, for any datetime or timestamp field you can provide a datetime type instead of a string. This is the type that is returned, also and is the preferred way to provide the value.\nFor Python 2.5 and above, you can do:\nfrom datetime import datetime\...
[ 1, 1 ]
[]
[]
[ "datetime", "mysql", "python", "regex", "timestamp" ]
stackoverflow_0003687223_datetime_mysql_python_regex_timestamp.txt
Q: How do I cascade deletes to multiple tables in SqlAlchemy? I have a table with several dependent tables that I want cascade delete. I'm having problems with it cascading too far. Some code will help explain. class Map(Base): .... #One to many relationship between the Map and Tile. #Each Map is made up of many tiles tiles = relationship('Tile', lazy='joined', backref='map', cascade="all, delete") class Tile(Base): .... #Reference to Map class. map_id = Column(Integer, ForeignKey('maps.id'), nullable=False) #Reference to graphics for this tile #This is a many to one relationship, each Graphic is used by many Tiles graphics_id = Column(Integer, ForeignKey("graphics.id"), nullable=False) graphics = relationship("Graphic", uselist=False) class Graphic(Base): .... #Nothing special here The problem is that when I delete Map Graphics is also deleted which is not what I want. I assume this is to do with cascade. How do I fix this so that deleting the Map class will delete the Tiles but not the Graphics? A: Your code (unless some details were omitted) should work as you expect it to: Graphics should not be deleted. As can be seen in relationship, the default cascade parameter is save-update, merge, which should not trigger the delete if you were to delete a Map. To test, please create a routine that creates a Map, a Tile and a Graphic; then delete a Map and check if the Graphic is deleted - I would not expect this to happen. If the conjecture is correct, then your Graphic object must be deleted because of some other relationship. I am referring to SA version 0.64, but I believe that the default configuration was not different also in earlier version. A: I got it working by changing graphics = relationship("Graphic", uselist=False) to graphics = relationship("Graphic", uselist=False, lazy='joined', backref=backref('tiles', cascade="all, delete, delete-orphan")) I'm not sure that this is perfect answer but it works.
How do I cascade deletes to multiple tables in SqlAlchemy?
I have a table with several dependent tables that I want cascade delete. I'm having problems with it cascading too far. Some code will help explain. class Map(Base): .... #One to many relationship between the Map and Tile. #Each Map is made up of many tiles tiles = relationship('Tile', lazy='joined', backref='map', cascade="all, delete") class Tile(Base): .... #Reference to Map class. map_id = Column(Integer, ForeignKey('maps.id'), nullable=False) #Reference to graphics for this tile #This is a many to one relationship, each Graphic is used by many Tiles graphics_id = Column(Integer, ForeignKey("graphics.id"), nullable=False) graphics = relationship("Graphic", uselist=False) class Graphic(Base): .... #Nothing special here The problem is that when I delete Map Graphics is also deleted which is not what I want. I assume this is to do with cascade. How do I fix this so that deleting the Map class will delete the Tiles but not the Graphics?
[ "Your code (unless some details were omitted) should work as you expect it to:\nGraphics should not be deleted. As can be seen in relationship, the default cascade parameter is save-update, merge, which should not trigger the delete if you were to delete a Map.\nTo test, please create a routine that creates a Map, ...
[ 0, 0 ]
[]
[]
[ "orm", "python", "sqlalchemy" ]
stackoverflow_0003679601_orm_python_sqlalchemy.txt
Q: Reading Text with Accent - Python I did some script in python that connects to GMAIL and print a email text... But, often my emails has words with "accent". And there is my problem... For example a text that I got: "PLANO DE S=C3=9ADE" should be printed as "PLANO DE SAÚDE". How can I turn legible my email text? What can I use to convert theses letters with accent? Thanks, The code suggested by Andrey, works fine on windows, but on Linux I still getting the wrong print: >>> b = 'PLANO DE S=C3=9ADE' >>> s = b.decode('quopri').decode('utf-8') >>> print s PLANO DE SÃDE Rafael, Thanks, you are correct about the word, it was misspelled. But the problem still the same here. Another example: CORRECT WORD: obersevação >>> b = 'Observa=C3=A7=C3=B5es' >>> s = b.decode('quopri').decode('utf-8') >>> print s Observações I am using Debian with UTF-8 locale: >>> :~$ locale LANG=en_US.UTF-8 Andrey, Thanks for your time. I agree with your explanation, but still with same problem here. Take look in my test: s='Observa=C3=A7=C3=B5es' s2= s.decode('quopri').decode('utf-8') >>> print s Observa=C3=A7=C3=B5es >>> print s2 Observações >>> import locale >>> ENCODING = locale.getpreferredencoding() >>> print s.encode(ENCODING) Observa=C3=A7=C3=B5es >>> print s2.encode(ENCODING) Observações >>> print ENCODING UTF-8 A: This encoding is called Quoted-printable. In your example, you have a string (Python's unicode) encoded in UTF-8 bytes (Python's str) encoded in quoted printable bytes. So the right way to get a string value is: >>> b = 'PLANO DE S=C3=9ADE' >>> s = b.decode('quopri').decode('utf-8') >>> print s PLANO DE SÚDE Update: There might be some issues with the console conding though. s holds a fully correct Unicode string value (of Python type unicode). But when you use the print statement, the value must be converted to bytes (Python's str) in order to be written to OS file descriptor number 1 (the standard output pipe). 
So the print statement implementation checks your console encoding, then makes some guesses and prints the results. In fact, in Python 2 the results will be different for printing from the interactive shell, running your process non-interactively and running your process while redirecting the output to a file. The best way to output encoded strings in Python 2 is not agreed upon. Two ways that make most sense are: 1) Use locale's encoding guess and manually encode strings. import locale ENCODING = locale.getpreferredencoding() print s.encode(ENCODING) 2) Use an encoding option (command-line, hard-coded or whatever). from getopt import getopt ENCODING = 'UTF-8' opts, args = getopt(sys.argv[1:], '', ['encoding=']) for opt, arg in opts: if opt == '--encoding': ENCODING = arg print s.encode(ENCODING) Update 2: If nothing helps and you still sure that your console encoding and font are set to UTF-8, then try this: import sys, os ENCODING = 'UTF-8' stdout = os.fdopen(sys.stdout.fileno(), 'wb') s = u'привет' # Don't forget to use a Unicode literal staring with u'' stdout.write(s.encode(ENCODING)) At this point you must see the Russian word привет in cyrillic character set in your console :) If this is the case, then you should use this binary stdout instead of normal sys.stdout. A: Your string is wrong, look: 'PLANO DE S=C3=9ADE' == 'PLANO DE S\xc3\x9aDE' Where is the missing "A" in SAÚDE? If you decode 'PLANO DE S=C3=9ADE' as a quoted-printable, you will get only 'PLANO DE SÚDE'. Running this code here on linux (Ubuntu 9.10): >>> b = 'PLANO DE S=C3=9ADE' >>> s = b.decode('quopri').decode('utf-8') >>> print s PLANO DE SÚDE
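For reference, str.decode('quopri') only exists in Python 2; in Python 3 the same decoding is done with the standard-library quopri module, for example:

```python
import quopri

def decode_qp_utf8(raw):
    """Decode quoted-printable bytes, then the underlying UTF-8."""
    return quopri.decodestring(raw).decode('utf-8')

print(decode_qp_utf8(b'PLANO DE S=C3=9ADE'))     # PLANO DE SÚDE
print(decode_qp_utf8(b'Observa=C3=A7=C3=B5es'))  # Observações
```

In Python 3 the result is already a proper str, and print handles the console encoding itself, so the manual .encode(ENCODING) step discussed above is no longer needed.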
Reading Text with Accent - Python
I did some script in python that connects to GMAIL and print a email text... But, often my emails has words with "accent". And there is my problem... For example a text that I got: "PLANO DE S=C3=9ADE" should be printed as "PLANO DE SAÚDE". How can I turn legible my email text? What can I use to convert theses letters with accent? Thanks, The code suggested by Andrey, works fine on windows, but on Linux I still getting the wrong print: >>> b = 'PLANO DE S=C3=9ADE' >>> s = b.decode('quopri').decode('utf-8') >>> print s PLANO DE SÃDE Rafael, Thanks, you are correct about the word, it was misspelled. But the problem still the same here. Another example: CORRECT WORD: obersevação >>> b = 'Observa=C3=A7=C3=B5es' >>> s = b.decode('quopri').decode('utf-8') >>> print s Observações I am using Debian with UTF-8 locale: >>> :~$ locale LANG=en_US.UTF-8 Andrey, Thanks for your time. I agree with your explanation, but still with same problem here. Take look in my test: s='Observa=C3=A7=C3=B5es' s2= s.decode('quopri').decode('utf-8') >>> print s Observa=C3=A7=C3=B5es >>> print s2 Observações >>> import locale >>> ENCODING = locale.getpreferredencoding() >>> print s.encode(ENCODING) Observa=C3=A7=C3=B5es >>> print s2.encode(ENCODING) Observações >>> print ENCODING UTF-8
[ "This encoding is called Quoted-printable. In your example, you have a string (Python's unicode) encoded in UTF-8 bytes (Python's str) encoded in quoted printable bytes. So the right way to get a string value is:\n>>> b = 'PLANO DE S=C3=9ADE'\n>>> s = b.decode('quopri').decode('utf-8')\n>>> print s\nPLANO DE SÚDE\n...
[ 4, 0 ]
[]
[]
[ "diacritics", "linux", "python", "quoted_printable", "utf_8" ]
stackoverflow_0003680352_diacritics_linux_python_quoted_printable_utf_8.txt
Q: PHP or Python for creating webcharts after calculations? Is it a good idea to code such a script in Python and in which language is handy for fast performance and useful libraries/frameworks for charts?(charts would be created after calculating an expression which is input from the user) EDIT:It's a web server-side script A: I'm not exactly sure what you mean by "charts", but if you mean plotting/creating graphs, perhaps you should look at R, a free software environment for statistical computing and graphics. It has good graphics capabilities, and can connect to many environments, including Python. A: For Python - check matplotlib - it should do everything you need to do, including outputting to PNG, JPEG, etc. A: It's reasonably straightforward to generate graphics on the fly in python using the reportlab library http://www.reportlab.com/software/opensource/. It includes functionality to create bar, line, pie, etc. charts from lists of data. Don't be misled by the emphasis on generating PDFs, the library also can create just PNG or GIF images. Also, the official documentation is very verbose and intimidating, but it's quite accessible once you actually start coding. The following page explains the whole process for Django, but the same approach would be applicable to any framework: http://code.djangoproject.com/wiki/Charts. In particular, note that you can build a graphic in-memory and return it as an HttpResponse. It's quite fast. I use something like the following in one of my apps: def my_chart(request): response = HttpResponse( my_function_to_make_a_chart().asString('png'), 'image/png', ) return response That django view would be associated with a URL that you would embed directly into your HTML document as an tag: <img src="/my_site/my_chart/" alt="A cool chart" />
PHP or Python for creating webcharts after calculations?
Is it a good idea to code such a script in Python, and which language is handy for fast performance and useful libraries/frameworks for charts? (Charts would be created after calculating an expression which is input from the user.) EDIT: It's a web server-side script
[ "I'm not exactly sure what you mean by \"charts\", but if you mean plotting/creating graphs, perhaps you should look at R, a free software environment for statistical computing and graphics. It has good graphics capabilities, and can connect to many environments, including Python.\n", "For Python - check matplot...
[ 1, 1, 0 ]
[]
[]
[ "charts", "php", "python" ]
stackoverflow_0003687252_charts_php_python.txt
Q: How to change a tuple into array in Python? Let's say I have a tuple t = (1,2,3,4). What's the simple way to change it into an array? I can do something like this, array = [] for i in t: array.append(i) But I prefer something like x.toArray() or something. A: If you want to convert a tuple to a list (as you seem to want) use this: >>> t = (1, 2, 3, 4) # t is the tuple (1, 2, 3, 4) >>> l = list(t) # l is the list [1, 2, 3, 4] In addition I would advise against using tuple as the name of a variable.
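As the answer notes, list() is the usual conversion; if an actual typed array is wanted, the standard-library array module also accepts a tuple directly. A small sketch:

```python
import array

t = (1, 2, 3, 4)

lst = list(t)              # plain Python list
arr = array.array('i', t)  # compact typed array of C ints
back = tuple(lst)          # converting back is just as easy
```

For most Python code the list is what you want; array.array only pays off for large homogeneous numeric data.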
How to change a tuple into array in Python?
Let's say I have a tuple t = (1,2,3,4). What's the simple way to change it into an array? I can do something like this, array = [] for i in t: array.append(i) But I prefer something like x.toArray() or something.
[ "If you want to convert a tuple to a list (as you seem to want) use this:\n>>> t = (1, 2, 3, 4) # t is the tuple (1, 2, 3, 4)\n>>> l = list(t) # l is the list [1, 2, 3, 4]\n\nIn addition I would advise against using tuple as the name of a variable.\n" ]
[ 64 ]
[]
[]
[ "arrays", "python", "tuples" ]
stackoverflow_0003687702_arrays_python_tuples.txt
Q: Is there anything wrong with creating a Python Pickle powered website? I have been toying with this idea for quite a while now, but haven't seen any information on people doing it. I have a small website project where I need to load and modify 1 object. This object is pretty simple, and shouldn't be more than a few kb. Instead of running a DB for this small amount of data, why not just use pickle and/or shelve to save this data, and load it? I am planning on using a micro web framework like Bottle or Flask for the project. Are there any reasons to not use this method to load the data? It will only load the pickle file at the time Apache starts up, so I don't think speed will be affected (faster than querying a db). Thanks for any input! A: I wouldn't write a pickled string to a file directly. There are too many low-level details to worry about. Check out Durus, ZODB, or this post from FriendFeed about storing Python objects in MySQL. Don't discard relational databases, though, they give you a lot of bang right out of the box (even for simple projects). A: There is no reason why you can't implement object persistence via the standard Python pickle or shelve modules. Just make sure your objects are cleanly and securely picklable. Scalability may become a concern if your site grows beyond your current scope, but until then your idea should work just fine. If that day comes, the next obvious step would be to consider using Python's excellent SQLite module that comes pre-packaged with recent versions of the language. A: In addition to the concurrency issues you are already aware of, you also must ensure that the file is always in a consistent state. For example, if the server crashes in the middle of writing the file, what happens then? It's a case you need to consider and implement a solution for if you go this route.
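On the crash-consistency point raised in the last answer: a common pattern is to write the pickle to a temporary file and rename it into place, so readers only ever see a complete file. A minimal sketch (it does not handle concurrent writers — a file lock would still be needed for that):

```python
import os
import pickle
import tempfile

def save_atomic(obj, path):
    """Pickle obj to path via a temp file + rename, so a crash
    mid-write never leaves a half-written file behind."""
    # Create the temp file in the same directory so the rename is atomic.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
    try:
        with os.fdopen(fd, 'wb') as f:
            pickle.dump(obj, f)
        os.replace(tmp, path)  # atomic on POSIX and Windows (Python 3.3+)
    except BaseException:
        os.unlink(tmp)
        raise

def load(path):
    with open(path, 'rb') as f:
        return pickle.load(f)
```

With this, a crash during save_atomic leaves either the old file or the new one on disk, never a truncated pickle.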
Is there anything wrong with creating a Python Pickle powered website?
I have been toying with this idea for quite a while now, but haven't seen any information on people doing it. I have a small website project where I need to load and modify 1 object. This object is pretty simple, and shouldn't be more than a few kb. Instead of running a DB for this small amount of data, why not just use pickle and/or shelve to save this data, and load it? I am planning on using a micro web framework like Bottle or Flask for the project. Are there any reasons to not use this method to load the data? It will only load the pickle file at the time Apache starts up, so I don't think speed will be affected (faster than querying a db). Thanks for any input!
[ "I wouldn't write a pickled string to a file directly. There are too many low-level details to worry about. Check out Durus, ZODB, or this post from FriendFeed about storing Python objects in MySQL.\nDon't discard relational databases, though, they give you a lot of bang right out of the box (even for simple proj...
[ 3, 2, 1 ]
[]
[]
[ "flask", "pickle", "python", "shelve" ]
stackoverflow_0003681922_flask_pickle_python_shelve.txt
Q: Why do we need connector.commit() after execution? I have some SQLite/Python code that runs the query command as follows. def queryDB(self, command_): self.cursor.execute(command_) self.connector.commit() # <---- ??? ... it works pretty well, but I have some questions. Why is connector.commit() needed? What does it do? What does cursor.execute() do? A: Per this website: http://www.amk.ca/python/writing/DB-API.html "For databases that support transactions, the Python interface silently starts a transaction when the cursor is created. The commit() method commits the updates made using that cursor, and the rollback() method discards them. Each method then starts a new transaction. Some databases don't have transactions, but simply apply all changes as they're executed. On these databases, commit() does nothing, but you should still call it in order to be compatible with those databases that do support transactions."
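A minimal stand-alone illustration with the standard-library sqlite3 module (an in-memory database, so nothing from the question's code is needed): execute() runs a statement inside the implicit transaction, commit() makes the changes permanent, and rollback() discards uncommitted ones.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE items (name TEXT)')
cur.execute("INSERT INTO items VALUES ('widget')")
conn.commit()  # ends the implicit transaction; the INSERT is now permanent

cur.execute("INSERT INTO items VALUES ('gadget')")
conn.rollback()  # discards the uncommitted 'gadget' INSERT

cur.execute('SELECT name FROM items')
rows = cur.fetchall()  # only the committed 'widget' row survives
```

With an on-disk database, forgetting commit() also means other connections (and other processes) never see your changes.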
Why do we need connector.commit() after execution?
I have some SQLite/Python code that runs the query command as follows. def queryDB(self, command_): self.cursor.execute(command_) self.connector.commit() # <---- ??? ... it works pretty well, but I have some questions. Why is connector.commit() needed? What does it do? What does cursor.execute() do?
[ "Per this website: http://www.amk.ca/python/writing/DB-API.html\n\"For databases that support transactions, the Python interface silently starts a transaction when the cursor is created. The commit() method commits the updates made using that cursor, and the rollback() method discards them. Each method then starts ...
[ 2 ]
[]
[]
[ "python", "sqlite" ]
stackoverflow_0003687370_python_sqlite.txt
Q: IExplorerBrowser control in python I'm try to embed a IExplorerBrowser (Windows Explorer) in a wxpython application but I cannot seem to get the IExplorerBrowser module opened in python I have the CLSID of IExplorerBrowser from the registry but when I try and open it with: from win32com import client client.gencache.GetModuleForCLSID(id) Nothing is returned.. i.e. the module does not exist. Am I going about this the wrong way? I usually use makepy to generate COM wrappers and open them with client.Dispatch, getting the object names from the makepy generated code. However, I can not find the IExplorerBrowser object in the makepy COM browser and am pretty much stuck. Thanks A: Most of the windows shell interfaces can be accessed from win32com.shell. Also take a look at the sample explorer_browser.py, which should be in your site-packages/win32comext/shell/demos directory.
IExplorerBrowser control in python
I'm trying to embed an IExplorerBrowser (Windows Explorer) in a wxPython application, but I cannot seem to get the IExplorerBrowser module opened in Python. I have the CLSID of IExplorerBrowser from the registry, but when I try to open it with: from win32com import client client.gencache.GetModuleForCLSID(id) Nothing is returned, i.e. the module does not exist. Am I going about this the wrong way? I usually use makepy to generate COM wrappers and open them with client.Dispatch, getting the object names from the makepy-generated code. However, I cannot find the IExplorerBrowser object in the makepy COM browser and am pretty much stuck. Thanks
[ "Most of the windows shell interfaces can be accessed from win32com.shell. Also take a look at the sample explorer_browser.py, which should be in your site-packages/win32comext/shell/demos directory.\n" ]
[ 1 ]
[]
[]
[ "com", "python", "win32com" ]
stackoverflow_0003686122_com_python_win32com.txt
Q: Python logging objects I'm trying to reformat the output data sent to the logger based on its class. For example: strings will be printed as they are dictionaries/lists will be automatically indented/beautified into html my custom classes will be handled on an individual basis and converted to html My problem is that the message sent to the formatter is always a string. The documentation specifically says that you can send objects as messages, but it seems to be converting the objects to strings before I can format them. class MyFormatter(logging.Formatter): def format(self, record): #The problem is that record.message is already a string... ... Where is the appropriate place for me to handle objects sent as messages? A: Ok, I've figured it out. The documentation in the official docs is a little bit unclear, but basically, there are two attributes LogRecord.message -> a string representation of the message and LogRecord.msg -> the message itself. To get the actual object, you must reference the .msg for it to work. I hope this was useful to someone else. A: Maybe in the __str__() method of the objects you are logging?
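A runnable sketch of the accepted answer's point: inside format(), record.msg is still the original object that was passed to the logger, while record.getMessage() gives the stringified form. The class name and the dict formatting below are illustrative only.

```python
import logging

class ObjectFormatter(logging.Formatter):
    def format(self, record):
        # record.msg is the object passed to the logger;
        # record.getMessage() would stringify it (and apply % args).
        if isinstance(record.msg, dict):
            return ' | '.join('%s=%s' % kv for kv in sorted(record.msg.items()))
        return record.getMessage()

logger = logging.getLogger('demo')
handler = logging.StreamHandler()
handler.setFormatter(ObjectFormatter())
logger.addHandler(handler)
logger.warning({'user': 'bob', 'action': 'login'})  # formatted as a dict
logger.warning('plain %s message', 'string')        # formatted normally
```

The same dispatch-on-type idea extends to lists or custom classes by adding more isinstance branches.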
Python logging objects
I'm trying to reformat the output data sent to the logger based on its class. For example: strings will be printed as they are dictionaries/lists will be automatically indented/beautified into html my custom classes will be handled on an individual basis and converted to html My problem is that the message sent to the formatter is always a string. The documentation specifically says that you can send objects as messages, but it seems to be converting the objects to strings before I can format them. class MyFormatter(logging.Formatter): def format(self, record): #The problem is that record.message is already a string... ... Where is the appropriate place for me to handle objects sent as messages?
[ "Ok, I've figured it out.\nThe documentation in the official docs is a little bit unclear, but basically, there are two attributes\nLogRecord.message -> a string representation of the message\nand \nLogRecord.msg -> the message itself.\nTo get the actual object, you must reference the .msg for it to work.\nI hope t...
[ 4, 0 ]
[]
[]
[ "logging", "python" ]
stackoverflow_0003687864_logging_python.txt
Q: Profiling Python generators I'm adapting an application that makes heavy use of generators to produce its results to provide a web.py web interface. So far, I could wrap the call to the for-loop and the output-producing statements in a function and call that using cProfile.run() or runctx(). Conceptually: def output(): for value in generator(): print(value) cProfile.run('output()') In web.py, I have to wrap it the following way, since I want to immediately produce output from the potentially long-running computation in each iteration step using yield: class index: def GET(self): for value in generator(): yield make_pretty_html(value) Is there a way to profile all calls to the generator like in the first example when it's used like in the second one? A: I finally found a solution. Return value of profiling via here. import cProfile import pstats import glob import math def gen(): for i in range(1, 10): yield math.factorial(i) class index(object): def GET(self): p = cProfile.Profile() it = gen() while True: try: nxt = p.runcall(next, it) except StopIteration: break print nxt p.print_stats() index().GET() I also could merge multiple such profiling results (once I start giving unique file names) via documentation and store/analyze them combined. A: It sounds like you're trying to profile each call to 'next' on the generator? If so, you could wrap your generator in a profiling generator. Something like this, where the commented off part will be sending the results to a log or database. def iter_profiler(itr): itr = iter(itr) while True: try: start = time.time() value = itr.next() end = time.time() except StopIteration: break # do something with (end - stop) times here yield value Then instead of instantiating your generator as generator() you would use iter_profiler(generator()) A: Can you just use time.time() to profile the parts you are interested in? Just get the current time and subtract from the last time you made a measurement.
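For Python 3 readers: itr.next() in the second answer is now next(itr), and the timing wrapper can be written to collect per-item durations into a list instead of logging them, e.g.:

```python
import time

def iter_profiler(itr, timings):
    """Wrap an iterable, recording how long each next() call takes."""
    itr = iter(itr)
    while True:
        start = time.perf_counter()
        try:
            value = next(itr)
        except StopIteration:
            return  # ends this wrapper generator too
        timings.append(time.perf_counter() - start)
        yield value

timings = []
squares = list(iter_profiler((i * i for i in range(5)), timings))
# squares == [0, 1, 4, 9, 16]; timings holds one duration per item
```

Because each item is timed as it is produced, this works unchanged inside the streaming GET() method: yield from iter_profiler(generator(), timings).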
Profiling Python generators
I'm adapting an application that makes heavy use of generators to produce its results to provide a web.py web interface. So far, I could wrap the call to the for-loop and the output-producing statements in a function and call that using cProfile.run() or runctx(). Conceptually: def output(): for value in generator(): print(value) cProfile.run('output()') In web.py, I have to wrap it the following way, since I want to immediately produce output from the potentially long-running computation in each iteration step using yield: class index: def GET(self): for value in generator(): yield make_pretty_html(value) Is there a way to profile all calls to the generator like in the first example when it's used like in the second one?
[ "I finally found a solution. Return value of profiling via here.\nimport cProfile\nimport pstats\nimport glob\nimport math\n\ndef gen():\n for i in range(1, 10):\n yield math.factorial(i)\n\nclass index(object):\n def GET(self):\n p = cProfile.Profile()\n\n it = gen()\n while True:...
[ 7, 2, 0 ]
[]
[]
[ "profiler", "profiling", "python", "web.py" ]
stackoverflow_0003570335_profiler_profiling_python_web.py.txt
Q: How would I go about downloading a file from a submitted link then reuploading to my server for streaming? I'm working on a project where a user can submit a link to a sound file hosted on another site through a form. I'd like to download that file to my server and make it available for streaming. I might have to upload it to Amazon S3. I'm doing this in Django but I'm new to Python. Can anyone point me in the right direction for how to do this? A: Here's how I would do it: Create a model like SoundUpload like: class SoundUpload(models.Model): STATUS_CHOICES = ( (0, 'Unprocessed'), (1, 'Ready'), (2, 'Bad File'), ) uploaded_by = models.ForeignKey(User) original_url = models.URLField(verify_true=False) download_url = models.URLField(null=True, blank=True) status = models.IntegerField(choices=STATUS_CHOICES, default=0) Next create the view w/a ModelForm and save the info to the database. Hook up a post-save signal on the SoundUpload model that kicks of a django-celery Task. This will ensure that the UI responds while you're processing all the data. def process_new_sound_upload(sender, **kwargs): # Bury to prevent circular dependency issues. from your_project.tasks import ProcessSoundUploadTask if kwargs.get('created', False): instance = kwargs.get('instance') ProcessSoundUploadTask.delay(instance.id) post_save.connect(process_new_sound_upload, sender=SoundUpload) In the ProcessSoundUploadTask task you'll want to: Lookup the model object based on the passed in id. Using pycurl download the file to a temporary folder (w/very limitied permissions). Use ffmpeg (or similar) to ensure it's a real sound file. Do any other virus style checks here (depends on how much you trust your users). If it turn out to be a bad file set the SoundUpload.status field to 2 (Bad File), save it, and return to stop processing the task. Perhaps send out an email here. Use boto to upload the file to s3. See this example. 
Update the SoundUpload.download_url to be the S3 URL, the status to be "processed" and save the object. Do any other post-processing (sending notification emails, etc.) The key to this approach is using django-celery. Once the task is kicked off through the post_save signal the UI can return, thus creating a very "snappy" experience. This task gets put onto an AMQP message queue that can be processed by multiple workers (dedicated EC2 instances, etc.), so you'll be able to scale without too much trouble. This may seem like a bit of overkill, but it's really not as much work as it seems.
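The download step in the task list above — fetching the submitted URL into a locked-down temporary file — can be sketched with the standard library instead of pycurl. This is an illustrative sketch, not part of the original answer: it is written for modern Python 3 (`urllib.request` rather than the `urllib2`/pycurl of the era), and the size cap and chunk size are my own assumptions:

```python
import os
import tempfile
import urllib.request  # the answer suggests pycurl; stdlib works for a sketch

MAX_BYTES = 50 * 1024 * 1024  # refuse anything over ~50 MB (arbitrary cap)

def download_to_temp(url):
    """Stream a remote file into a private temp file; return its path."""
    fd, path = tempfile.mkstemp(suffix=".download")
    os.fchmod(fd, 0o600)  # very limited permissions, as the answer advises
    total = 0
    with os.fdopen(fd, "wb") as out, urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(64 * 1024)
            if not chunk:
                break
            total += len(chunk)
            if total > MAX_BYTES:
                os.unlink(path)
                raise ValueError("remote file too large")
            out.write(chunk)
    return path
```

The returned path would then be handed to the ffmpeg validation step and, if that passes, to the boto upload.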
How would I go about downloading a file from a submitted link then reuploading to my server for streaming?
I'm working on a project where a user can submit a link to a sound file hosted on another site through a form. I'd like to download that file to my server and make it available for streaming. I might have to upload it to Amazon S3. I'm doing this in Django but I'm new to Python. Can anyone point me in the right direction for how to do this?
[ "Here's how I would do it:\n\nCreate a model like SoundUpload like:\nclass SoundUpload(models.Model):\n STATUS_CHOICES = (\n (0, 'Unprocessed'),\n (1, 'Ready'),\n (2, 'Bad File'),\n )\n uploaded_by = models.ForeignKey(User)\n original_url = models.URLField(verify_true=False)\n do...
[ 0 ]
[]
[]
[ "amazon_s3", "django", "download", "file_upload", "python" ]
stackoverflow_0003688160_amazon_s3_django_download_file_upload_python.txt
Q: GUIs vs TUIs in Python I'm interested in doing rapid app development in Python. Since this is mainly for prototyping purposes, I'm looking for a way of creating "rough" user interfaces. By this, I mean that they don't have to look professional, they just have to be flexible enough to make it look the way I want. Originally I was going to do this by creating a GUI (using something like GTK), but now I'm starting to think about TUIs (using ncurses). What are the differences between creating a GUI versus a TUI? Would I be able to create the interface faster in pyGTK or Python's curses module? A: pyGTK is a lot more than curses. It includes an event loop, for one. If you're going to create TUIs, at least use something comparable, like urwid. A: If you are looking for a simple way to mockup a simple GUI, you might consider using a lightweight web framework like flask. You'll have access to a range of standard widgets (buttons, picklists, textboxes, etc.). Just plain HTML is perfectly usable while you're focusing on the functionality of whatever you're building and you can add some CSS later to make things pretty. Consider how a "Hello world" app in flask (below, taken from the project home page) compares to this 80 line pyGTK example. from flask import Flask app = Flask(__name__) @app.route("/") def hello(): return "Hello World!" if __name__ == "__main__": app.run() The web development route spares you much of the boilerplate work involved with desktop GUI development.
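For comparison with the pyGTK and flask examples mentioned in the answers, a bare-bones curses "form" using only the standard library is genuinely short — though, as the first answer notes, plain curses gives you no event loop or widget set for free (urwid adds those). This sketch is my own illustration, not from the answers:

```python
import curses

def main(stdscr):
    """A tiny interactive prompt: one label, one text field, one keypress."""
    curses.echo()                        # show what the user types
    stdscr.addstr(0, 0, "Rapid prototype TUI")
    stdscr.addstr(2, 0, "Enter a value: ")
    stdscr.refresh()
    value = stdscr.getstr(2, 15, 40)     # blocking text input, returns bytes
    stdscr.addstr(4, 0, "You typed: %s" % value.decode())
    stdscr.addstr(6, 0, "Press any key to quit.")
    stdscr.getch()

# Launch with: curses.wrapper(main) -- wrapper initializes the terminal
# and restores it even if main() raises.
```

Anything beyond this (multiple fields, focus handling, redraw on resize) you would be writing yourself, which is where a GUI toolkit or a web framework starts paying off.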
GUIs vs TUIs in Python
I'm interested in doing rapid app development in Python. Since this is mainly for prototyping purposes, I'm looking for a way of creating "rough" user interfaces. By this, I mean that they don't have to look professional, they just have to be flexible enough to make it look the way I want. Originally I was going to do this by creating a GUI (using something like GTK), but now I'm starting to think about TUIs (using ncurses). What are the differences between creating a GUI versus a TUI? Would I be able to create the interface faster in pyGTK or Python's curses module?
[ "pyGTK is a lot more than curses. It includes an event loop, for one. If you're going to create TUIs, at least use something comparable, like urwid.\n", "If you are looking for a simple way to mockup a simple GUI, you might consider using a lightweight web framework like flask. You'll have access to a range of st...
[ 0, 0 ]
[]
[]
[ "gtk", "ncurses", "python", "tui", "user_interface" ]
stackoverflow_0003687922_gtk_ncurses_python_tui_user_interface.txt
Q: CherryPy - saving checkboxes selection to variables I'm trying to build a simple webpage with multiple checkboxes, a Textbox and a submit button. I've just bumped into web programming in Python and am trying to figure out how to do it with CherryPy. I need to associate each checkbox to a variable so my .py file knows which ones were selected when clicking the 'Start button'. Can someone please give some code example? Do I have any advantage including some Python Javascript Compiler like Pyjamas? <form action="../remote_targets/ssh_grab.py"> <label for="goal"><strong>Host Availability:</strong></label> <input style="margin-left: 30px;" type="checkbox" name="goal[]" value="cpu" /> CPU idle<br> <input style="margin-left: 30px;" type="checkbox" name="goal[]" value="lighttpd" /> Lighttpd Service<br> <input style="margin-left: 30px;" type="checkbox" name="goal[]" value="mysql" /> Mysql Service<br> </form> Thanks! A: Here's a minimal example: import cherrypy class Root(object): @cherrypy.expose def default(self, **kwargs): print kwargs return '''<form action="" method="POST"> Host Availability: <input type="checkbox" name="goal" value="cpu" /> CPU idle <input type="checkbox" name="goal" value="lighttpd" /> Lighttpd Service <input type="checkbox" name="goal" value="mysql" /> Mysql Service <input type="submit"> </form>''' cherrypy.quickstart(Root()) And here is the terminal output: $ python stacktest.py [10/Sep/2010:14:25:55] HTTP Serving HTTP on http://0.0.0.0:8080/ CherryPy Checker: The Application mounted at '' has an empty config. Submitted goal argument: None 127.0.0.1 - - [10/Sep/2010:14:26:09] "GET / HTTP/1.1" 200 276 "" "Mozilla..." Submitted goal argument: ['cpu', 'mysql'] 127.0.0.1 - - [10/Sep/2010:14:26:15] "POST / HTTP/1.1" 200 276 "http://localhost:8003/" "Mozilla..." 
[10/Sep/2010:14:26:26] ENGINE <Ctrl-C> hit: shutting down app engine [10/Sep/2010:14:26:26] HTTP HTTP Server shut down [10/Sep/2010:14:26:26] ENGINE CherryPy shut down $ As you can see, CherryPy will collect multiple controls with the same name into a list. You don't need the [] suffix to tell it to do that. Then, iterate over the list to see which values were submitted. (Keep in mind that, if only one item is selected, then the goal argument will be a single string instead of a list!)
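The caveat at the end — `goal` arriving as a single string when only one box is checked — is easy to guard against with a small normalizer. This helper is my own sketch (not part of the answer) and needs nothing from CherryPy itself:

```python
def as_list(value):
    """Normalize a submitted form value to a list.

    CherryPy hands the handler a plain string when exactly one control
    with a given name was submitted, a list when several were, and
    nothing at all (None here) when none were checked.
    """
    if value is None:
        return []
    if isinstance(value, list):
        return value
    return [value]

print(as_list("cpu"))             # one box checked  -> ['cpu']
print(as_list(["cpu", "mysql"]))  # several checked  -> ['cpu', 'mysql']
print(as_list(None))              # none checked     -> []
```

Inside the handler you would write something like `goals = as_list(kwargs.get('goal'))` and then iterate over `goals` uniformly.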
CherryPy - saving checkboxes selection to variables
I'm trying to build a simple webpage with multiple checkboxes, a Textbox and a submit button. I've just bumped into web programming in Python and am trying to figure out how to do it with CherryPy. I need to associate each checkbox to a variable so my .py file knows which ones were selected when clicking the 'Start button'. Can someone please give some code example? Do I have any advantage including some Python Javascript Compiler like Pyjamas? <form action="../remote_targets/ssh_grab.py"> <label for="goal"><strong>Host Availability:</strong></label> <input style="margin-left: 30px;" type="checkbox" name="goal[]" value="cpu" /> CPU idle<br> <input style="margin-left: 30px;" type="checkbox" name="goal[]" value="lighttpd" /> Lighttpd Service<br> <input style="margin-left: 30px;" type="checkbox" name="goal[]" value="mysql" /> Mysql Service<br> </form> Thanks!
[ "Here's a minimal example:\nimport cherrypy\n\nclass Root(object):\n @cherrypy.expose\n def default(self, **kwargs):\n print kwargs\n return '''<form action=\"\" method=\"POST\">\nHost Availability:\n<input type=\"checkbox\" name=\"goal\" value=\"cpu\" /> CPU idle\n<input type=\"checkbox\" name=...
[ 10 ]
[]
[]
[ "checkbox", "cherrypy", "python" ]
stackoverflow_0003686773_checkbox_cherrypy_python.txt
Q: web.py on Google App Engine I'm trying to get a web.py application running on GAE. I hoped that sth like the following might work import web from google.appengine.ext.webapp.util import run_wsgi_app [...] def main(): app = web.application(urls, globals()) run_wsgi_app(app) But obviously the app object doesn't conform with the run_wsgi_app function's expectations. The error msg says sth like app has no __call__ function, so I tried passing app.run instead, but that didn't work either. How can I make the call to run_wsgi_app work? A: Here is a snippet of StackPrinter, a webpy application that runs on top of Google App Engine. from google.appengine.ext.webapp.util import run_wsgi_app import web ... app = web.application(urls, globals()) def main(): application = app.wsgifunc() run_wsgi_app(application) if __name__ == '__main__': main() A: You don't need to import or use run_wsgi_app, web.py has a runcgi method that works perfectly! if __name__ == '__main__': app.cgirun()
web.py on Google App Engine
I'm trying to get a web.py application running on GAE. I hoped that something like the following might work import web from google.appengine.ext.webapp.util import run_wsgi_app [...] def main(): app = web.application(urls, globals()) run_wsgi_app(app) But obviously the app object doesn't conform with the run_wsgi_app function's expectations. The error message says something like app has no __call__ function, so I tried passing app.run instead, but that didn't work either. How can I make the call to run_wsgi_app work?
[ "Here is a snippet of StackPrinter, a webpy application that runs on top of Google App Engine.\nfrom google.appengine.ext.webapp.util import run_wsgi_app\nimport web\n...\napp = web.application(urls, globals())\n\ndef main():\n\n application = app.wsgifunc()\n run_wsgi_app(application)\n\nif __name__ == '__ma...
[ 11, 0 ]
[]
[]
[ "google_app_engine", "python", "web.py" ]
stackoverflow_0003665292_google_app_engine_python_web.py.txt
Q: Convert function to single line list comprehension Is it possible to convert this function, list comprehension combination into a single list comprehension (so that keep is not needed)? def keep(list, i, big): for small in list[i+1:]: if 0 == big % small: return False return True multiples[:] = [n for i,n in enumerate(multiples) if keep(multiples, i, n)] A: I think this is it: multiples[:] = [n for i,n in enumerate(multiples) if all(n % small for small in multiples[i+1:])] A: multiples[:] = [n for i, n in enumerate(multiples) if 0 not in [n % other for other in multiples[i+1:]] Advisible? Probably not. A: First thing is to learn to not use names like list in your code. Remember also the "first make it work, then optimize". If you continue to learn things, it is likely that in any case after one month you are not any more happy with your code. Try to make readable code. For that it helps if you can (heaven forbid!) read your own code after putting it aside for few weeks. That said, it is actually more readable sometimes to make list comprehension, but often you can do it only after writing more stupid version of code.
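The accepted all()-based comprehension really is equivalent to the original keep() helper: all() over an empty sequence is True, just as the helper's loop body never runs for the last element. A small self-contained comparison (the sample multiples list is made up for illustration, with larger multiples first as the slicing in the question implies):

```python
def keep(seq, i, big):
    """The original helper: keep big unless a later, smaller value divides it."""
    for small in seq[i+1:]:
        if 0 == big % small:
            return False
    return True

def filter_with_helper(multiples):
    return [n for i, n in enumerate(multiples) if keep(multiples, i, n)]

def filter_with_all(multiples):
    # The single-comprehension version from the first answer.
    return [n for i, n in enumerate(multiples)
            if all(n % small for small in multiples[i+1:])]

sample = [12, 8, 6, 4, 3, 2]
print(filter_with_helper(sample))  # [3, 2]
print(filter_with_all(sample))     # [3, 2]
```

12, 8, 6, and 4 are each divisible by some later element, while 3 and 2 survive — and both formulations agree.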
Convert function to single line list comprehension
Is it possible to convert this function, list comprehension combination into a single list comprehension (so that keep is not needed)? def keep(list, i, big): for small in list[i+1:]: if 0 == big % small: return False return True multiples[:] = [n for i,n in enumerate(multiples) if keep(multiples, i, n)]
[ "I think this is it:\nmultiples[:] = [n for i,n in enumerate(multiples) \n if all(n % small for small in multiples[i+1:])] \n\n", "multiples[:] = [n for i, n in enumerate(multiples) if 0 not in [n % other for other in multiples[i+1:]]\nAdvisible? Probably not.\n", "First thing is to learn ...
[ 6, 2, 1 ]
[]
[]
[ "list_comprehension", "python" ]
stackoverflow_0003688468_list_comprehension_python.txt
Q: Django - Multiple columns primary key I would like to implement multi-column primary keys in django. I've tried to implement an AutoSlugField() which concatenates my columns' values (foreign key/dates) ... models.py : class ProductProduction(models.Model): enterprise = models.ForeignKey('Enterprise') product = models.ForeignKey('Product') date = models.DateTimeField() count = models.IntegerField() slug = AutoSlugField(populate_from= lambda instance: instance.enterprise.username + '-' + instance.product.name + '-' + str(date)) When I pass the following parameters : - 'Megacorp','robot','09/10/2010',5 => slug = 'Megacorp-robot-09/10/2010' ... the next time I pass the triplet, a new value has been inserted : - 'Megacorp','robot','09/10/2010',10 => slug = 'Megacorp-robot-09/10/2010' => same slug value => insert ???? I tried to add primary_key=True parameter to the slug... but it creates a new instance with a "-1" "-2" ... and NO update is made at all... Did I miss something? Thanks, Yoan A: Here is the explanation of the autoslugfield I used. http://packages.python.org/django-autoslug/fields.html Regards, Yoan
Django - Multiple columns primary key
I would like to implement multi-column primary keys in django. I've tried to implement an AutoSlugField() which concatenates my columns' values (foreign key/dates) ... models.py : class ProductProduction(models.Model): enterprise = models.ForeignKey('Enterprise') product = models.ForeignKey('Product') date = models.DateTimeField() count = models.IntegerField() slug = AutoSlugField(populate_from= lambda instance: instance.enterprise.username + '-' + instance.product.name + '-' + str(date)) When I pass the following parameters : - 'Megacorp','robot','09/10/2010',5 => slug = 'Megacorp-robot-09/10/2010' ... the next time I pass the triplet, a new value has been inserted : - 'Megacorp','robot','09/10/2010',10 => slug = 'Megacorp-robot-09/10/2010' => same slug value => insert ???? I tried to add primary_key=True parameter to the slug... but it creates a new instance with a "-1" "-2" ... and NO update is made at all... Did I miss something? Thanks, Yoan
[ "Here is the explanation of the autoslugfield I used.\nhttp://packages.python.org/django-autoslug/fields.html\nRegards,\nYoan\n" ]
[ 0 ]
[]
[]
[ "composite_primary_key", "django", "django_models", "foreign_keys", "python" ]
stackoverflow_0003684011_composite_primary_key_django_django_models_foreign_keys_python.txt
Q: Method vs. function in case of simple function I have an original class (DownloadPage) and I need to add just one simple functionality (get_info). What is the better approach in OOP? def get_info(page): # make simple function ... result = get_info(DownloadPage()) or class MyDownloadPage(DownloadPage): # make new class with inheritance def get_info(self): # with one method ... result = MyDownloadPage().get_info() Thanks A: The answer truly depends on whether or not you need the get_info function/method to work on things other than a MyDownloadPage. At present, I'd go with the free function but when requirements solidify one way or the other it should be easy enough to transform your solution either way. (I prefer the free function because it doesn't restrict what can be passed to it, and any functionality that relies only on another object's public interface should be a function to reduce coupling.) A: Since the distance from one solution to the other is small, it hardly makes a difference with one function. As soon as you have two functions, I'd say the derived class is more maintainable. Once the two functions want to share a common value, like a logger or configuration variable, then you'll see more benefit. A: This may be heretical, but why not just add it as a method to the existing class? def get_info(self): ... DownloadPage.get_info = get_info I realize this isn't exactly standard practice, but I'd be curious to hear Python experts explain why not.
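The third suggestion — attaching the function to the existing class after the fact — does work, because a plain function assigned as a class attribute becomes a bound method on instances. A minimal demonstration with a stand-in DownloadPage (the class body here is invented for illustration):

```python
class DownloadPage(object):
    """Stand-in for the original class from the question."""
    def __init__(self, content="example"):
        self.content = content

def get_info(self):
    """Free function written with a 'self' first argument."""
    return "info about %s" % self.content

# Attach after the fact: functions assigned on a class become
# bound methods on its instances ("monkey-patching").
DownloadPage.get_info = get_info

page = DownloadPage("page-1")
print(page.get_info())   # info about page-1
```

The usual objection is discoverability: someone reading the DownloadPage source won't see get_info defined there, which is one reason subclassing or a free function is more common.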
Method vs. function in case of simple function
I have original class (DownloadPage) and I need to add just one simple functionality (get_info). What is better approach in OOP? def get_info(page): # make simple function ... result = get_info(DownloadPage()) or class MyDownloadPage(DownloadPage): # make new class with inheritance def get_info(self): # with one method ... result = MyDownloadPage().get_info() Thanks
[ "The answer truly depends on whether or not you want that get_info function/method can function on things other than a MyDownloadPage. At present, I'd go with the free function but when requirements solidify one way or the other it should be easy enough to transform your solution either way.\n(I prefer the free fu...
[ 3, 2, 1 ]
[]
[]
[ "oop", "python" ]
stackoverflow_0003686535_oop_python.txt
Q: Change enclosing quotes in Vim In Vim, it's a quick 3-character command to change what's inside the current quoted string (e.g., ci"), but is there a simple way to change what type of quotes are currently surrounding the cursor? Sometimes I need to go from "blah" to """blah""" or "blah" to 'blah' (in Python source code) and I'd ideally like to do it quickly using default key bindings. A: Try the surround.vim plugin. I find it an essential addition to any vim installation. A: Surround.vim is great, but I don't think it'll handle your triple-quoted needs directly. The way I've done stuff along these lines (when surround wasn't appropriate) was to use %, make the change, then double-backtick to go back to the starting point. E.g. if the cursor is somewhere in a single-quoted string, do f'%, make the change, then double-backtick and ..
Change enclosing quotes in Vim
In Vim, it's a quick 3-character command to change what's inside the current quoted string (e.g., ci"), but is there a simple way to change what type of quotes are currently surrounding the cursor? Sometimes I need to go from "blah" to """blah""" or "blah" to 'blah' (in Python source code) and I'd ideally like to do it quickly using default key bindings.
[ "Try the surround.vim plugin. I find it an essential addition to any vim installation.\n", "Surround.vim is great, but I don't think it'll handle your triple-quoted needs directly.\nThe way I've done stuff along these lines (when surround wasn't appropriate) was to use %, make the change, then double-backtick to ...
[ 18, 2 ]
[]
[]
[ "python", "surround", "vim" ]
stackoverflow_0003687260_python_surround_vim.txt
Q: QGraphicsView not displaying QGraphicsItems Using PyQt4. My goal is to load in "parts" of a .png, assign them to QGraphicsItems, add them to the scene, and have the QGraphicsView display them. (Right now I don't care about their coordinates, all I care about is getting the darn thing to work). Currently nothing is displayed. At first I thought it was a problem with items being added and QGraphicsView not updating, but after reading up a bit more on viewports, that didn't really make sense. So I tested adding the QGraphicsView items before even setting the view (so I know it wouldn't be an update problem) and it still displayed nothing. The path is definitely correct. Here is some code that shows what is going on... Ignore spacing issues, layout got messed up when pasting class MainWindow(QtGui.QMainWindow): def __init__(self, parent = None): QtGui.QMainWindow.__init__(self, parent) self.setWindowTitle('NT State Editor') winWidth = 1024 winHeight = 768 screen = QtGui.QDesktopWidget().availableGeometry() screenCenterX = (screen.width() - winWidth) / 2 screenCenterY = (screen.height() - winHeight) / 2 self.setGeometry(screenCenterX, screenCenterY, winWidth, winHeight) self.tileMap = tilemap.TileMap() self.tileBar = tilebar.TileBar() mapView = QtGui.QGraphicsView(self.tileMap) tileBarView = QtGui.QGraphicsView(self.tileBar) button = tilebar.LoadTilesButton() QtCore.QObject.connect(button, QtCore.SIGNAL('selectedFile'), self.tileBar.loadTiles) hbox = QtGui.QHBoxLayout() hbox.addWidget(mapView) hbox.addWidget(self.tileBarView) hbox.addWidget(button) mainWidget = QtGui.QWidget() mainWidget.setLayout(hbox) self.setCentralWidget(mainWidget) app = QtGui.QApplication(sys.argv) mainWindow = MainWindow() mainWindow.show() sys.exit(app.exec_()) -- class Tile(QtGui.QGraphicsPixmapItem): def __init__(self, parent = None): QtGui.QGraphicsPixmapItem(self, parent) self.idAttr = -1 class TileBar(QtGui.QGraphicsScene): def __init__(self, parent = None): 
QtGui.QGraphicsScene.__init__(self, parent) def loadTiles(self, filename): tree = ElementTree() tree.parse(filename) root = tree.getroot() sheets = root.findall('sheet') for sheet in sheets: sheetPath = sheet.get('path') sheetImg = QtGui.QImage(sheetPath) strips = sheet.findall('strip') for strip in strips: tile = Tile() tile.idAttr = strip.get('id') clip = strip.find('clip') x = clip.get('x') y = clip.get('y') width = clip.get('width') height = clip.get('height') subImg = sheetImg.copy(int(x), int(y), int(width), int(height)) pixmap = QtGui.QPixmap.fromImage(subImg) tile.setPixmap(pixmap) self.addItem(tile) I tried some stuff with connecting the TileBar's 'changed()' signal with various 'view' functions, but none of them worked. I've had a bit of trouble finding good examples of ways to use the Graphics View Framework, (most are very very small scale) so let me know if I'm doing it completely wrong. Any help is appreciated. Thanks. A: It's quite hard to tell what's wrong with your code as it's not complete and missing some parts to get it compiled. Though there are a couple of places which could potentially cause the problem: Your Tile class constructor; I believe you should be calling the base class constructor there by executing: QtGui.QGraphicsPixmapItem.__init__(self, parent). Looks like your graphic scene objects are getting constructed in the button's onclick signal. There could be problems with your signal connecting to the proper slot, you should see warnings in the output if there are such problems in your widget. It looks like you're loading image file names from the xml file, it's quite hard to check if the logic over there is straight but potentially you could have a problem over there too. 
Below is a simplified version of your code which loads an image into the Tile and adds it to the graphic scene: import sys from PyQt4 import QtGui, QtCore class Tile(QtGui.QGraphicsPixmapItem): def __init__(self, parent=None): QtGui.QGraphicsPixmapItem.__init__(self, parent) self.idAttr = -1 class TileBar(QtGui.QGraphicsScene): def __init__(self, parent=None): QtGui.QGraphicsScene.__init__(self, parent) #def loadTiles(self, filename): def loadTiles(self): sheetImg = QtGui.QImage("put_your_file_name_here.png") pixmap = QtGui.QPixmap.fromImage(sheetImg) tile = Tile() tile.setPixmap(pixmap) self.addItem(tile) # skipping your ElementTree parsing logic here class MainWindow(QtGui.QMainWindow): def __init__(self, parent=None): QtGui.QMainWindow.__init__(self, parent) self.setWindowTitle('NT State Editor') winWidth = 1024 winHeight = 768 screen = QtGui.QDesktopWidget().availableGeometry() screenCenterX = (screen.width() - winWidth) / 2 screenCenterY = (screen.height() - winHeight) / 2 self.setGeometry(screenCenterX, screenCenterY, winWidth, winHeight) #self.tileMap = Tiletilebar.Map() self.tileBar = TileBar() #mapView = QtGui.QGraphicsView(self.tileMap) self.tileBarView = QtGui.QGraphicsView(self.tileBar) #button = self.tilebar.LoadTilesButton() button = QtGui.QPushButton() QtCore.QObject.connect(button, QtCore.SIGNAL("clicked()"), self.tileBar.loadTiles) #self.tileBar.loadTiles('some_file_name') hbox = QtGui.QHBoxLayout() #hbox.addWidget(mapView) hbox.addWidget(self.tileBarView) hbox.addWidget(button) mainWidget = QtGui.QWidget() mainWidget.setLayout(hbox) self.setCentralWidget(mainWidget) app = QtGui.QApplication([]) exm = MainWindow() exm.show() app.exec_() hope this helps, regards
QGraphicsView not displaying QGraphicsItems
Using PyQt4. My goal is to load in "parts" of a .png, assign them to QGraphicsItems, add them to the scene, and have the QGraphicsView display them. (Right now I don't care about their coordinates, all I care about is getting the darn thing to work). Currently nothing is displayed. At first I thought it was a problem with items being added and QGraphicsView not updating, but after reading up a bit more on viewports, that didn't really make sense. So I tested adding the QGraphicsView items before even setting the view (so I know it wouldn't be an update problem) and it still displayed nothing. The path is definitely correct. Here is some code that shows what is going on... Ignore spacing issues, layout got messed up when pasting class MainWindow(QtGui.QMainWindow): def __init__(self, parent = None): QtGui.QMainWindow.__init__(self, parent) self.setWindowTitle('NT State Editor') winWidth = 1024 winHeight = 768 screen = QtGui.QDesktopWidget().availableGeometry() screenCenterX = (screen.width() - winWidth) / 2 screenCenterY = (screen.height() - winHeight) / 2 self.setGeometry(screenCenterX, screenCenterY, winWidth, winHeight) self.tileMap = tilemap.TileMap() self.tileBar = tilebar.TileBar() mapView = QtGui.QGraphicsView(self.tileMap) tileBarView = QtGui.QGraphicsView(self.tileBar) button = tilebar.LoadTilesButton() QtCore.QObject.connect(button, QtCore.SIGNAL('selectedFile'), self.tileBar.loadTiles) hbox = QtGui.QHBoxLayout() hbox.addWidget(mapView) hbox.addWidget(self.tileBarView) hbox.addWidget(button) mainWidget = QtGui.QWidget() mainWidget.setLayout(hbox) self.setCentralWidget(mainWidget) app = QtGui.QApplication(sys.argv) mainWindow = MainWindow() mainWindow.show() sys.exit(app.exec_()) -- class Tile(QtGui.QGraphicsPixmapItem): def __init__(self, parent = None): QtGui.QGraphicsPixmapItem(self, parent) self.idAttr = -1 class TileBar(QtGui.QGraphicsScene): def __init__(self, parent = None): QtGui.QGraphicsScene.__init__(self, parent) def loadTiles(self, filename): 
tree = ElementTree() tree.parse(filename) root = tree.getroot() sheets = root.findall('sheet') for sheet in sheets: sheetPath = sheet.get('path') sheetImg = QtGui.QImage(sheetPath) strips = sheet.findall('strip') for strip in strips: tile = Tile() tile.idAttr = strip.get('id') clip = strip.find('clip') x = clip.get('x') y = clip.get('y') width = clip.get('width') height = clip.get('height') subImg = sheetImg.copy(int(x), int(y), int(width), int(height)) pixmap = QtGui.QPixmap.fromImage(subImg) tile.setPixmap(pixmap) self.addItem(tile) I tried some stuff with connecting the TileBar's 'changed()' signal with various 'view' functions, but none of them worked. I've had a bit of trouble finding good examples of ways to use the Graphics View Framework, (most are very very small scale) so let me know if I'm doing it completely wrong. Any help is appreciated. Thanks.
[ "It's quite hard to tell what's wrong with your code as it's not complete and missing some parts to get it compiled. Though there are couple of places which could potentially cause the problem:\n\nYour Title class constructor; I believe you should be calling the base class constructor there by executing: QtGui.QGra...
[ 2 ]
[]
[]
[ "pyqt", "python", "qgraphicsitem", "qgraphicsview" ]
stackoverflow_0003682282_pyqt_python_qgraphicsitem_qgraphicsview.txt
Q: Schedule a cron job for execution every hour on certain days on App Engine I would like to schedule a cron task to run every hour, but only from Thursday through Monday. Is a schedule like this possible? From the documentation, it looks like I can schedule a cron task to run at an hourly interval or on specific days at a single specific time, but I cannot figure out how to schedule a cron task to run at an hourly interval only on specific days. I've tried schedules like the following in cron.yaml: every 1 hours thu,fri,sat,sun,mon every 1 hours of thu,fri,sat,sun,mon The way I read the documentation, I think this may be impossible. I'm hoping that I've either missed something or that there is some undocumented syntax for what I'm trying to accomplish. A: Like you noticed in the documentation for cronjobs, the source also seems to indicate that the interval schedule format doesn't let you restrict the interval to particular day(s) of the week. Though you can't schedule your task with a single cronjob, you could schedule it with multiple cronjobs: every thu,fri,sat,sun,mon 00:00 every thu,fri,sat,sun,mon 01:00 ... every thu,fri,sat,sun,mon 23:00 Alternatively, leoluk's comment makes a good suggestion - just use a single, simple interval schedule which invokes your script every hour, but have your script terminate without doing anything if the day of week is one which you wish to exclude (e.g., Tuesday or Wednesday in your case).
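The alternative suggested last — run the cron entry every hour but have the handler bail out on excluded days — needs nothing App Engine-specific. A sketch of the guard (the Tuesday/Wednesday exclusion matches the Thursday-through-Monday schedule in the question; the function name is my own):

```python
import datetime

# datetime.weekday(): Monday is 0 ... Sunday is 6.
# Thursday-through-Monday means excluding Tuesday (1) and Wednesday (2).
EXCLUDED_WEEKDAYS = {1, 2}

def should_run(now=None):
    """Return True when the hourly job should actually do its work."""
    now = now or datetime.datetime.utcnow()
    return now.weekday() not in EXCLUDED_WEEKDAYS

# The handler invoked by the hourly cron entry would start with:
#     if not should_run():
#         return  # terminate without doing anything

print(should_run(datetime.datetime(2010, 9, 9)))   # a Thursday -> True
print(should_run(datetime.datetime(2010, 9, 7)))   # a Tuesday  -> False
```

This keeps cron.yaml down to a single `every 1 hours` entry at the cost of a few no-op requests on the excluded days.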
Schedule a cron job for execution every hour on certain days on App Engine
I would like to schedule a cron task to run every hour, but only from Thursday through Monday. Is a schedule like this possible? From the documentation, it looks like I can schedule a cron task to run at an hourly interval or on specific days at a single specific time, but I cannot figure out how to schedule a cron task to run at an hourly interval only on specific days. I've tried schedules like the following in cron.yaml: every 1 hours thu,fri,sat,sun,mon every 1 hours of thu,fri,sat,sun,mon The way I read the documentation, I think this may be impossible. I'm hoping that I've either missed something or that there is some undocumented syntax for what I'm trying to accomplish.
[ "Like you noticed in the documentation for cronjobs, the source also seems to indicate that the interval schedule format doesn't let you restrict the interval to particular day(s) of the week.\nThough you can't schedule your task with a single cronjob, you could schedule it with multiple cronjobs:\nevery thu,fri,sa...
[ 2 ]
[]
[]
[ "cron", "google_app_engine", "python" ]
stackoverflow_0003688487_cron_google_app_engine_python.txt
Q: when converting a python list to json and back, do you cast? When you convert a list of user objects into json, and then convert it back to its original state, do you have to cast? Are there any security issues of taking a javascript json object and converting it into a python list object? A: json.dumps(somepython) gives you a valid JSON string representing the Python object somepython (which may perfectly well be a list) and json.loads(ajsonstring) goes the other way 'round -- both without any security issue nor "cast" (?). That's with Python 2.6 or better, using the json module in the standard library. If you're stuck with 2.5 (e.g., for use on Google App Engine), you can use the equivalent third-party module simplejson. A: You will be responsible for writing python to encode and decode your classes. How are you encoding them? That will have a large bearing on how you decode them. Python will not do either for you if you step beyond dicts, lists, unicode, strings, ints, floats, booleans, and None. The canonical way to encode custom classes is to subclass json.JSONEncoder and provide a default method. The default method has signature 'self, obj' and returns obj encoded in json if it knows how to and returns super(clsname, self).default(obj) if does not. If you encode your classes as dicts, then you can write a function that accepts one argument (a decoded dictionary) and returns the decoded object from that. Then pass this function to the constructor for json.JSONDecoder and use the decode method on that instance. All in all, json is not ideally suited for serializing complex classes. If you can capture the entire state of a function in such a way that it can be passed to the init method, then have at it but if not, then you'll just hurt your head trying.
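A quick round-trip shows why no cast is needed for a plain list of dicts — and the one wrinkle worth knowing, that JSON has no tuple type, so tuples come back as lists (the sample data here is invented for illustration):

```python
import json

users = [{"name": "alice", "id": 1}, {"name": "bob", "id": 2}]

encoded = json.dumps(users)    # a plain str of JSON text
decoded = json.loads(encoded)  # back to Python objects, no cast needed

print(type(decoded).__name__)  # list
print(decoded == users)        # True

# The wrinkle: tuples are serialized as JSON arrays, which decode as lists.
print(json.loads(json.dumps((1, 2, 3))))   # [1, 2, 3]
```

For custom classes this stops working, which is where the second answer's JSONEncoder/JSONDecoder subclassing comes in.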
when converting a python list to json and back, do you cast?
When you convert a list of user objects into json, and then convert it back to its original state, do you have to cast? Are there any security issues of taking a javascript json object and converting it into a python list object?
[ "json.dumps(somepython) gives you a valid JSON string representing the Python object somepython (which may perfectly well be a list) and json.loads(ajsonstring) goes the other way 'round -- both without any security issue nor \"cast\" (?). That's with Python 2.6 or better, using the json module in the standard lib...
[ 2, 1 ]
[]
[]
[ "json", "python", "security" ]
stackoverflow_0003689468_json_python_security.txt
Q: Flask/Werkzeug, how to return previous page after login I am using the Flask micro-framework which is based on Werkzeug, which uses Python. Before each restricted page there is a decorator to ensure the user is logged in, currently returning them to the login page if they are not logged in, like so: # Decorator def logged_in(f): @wraps(f) def decorated_function(*args, **kwargs): try: if not session['logged_in']: flash('Please log in first...', 'error') return redirect(url_for('login')) else: return f(*args, **kwargs) except KeyError: flash('Please log in first...', 'error') return redirect(url_for('login')) return decorated_function # Login function @app.route('/', methods=['GET', 'POST']) def login(): """Login page.""" if request.method=='POST': ### Checks database, etc. ### return render_template('login.jinja2') # Example 'restricted' page @app.route('/download_file') @logged_in def download_file(): """Function used to send files for download to user.""" fileid = request.args.get('id', 0) ### ... ### After logging in, it needs to return users to the page that took them to the login page. It also needs to retain things such as the passed variables (i.e. the entire link basically www.example.com/download_file?id=3 ) Does anyone know how to do this? Thank you for your help :-) A: I think standard practice is to append the URL to which the user needs to be redirected after a successful login to the end of the login URL's querystring. 
You'd change your decorator to something like this (with redundancies in your decorator function also removed):

def logged_in(f):
    @wraps(f)
    def decorated_function(*args, **kwargs):
        if session.get('logged_in') is not None:
            return f(*args, **kwargs)
        else:
            flash('Please log in first...', 'error')
            next_url = get_current_url()  # However you do this in Flask
            login_url = '%s?next=%s' % (url_for('login'), next_url)
            return redirect(login_url)
    return decorated_function

You'll have to substitute something for get_current_url(), because I don't know how that's done in Flask. Then, in your login handler, when the user successfully logs in, you check to see if there's a next parameter in the request and, if so, you redirect them to that URL. Otherwise, you redirect them to some default URL (usually /, I guess).

A: You could use a query string to keep the file info intact over a click or two. One of the nice things about url_for is how it passes unknown parameters as query strings. So without changing your registration page too much you could do something like this:

def login_required(f):
    @wraps(f)
    def decorated_function(*args, **kwargs):
        if g.user is None:
            return redirect(url_for('register', wantsurl=request.path))
        return f(*args, **kwargs)
    return decorated_function

Here wantsurl will keep track of the URL the user landed on.
If an unregistered user goes to /download/some/file.txt, login_required will send you to /register?wantsurl=%2Fdownload%2Fsome%2Ffile.txt. Then you add a couple of lines to your registration function:

@app.route('/register', methods=['GET', 'POST'])
def register():
    if request.method == 'GET':
        if 'wantsurl' in request.args:
            qs = request.args['wantsurl']
            return render_template('register.html', wantsurl=qs)
    if request.method == 'POST':
        if 'wantsurl' in request.form and everything_else_ok:
            return redirect(request.form['wantsurl'])

That would automatically redirect to the download on successful registration, provided you have something in the form called 'wantsurl' with the value of qs, or you could have your form submit with a query string; that could just be a little if-else in the template.
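One detail the first answer glosses over: building the login URL with plain '%s?next=%s' interpolation breaks if the original URL itself contains '?' or '&' (as /download_file?id=3 does). A small sketch using the standard library's urlencode so the redirect target survives the round trip — build_login_url is an illustrative helper name, not a Flask API:

```python
from urllib.parse import urlencode, parse_qs, urlparse

def build_login_url(login_path, next_url):
    # Percent-encode the original URL so "?" and "&" inside it
    # survive being embedded in the login page's querystring.
    return login_path + '?' + urlencode({'next': next_url})

url = build_login_url('/login', '/download_file?id=3')

# The login handler can then recover the exact original URL.
parsed = parse_qs(urlparse(url).query)
assert parsed['next'] == ['/download_file?id=3']
```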
Flask/Werkzeug, how to return previous page after login
I am using the Flask micro-framework, which is based on Werkzeug, which uses Python. Before each restricted page there is a decorator to ensure the user is logged in, currently returning them to the login page if they are not logged in, like so:

# Decorator
def logged_in(f):
    @wraps(f)
    def decorated_function(*args, **kwargs):
        try:
            if not session['logged_in']:
                flash('Please log in first...', 'error')
                return redirect(url_for('login'))
            else:
                return f(*args, **kwargs)
        except KeyError:
            flash('Please log in first...', 'error')
            return redirect(url_for('login'))
    return decorated_function

# Login function
@app.route('/', methods=['GET', 'POST'])
def login():
    """Login page."""
    if request.method == 'POST':
        ### Checks database, etc. ###
    return render_template('login.jinja2')

# Example 'restricted' page
@app.route('/download_file')
@logged_in
def download_file():
    """Function used to send files for download to user."""
    fileid = request.args.get('id', 0)
    ### ... ###

After logging in, it needs to return users to the page that took them to the login page. It also needs to retain things such as the passed variables (i.e. the entire link, basically www.example.com/download_file?id=3). Does anyone know how to do this? Thank you for your help :-)
[ "I think standard practice is to append the URL to which the user needs to be redirected after a successful login to the end of the login URL's querystring.\nYou'd change your decorator to something like this (with redundancies in your decorator function also removed):\ndef logged_in(f):\n @wraps(f)\n def dec...
[ 25, 12 ]
[]
[]
[ "authentication", "flask", "python", "werkzeug" ]
stackoverflow_0003686465_authentication_flask_python_werkzeug.txt
Q: Merge SQLite files into one db file, and 'begin/commit' question This post refers to this page for merging SQLite databases. The sequence is as follows. Let's say I want to merge a.db and b.db. In the command line I do the following:

sqlite3 a.db
attach 'b.db' as toM;
begin; <--
insert into benchmark select * from toM.benchmark;
commit; <--
detach database toM;

It works well, but on the referred site the questioner asks about speeding things up, and the answer is to use the 'begin' and 'commit' commands. So I came up with the following Python code to do exactly the same thing. I abstract the SQLite function calls with SQLiteDB, and one of its methods is runCommand(). I got the same error even when I deleted the self.connector.commit() line.

# run command
def runCommand(self, command):
    self.cursor.execute(command)
    self.connector.commit()  # same error even though I delete this line

db = SQLiteDB('a.db')
cmd = "attach \"%s\" as toMerge" % "b.db"
print cmd
db.runCommand(cmd)
cmd = "begin"
db.runCommand(cmd)
cmd = "insert into benchmark select * from toMerge.benchmark"
db.runCommand(cmd)
cmd = "commit"
db.runCommand(cmd)
cmd = "detach database toMerge"
db.runCommand(cmd)

But I got the following error:

OperationalError: cannot commit - no transaction is active

Despite the error, the resulting db is merged correctly. And without the begin/commit, there's no error at all. Why can't I run the begin/commit commands? Is it absolutely necessary to run begin/commit to safely merge the db files? The post says that the purpose of begin/commit is speedup. Then, what's the difference between using and not using the begin/commit commands in terms of speedup?

A: Apparently, Cursor.execute doesn't support the 'commit' command.
It does support the 'begin' command, but this is redundant because sqlite3 begins transactions for you anyway:

>>> import sqlite3
>>> conn = sqlite3.connect(':memory:')
>>> cur = conn.cursor()
>>> cur.execute('begin')
<sqlite3.Cursor object at 0x0104B020>
>>> cur.execute('CREATE TABLE test (id INTEGER)')
<sqlite3.Cursor object at 0x0104B020>
>>> cur.execute('INSERT INTO test VALUES (1)')
<sqlite3.Cursor object at 0x0104B020>
>>> cur.execute('commit')
Traceback (most recent call last):
  File "<pyshell#10>", line 1, in <module>
    cur.execute('commit')
OperationalError: cannot commit - no transaction is active
>>>

Just use the commit method on your Connection object. As for your second question, it is not absolutely necessary to call begin/commit when merging the files: just be sure that there is absolutely no disk error, no modification to the db's, and nobody looking at the computer the wrong way while it is happening. So begin/commit is probably a good idea. Of course, if the original db's aren't being modified (I honestly haven't looked) then there is no need for that even. If there is an error, you can just scrap the partial output and start over. It also provides a speedup because every change doesn't have to be written to disk as it occurs. Changes can be stored in memory and written in bulk. But as mentioned, sqlite3 handles this for you. Also, it's worth mentioning that

cmd = "attach \"%s\" as toMerge" % "b.db"

is wrong in the sense that it's deprecated. If you want to do the wrong thing correctly, it's

cmd = 'attach "{0}" as toMerge'.format("b.db")  # why not just one string though?

This is forward compatible with newer versions of Python, which will make porting code easier. If you want to do the right thing, it's

cmd = "attach ? as toMerge"
cursor.execute(cmd, ('b.db',))

This avoids SQL injection and is, apparently, slightly faster, so it's win-win.
You could modify your runCommand method as follows:

def runCommand(self, sql, params=(), commit=True):
    self.cursor.execute(sql, params)
    if commit:
        self.connector.commit()

Now you can skip committing after every single command by passing commit=False when you don't need a commit. This preserves the notion of a transaction.
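Putting the answer's advice together, here is a small self-contained sketch of the merge that uses a parameterized ATTACH and commits through the Connection object rather than via cursor.execute('commit'). File and table names are illustrative stand-ins for a.db/b.db, built in a temp directory so the snippet runs as-is:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
a_path = os.path.join(tmp, 'a.db')
b_path = os.path.join(tmp, 'b.db')

# Set up two small databases with the same schema.
for path, rows in ((a_path, [(1,), (2,)]), (b_path, [(3,), (4,)])):
    with sqlite3.connect(path) as db:   # context manager commits on exit
        db.execute('CREATE TABLE benchmark (id INTEGER)')
        db.executemany('INSERT INTO benchmark VALUES (?)', rows)

conn = sqlite3.connect(a_path)
conn.execute('ATTACH ? AS toMerge', (b_path,))  # parameterized, no string formatting
conn.execute('INSERT INTO benchmark SELECT * FROM toMerge.benchmark')
conn.commit()                # commit on the Connection, not cur.execute('commit')
conn.execute('DETACH DATABASE toMerge')

merged = sorted(r[0] for r in conn.execute('SELECT id FROM benchmark'))
assert merged == [1, 2, 3, 4]
conn.close()
```

No explicit BEGIN is issued: the sqlite3 module opens a transaction implicitly before the INSERT, which is exactly the behavior the answer describes.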
Merge SQLite files into one db file, and 'begin/commit' question
This post refers to this page for merging SQLite databases. The sequence is as follows. Let's say I want to merge a.db and b.db. In the command line I do the following:

sqlite3 a.db
attach 'b.db' as toM;
begin; <--
insert into benchmark select * from toM.benchmark;
commit; <--
detach database toM;

It works well, but on the referred site the questioner asks about speeding things up, and the answer is to use the 'begin' and 'commit' commands. So I came up with the following Python code to do exactly the same thing. I abstract the SQLite function calls with SQLiteDB, and one of its methods is runCommand(). I got the same error even when I deleted the self.connector.commit() line.

# run command
def runCommand(self, command):
    self.cursor.execute(command)
    self.connector.commit()  # same error even though I delete this line

db = SQLiteDB('a.db')
cmd = "attach \"%s\" as toMerge" % "b.db"
print cmd
db.runCommand(cmd)
cmd = "begin"
db.runCommand(cmd)
cmd = "insert into benchmark select * from toMerge.benchmark"
db.runCommand(cmd)
cmd = "commit"
db.runCommand(cmd)
cmd = "detach database toMerge"
db.runCommand(cmd)

But I got the following error:

OperationalError: cannot commit - no transaction is active

Despite the error, the resulting db is merged correctly. And without the begin/commit, there's no error at all. Why can't I run the begin/commit commands? Is it absolutely necessary to run begin/commit to safely merge the db files? The post says that the purpose of begin/commit is speedup. Then, what's the difference between using and not using the begin/commit commands in terms of speedup?
[ "Apparently, Cursor.execute doesn't support the 'commit' command. It does support the 'begin' command but this is redundant because sqlite3 begins them for you anway:\n>>> import sqlite3\n>>> conn = sqlite3.connect(':memory:')\n>>> cur = conn.cursor()\n>>> cur.execute('begin')\n<sqlite3.Cursor object at 0x0104B020>...
[ 13 ]
[]
[]
[ "merge", "python", "sqlite" ]
stackoverflow_0003689694_merge_python_sqlite.txt