title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
How can I enumerate/list all installed applications in Windows XP? | 802,499 | <p>When I say "installed application", I basically mean any application visible in [Control Panel]->[Add/Remove Programs]. </p>
<p>I would prefer to do it in Python, but C or C++ is also fine.</p>
| 6 | 2009-04-29T14:02:12Z | 802,513 | <p>If you mean the list of installed applications that is shown in Add\Remove Programs in the control panel, you can find it in the registry key:</p>
<pre><code>HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall
</code></pre>
<p><a href="http://support.microsoft.com/kb/247501">More info about how the registry tree is structured can be found here</a>.</p>
<p>You need to use the <a href="http://docs.python.org/3.0/library/winreg.html">winreg API</a> in Python to read the values from the registry.</p>
| 10 | 2009-04-29T14:06:25Z | [
"c++",
"python",
"winapi",
"enumeration"
] |
How can I enumerate/list all installed applications in Windows XP? | 802,499 | <p>When I say "installed application", I basically mean any application visible in [Control Panel]->[Add/Remove Programs]. </p>
<p>I would prefer to do it in Python, but C or C++ is also fine.</p>
| 6 | 2009-04-29T14:02:12Z | 803,056 | <p>Control Panel uses the Win32 COM API, which is the official method (see Google Groups, Win32).<br />
Never rely on the registry.</p>
| 7 | 2009-04-29T15:57:31Z | [
"c++",
"python",
"winapi",
"enumeration"
] |
How can I enumerate/list all installed applications in Windows XP? | 802,499 | <p>When I say "installed application", I basically mean any application visible in [Control Panel]->[Add/Remove Programs]. </p>
<p>I would prefer to do it in Python, but C or C++ is also fine.</p>
| 6 | 2009-04-29T14:02:12Z | 804,547 | <p>The <a href="http://www.microsoft.com/technet/scriptcenter/scripts/python/default.mspx?mfr=true">Microsoft Script Repository</a> has a <a href="http://www.microsoft.com/technet/scriptcenter/scripts/python/apps/user/usappy03.mspx?mfr=true">script for listing all installed software</a>.</p>
<pre><code>import win32com.client
strComputer = "."
objWMIService = win32com.client.Dispatch("WbemScripting.SWbemLocator")
objSWbemServices = objWMIService.ConnectServer(strComputer, r"root\cimv2")
colItems = objSWbemServices.ExecQuery("Select * from Win32_Product")
for objItem in colItems:
print "Caption: ", objItem.Caption
print "Description: ", objItem.Description
print "Identifying Number: ", objItem.IdentifyingNumber
print "Install Date: ", objItem.InstallDate
print "Install Date 2: ", objItem.InstallDate2
print "Install Location: ", objItem.InstallLocation
print "Install State: ", objItem.InstallState
print "Name: ", objItem.Name
print "Package Cache: ", objItem.PackageCache
print "SKU Number: ", objItem.SKUNumber
print "Vendor: ", objItem.Vendor
print "Version: ", objItem.Version
</code></pre>
| 6 | 2009-04-29T22:23:37Z | [
"c++",
"python",
"winapi",
"enumeration"
] |
How can I enumerate/list all installed applications in Windows XP? | 802,499 | <p>When I say "installed application", I basically mean any application visible in [Control Panel]->[Add/Remove Programs]. </p>
<p>I would prefer to do it in Python, but C or C++ is also fine.</p>
| 6 | 2009-04-29T14:02:12Z | 3,511,659 | <p>C#/.NET code for getting the list of installed software using WMI, which works on both XP and Windows 7 (WMI is the only way on Windows 7):</p>
<pre><code> WqlObjectQuery wqlQuery =
new WqlObjectQuery("SELECT * FROM Win32_Product");
ManagementObjectSearcher searcher =
new ManagementObjectSearcher(wqlQuery);
foreach (ManagementObject software in searcher.Get()) {
Console.WriteLine(software["Caption"]);
}
</code></pre>
| 3 | 2010-08-18T11:43:00Z | [
"c++",
"python",
"winapi",
"enumeration"
] |
How can I enumerate/list all installed applications in Windows XP? | 802,499 | <p>When I say "installed application", I basically mean any application visible in [Control Panel]->[Add/Remove Programs]. </p>
<p>I would prefer to do it in Python, but C or C++ is also fine.</p>
| 6 | 2009-04-29T14:02:12Z | 9,757,013 | <p>The best registry-based implementation that I have seen is the one written by Chris Wright (chris128) posted at <a href="http://www.vbforums.com/showthread.php?t=598355" rel="nofollow">http://www.vbforums.com/showthread.php?t=598355</a>. It uses multiple registry keys and is a lot more complex than any of the answers currently posted here. It seems to produce identical results to the Add/Remove Programs app, and like the ARP app it also provides an option to include updates.</p>
<p>Although it's implemented in VB.NET, it should be easy to convert to other .NET languages like C# or IronPython. I imagine that converting to IronPython first should make it fairly easy to port to regular Python if that's what you want, but I only converted it to C# myself and then cleaned up the code a bit.</p>
<p>Only one small bug to point out: GetUserInstallerKeyPrograms() doesn't add the version for user programs to the list, even though it extracts it. This is easy to fix though.</p>
| 3 | 2012-03-18T08:30:37Z | [
"c++",
"python",
"winapi",
"enumeration"
] |
`final` keyword equivalent for variables in Python? | 802,578 | <p>I couldn't find documentation on an equivalent of Java's <code>final</code> in Python, is there such a thing?</p>
<p>I'm creating a snapshot of an object (used for restoration if anything fails); once this backup variable is assigned, it should not be modified -- a final-like feature in Python would be nice for this.</p>
| 25 | 2009-04-29T14:21:07Z | 802,617 | <p>Python has no equivalent of "final". It doesn't have "public" and "protected" either, except by naming convention. It's not that kind of "bondage and discipline" language.</p>
| 5 | 2009-04-29T14:28:29Z | [
"java",
"python",
"keyword",
"final"
] |
`final` keyword equivalent for variables in Python? | 802,578 | <p>I couldn't find documentation on an equivalent of Java's <code>final</code> in Python, is there such a thing?</p>
<p>I'm creating a snapshot of an object (used for restoration if anything fails); once this backup variable is assigned, it should not be modified -- a final-like feature in Python would be nice for this.</p>
| 25 | 2009-04-29T14:21:07Z | 802,622 | <p>There is no such thing. In general, the Python attitude is "if you don't want this modified, just don't modify it". Clients of an API are unlikely to just poke around your undocumented internals anyway.</p>
<p>You could, I suppose, work around this by using tuples or namedtuples for the relevant bits of your model, which are inherently immutable. That still doesn't help with any part of your model that has to be mutable of course.</p>
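As a sketch of the namedtuple approach mentioned above (shown in Python 3 syntax; the `Snapshot` type and its field names are invented for illustration, not taken from the question):

```python
from collections import namedtuple

# "Snapshot" and its fields are invented names for illustration.
Snapshot = namedtuple('Snapshot', ['state', 'timestamp'])

backup = Snapshot(state='saved-state', timestamp=1234567890)

try:
    backup.timestamp = 0  # namedtuple fields cannot be reassigned
    mutated = True
except AttributeError:
    mutated = False

print(backup, 'mutated:', mutated)
```

Note that anything stored inside the fields (such as a dict) is still mutable, which matches the caveat about mutable parts of the model above.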
| 7 | 2009-04-29T14:29:04Z | [
"java",
"python",
"keyword",
"final"
] |
`final` keyword equivalent for variables in Python? | 802,578 | <p>I couldn't find documentation on an equivalent of Java's <code>final</code> in Python, is there such a thing?</p>
<p>I'm creating a snapshot of an object (used for restoration if anything fails); once this backup variable is assigned, it should not be modified -- a final-like feature in Python would be nice for this.</p>
| 25 | 2009-04-29T14:21:07Z | 802,623 | <p><a href="http://code.activestate.com/recipes/576527/" rel="nofollow">http://code.activestate.com/recipes/576527/</a> defines a freeze function, although it doesn't work perfectly.</p>
<p>I would consider just leaving it mutable though.</p>
| 3 | 2009-04-29T14:29:05Z | [
"java",
"python",
"keyword",
"final"
] |
`final` keyword equivalent for variables in Python? | 802,578 | <p>I couldn't find documentation on an equivalent of Java's <code>final</code> in Python, is there such a thing?</p>
<p>I'm creating a snapshot of an object (used for restoration if anything fails); once this backup variable is assigned, it should not be modified -- a final-like feature in Python would be nice for this.</p>
| 25 | 2009-04-29T14:21:07Z | 802,631 | <p>There is no "final" equivalent in Python.</p>
<p>But, to create read-only fields of class instances, you can use the <a href="http://docs.python.org/3.0/library/functions.html#property">property</a> function.</p>
<p><strong>Edit</strong>: perhaps you want something like this:</p>
<pre><code>class WriteOnceReadWhenever:
def __setattr__(self, attr, value):
if hasattr(self, attr):
raise Exception("Attempting to alter read-only value")
self.__dict__[attr] = value
</code></pre>
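A usage sketch of the class above (the class definition is repeated so the snippet runs standalone; `backup` is an invented attribute name):

```python
class WriteOnceReadWhenever:
    def __setattr__(self, attr, value):
        if hasattr(self, attr):
            raise Exception("Attempting to alter read-only value")
        self.__dict__[attr] = value

obj = WriteOnceReadWhenever()
obj.backup = "snapshot"  # first assignment is allowed

try:
    obj.backup = "something else"  # any further assignment raises
    reassigned = True
except Exception:
    reassigned = False

print(obj.backup, 'reassigned:', reassigned)
```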
| 43 | 2009-04-29T14:30:34Z | [
"java",
"python",
"keyword",
"final"
] |
`final` keyword equivalent for variables in Python? | 802,578 | <p>I couldn't find documentation on an equivalent of Java's <code>final</code> in Python, is there such a thing?</p>
<p>I'm creating a snapshot of an object (used for restoration if anything fails); once this backup variable is assigned, it should not be modified -- a final-like feature in Python would be nice for this.</p>
| 25 | 2009-04-29T14:21:07Z | 802,642 | <p>Having a variable in Java be <code>final</code> basically means that once you assign to a variable, you may not reassign that variable to point to another object. It actually doesn't mean that the object can't be modified. For example, the following Java code works perfectly well:</p>
<pre><code>public final List<String> messages = new LinkedList<String>();
public void addMessage()
{
messages.add("Hello World!"); // this mutates the messages list
}
</code></pre>
<p>but the following wouldn't even compile:</p>
<pre><code>public final List<String> messages = new LinkedList<String>();
public void changeMessages()
{
messages = new ArrayList<String>(); // can't change a final variable
}
</code></pre>
<p>So your question is about whether <code>final</code> exists in Python. It does not.</p>
<p>However, Python does have immutable data structures. For example, while you can mutate a <code>list</code>, you can't mutate a <code>tuple</code>. You can mutate a <code>set</code> but not a <code>frozenset</code>, etc.</p>
<p>My advice would be to just not worry about enforcing non-mutation at the language level and simply concentrate on making sure that you don't write any code which mutates these objects after they're assigned.</p>
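For instance, the immutable counterparts behave like this (a small sketch in Python 3 syntax, not from the original answer):

```python
point = (1, 2)                  # tuple: immutable counterpart of list
try:
    point[0] = 10               # item assignment is rejected
    tuple_mutated = True
except TypeError:
    tuple_mutated = False

tags = frozenset({'a', 'b'})    # frozenset: immutable counterpart of set
has_add = hasattr(tags, 'add')  # frozensets expose no mutating methods

print(tuple_mutated, has_add)
```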
| 37 | 2009-04-29T14:32:44Z | [
"java",
"python",
"keyword",
"final"
] |
`final` keyword equivalent for variables in Python? | 802,578 | <p>I couldn't find documentation on an equivalent of Java's <code>final</code> in Python, is there such a thing?</p>
<p>I'm creating a snapshot of an object (used for restoration if anything fails); once this backup variable is assigned, it should not be modified -- a final-like feature in Python would be nice for this.</p>
| 25 | 2009-04-29T14:21:07Z | 802,679 | <p>An assign-once variable is a design issue. You design your application in a way that the variable is set once and once only.</p>
<p>However, if you want run-time checking of your design, you can do it with a wrapper around the object.</p>
<pre><code>class OnePingOnlyPleaseVassily(object):
    def __init__(self):
        self.value = None
    def set(self, value):
        if self.value is not None:
            raise Exception("Already set.")
        self.value = value

someStateMemo = OnePingOnlyPleaseVassily()
someStateMemo.set(aValue)  # works
someStateMemo.set(aValue)  # fails
</code></pre>
<p>That's clunky, but it will detect design problems at run time.</p>
| 8 | 2009-04-29T14:41:33Z | [
"java",
"python",
"keyword",
"final"
] |
`final` keyword equivalent for variables in Python? | 802,578 | <p>I couldn't find documentation on an equivalent of Java's <code>final</code> in Python, is there such a thing?</p>
<p>I'm creating a snapshot of an object (used for restoration if anything fails); once this backup variable is assigned, it should not be modified -- a final-like feature in Python would be nice for this.</p>
| 25 | 2009-04-29T14:21:07Z | 1,062,995 | <p>You can simulate something like that through the <a href="http://docs.python.org/reference/datamodel.html#customizing-attribute-access" rel="nofollow">descriptor protocol</a>, since it allows you to define how a variable is read and set.</p>
<pre><code>class Foo(object):
    def __init__(self):
        self._myvar = None
    @property
    def myvar(self):
        return self._myvar
    @myvar.setter
    def myvar(self, newvalue):
        if self._myvar is None:  # ignore writes once a value is set
            self._myvar = newvalue
a = Foo()
a.myvar = 5
print a.myvar
a.myvar = 6  # does nothing, since myvar is already set
</code></pre>
| 5 | 2009-06-30T10:41:27Z | [
"java",
"python",
"keyword",
"final"
] |
`final` keyword equivalent for variables in Python? | 802,578 | <p>I couldn't find documentation on an equivalent of Java's <code>final</code> in Python, is there such a thing?</p>
<p>I'm creating a snapshot of an object (used for restoration if anything fails); once this backup variable is assigned, it should not be modified -- a final-like feature in Python would be nice for this.</p>
| 25 | 2009-04-29T14:21:07Z | 18,670,083 | <p>Although this is an old question, I figured I would add yet another potential option: you can also use <code>assert</code> to verify a variable is set to what you originally intended, a double check, if you will. Although this is not the same as <code>final</code> in Java, it can be used to create a similar effect:</p>
<pre><code>PI = 3.14
radius = 3
try:
assert PI == 3.14
print PI * radius**2
except AssertionError:
print "Yikes."
</code></pre>
<p>As seen above, if <code>PI</code> were for some reason not set to <code>3.14</code>, an <code>AssertionError</code> would be thrown, so a <code>try/except</code> block would probably be a wise addition. Regardless, it may come in handy depending on your situation.</p>
| 0 | 2013-09-07T05:29:23Z | [
"java",
"python",
"keyword",
"final"
] |
Python - Possible to force raise exceptions regardless of try/except blocks? | 802,804 | <p>In Python, is there any language (or interpreter) feature to force the interpreter to always raise exceptions, even if the offending code is inside a try/except block?</p>
<p>I've just inherited a large, old codebase written in Python, whose purpose is to communicate with some custom-designed hardware we also developed.
Many communication errors and timeouts are being masked/missed due to the following (simplified) pattern of code:</p>
<pre><code>try:
serialport.write(MSG)
except:
some_logging_function_mostly_not_working_that_might_be_here_or_not()
#or just:
#pass
</code></pre>
<p>In order to avoid the typical scenario of "just rewrite the whole thing from scratch", I'm currently trying to fix all the exception errors/timeouts. I'm doing this by disabling all the exception-handling code by hand, one block at a time.</p>
| 5 | 2009-04-29T15:06:40Z | 802,822 | <p>You can use multiple exception handlers to handle multiple exceptions.</p>
<pre><code>try:
serialport.write(MSG)
except Handler1:
some_logging_function_mostly_not_working_that_might_be_here_or_not()
#or just:
#pass
except Handler2:
some_logging_function_mostly_not_working_that_might_be_here_or_not2()
#or just:
#pass
</code></pre>
| -3 | 2009-04-29T15:10:59Z | [
"python",
"exception"
] |
Python - Possible to force raise exceptions regardless of try/except blocks? | 802,804 | <p>In Python, is there any language (or interpreter) feature to force the interpreter to always raise exceptions, even if the offending code is inside a try/except block?</p>
<p>I've just inherited a large, old codebase written in Python, whose purpose is to communicate with some custom-designed hardware we also developed.
Many communication errors and timeouts are being masked/missed due to the following (simplified) pattern of code:</p>
<pre><code>try:
serialport.write(MSG)
except:
some_logging_function_mostly_not_working_that_might_be_here_or_not()
#or just:
#pass
</code></pre>
<p>In order to avoid the typical scenario of "just rewrite the whole thing from scratch", I'm currently trying to fix all the exception errors/timeouts. I'm doing this by disabling all the exception-handling code by hand, one block at a time.</p>
| 5 | 2009-04-29T15:06:40Z | 802,826 | <p>The "all-exceptions" <code>except:</code> block is a remarkably bad thing and must simply be found and replaced with sensible except handling.</p>
<p>In this case <code>grep</code> is your friend. A good IDE can help make these unpleasant chores manageable. </p>
<p>But there's no "ignore the code as written" option in Python.</p>
| 12 | 2009-04-29T15:12:16Z | [
"python",
"exception"
] |
Python - Possible to force raise exceptions regardless of try/except blocks? | 802,804 | <p>In Python, is there any language (or interpreter) feature to force the interpreter to always raise exceptions, even if the offending code is inside a try/except block?</p>
<p>I've just inherited a large, old codebase written in Python, whose purpose is to communicate with some custom-designed hardware we also developed.
Many communication errors and timeouts are being masked/missed due to the following (simplified) pattern of code:</p>
<pre><code>try:
serialport.write(MSG)
except:
some_logging_function_mostly_not_working_that_might_be_here_or_not()
#or just:
#pass
</code></pre>
<p>In order to avoid the typical scenario of "just rewrite the whole thing from scratch", I'm currently trying to fix all the exception errors/timeouts. I'm doing this by disabling all the exception-handling code by hand, one block at a time.</p>
| 5 | 2009-04-29T15:06:40Z | 802,837 | <p>No, not really. Your best bet is to change the code to something more like this:</p>
<pre><code>try:
serialport.write(MSG)
except:
some_logging_function_mostly_not_working_that_might_be_here_or_not()
raise
</code></pre>
<p>This will make it re-raise the exact exception. The main thing that you need to understand is that if there were a way to make all exceptions exit the system, you wouldn't be able to use a for loop (iterators raise a StopIteration exception).</p>
| 3 | 2009-04-29T15:15:44Z | [
"python",
"exception"
] |
Getting realtime output using subprocess | 803,265 | <p>I am trying to write a wrapper script for a command line program (svnadmin verify) that will display a nice progress indicator for the operation. This requires me to be able to see each line of output from the wrapped program as soon as it is output.</p>
<p>I figured that I'd just execute the program using subprocess.Popen, use stdout=PIPE, then read each line as it came in and act on it accordingly. However, when I ran the following code, the output appeared to be buffered somewhere, causing it to appear in two chunks, lines 1 through 332, then 333 through 439 (the last line of output)</p>
<pre><code>from subprocess import Popen, PIPE, STDOUT
p = Popen('svnadmin verify /var/svn/repos/config', stdout = PIPE,
stderr = STDOUT, shell = True)
for line in p.stdout:
print line.replace('\n', '')
</code></pre>
<p>After looking at the documentation on subprocess a little, I discovered the bufsize parameter to Popen, so I tried setting bufsize to 1 (buffer each line) and 0 (no buffer), but neither value seemed to change the way the lines were being delivered.</p>
<p>At this point I was starting to grasp for straws, so I wrote the following output loop:</p>
<pre><code>while True:
try:
print p.stdout.next().replace('\n', '')
except StopIteration:
break
</code></pre>
<p>but got the same result.</p>
<p>Is it possible to get 'realtime' program output of a program executed using subprocess? Is there some other option in Python that is forward-compatible (not exec*)?</p>
| 64 | 2009-04-29T16:45:48Z | 803,396 | <p>You can try this:</p>
<pre><code>import subprocess
import sys
process = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
while True:
out = process.stdout.read(1)
    if out == '' and process.poll() is not None:
break
if out != '':
sys.stdout.write(out)
sys.stdout.flush()
</code></pre>
<p>If you use readline instead of read, there will be some cases where the input message is not printed. Try it with a command that requires inline input and see for yourself.</p>
| 11 | 2009-04-29T17:21:10Z | [
"python",
"subprocess"
] |
Getting realtime output using subprocess | 803,265 | <p>I am trying to write a wrapper script for a command line program (svnadmin verify) that will display a nice progress indicator for the operation. This requires me to be able to see each line of output from the wrapped program as soon as it is output.</p>
<p>I figured that I'd just execute the program using subprocess.Popen, use stdout=PIPE, then read each line as it came in and act on it accordingly. However, when I ran the following code, the output appeared to be buffered somewhere, causing it to appear in two chunks, lines 1 through 332, then 333 through 439 (the last line of output)</p>
<pre><code>from subprocess import Popen, PIPE, STDOUT
p = Popen('svnadmin verify /var/svn/repos/config', stdout = PIPE,
stderr = STDOUT, shell = True)
for line in p.stdout:
print line.replace('\n', '')
</code></pre>
<p>After looking at the documentation on subprocess a little, I discovered the bufsize parameter to Popen, so I tried setting bufsize to 1 (buffer each line) and 0 (no buffer), but neither value seemed to change the way the lines were being delivered.</p>
<p>At this point I was starting to grasp for straws, so I wrote the following output loop:</p>
<pre><code>while True:
try:
print p.stdout.next().replace('\n', '')
except StopIteration:
break
</code></pre>
<p>but got the same result.</p>
<p>Is it possible to get 'realtime' program output of a program executed using subprocess? Is there some other option in Python that is forward-compatible (not exec*)?</p>
| 64 | 2009-04-29T16:45:48Z | 803,408 | <p>I ran into the same problem awhile back. My solution was to ditch iteration in favor of the <code>read</code> method, which will return immediately even if your subprocess isn't finished executing, etc.</p>
| 2 | 2009-04-29T17:24:09Z | [
"python",
"subprocess"
] |
Getting realtime output using subprocess | 803,265 | <p>I am trying to write a wrapper script for a command line program (svnadmin verify) that will display a nice progress indicator for the operation. This requires me to be able to see each line of output from the wrapped program as soon as it is output.</p>
<p>I figured that I'd just execute the program using subprocess.Popen, use stdout=PIPE, then read each line as it came in and act on it accordingly. However, when I ran the following code, the output appeared to be buffered somewhere, causing it to appear in two chunks, lines 1 through 332, then 333 through 439 (the last line of output)</p>
<pre><code>from subprocess import Popen, PIPE, STDOUT
p = Popen('svnadmin verify /var/svn/repos/config', stdout = PIPE,
stderr = STDOUT, shell = True)
for line in p.stdout:
print line.replace('\n', '')
</code></pre>
<p>After looking at the documentation on subprocess a little, I discovered the bufsize parameter to Popen, so I tried setting bufsize to 1 (buffer each line) and 0 (no buffer), but neither value seemed to change the way the lines were being delivered.</p>
<p>At this point I was starting to grasp for straws, so I wrote the following output loop:</p>
<pre><code>while True:
try:
print p.stdout.next().replace('\n', '')
except StopIteration:
break
</code></pre>
<p>but got the same result.</p>
<p>Is it possible to get 'realtime' program output of a program executed using subprocess? Is there some other option in Python that is forward-compatible (not exec*)?</p>
| 64 | 2009-04-29T16:45:48Z | 803,421 | <p>I tried this, and for some reason while the code</p>
<pre><code>for line in p.stdout:
...
</code></pre>
<p>buffers aggressively, the variant</p>
<pre><code>while True:
line = p.stdout.readline()
if not line: break
...
</code></pre>
<p>does not. Apparently this is a known bug: <a href="http://bugs.python.org/issue3907">http://bugs.python.org/issue3907</a></p>
| 43 | 2009-04-29T17:26:53Z | [
"python",
"subprocess"
] |
Getting realtime output using subprocess | 803,265 | <p>I am trying to write a wrapper script for a command line program (svnadmin verify) that will display a nice progress indicator for the operation. This requires me to be able to see each line of output from the wrapped program as soon as it is output.</p>
<p>I figured that I'd just execute the program using subprocess.Popen, use stdout=PIPE, then read each line as it came in and act on it accordingly. However, when I ran the following code, the output appeared to be buffered somewhere, causing it to appear in two chunks, lines 1 through 332, then 333 through 439 (the last line of output)</p>
<pre><code>from subprocess import Popen, PIPE, STDOUT
p = Popen('svnadmin verify /var/svn/repos/config', stdout = PIPE,
stderr = STDOUT, shell = True)
for line in p.stdout:
print line.replace('\n', '')
</code></pre>
<p>After looking at the documentation on subprocess a little, I discovered the bufsize parameter to Popen, so I tried setting bufsize to 1 (buffer each line) and 0 (no buffer), but neither value seemed to change the way the lines were being delivered.</p>
<p>At this point I was starting to grasp for straws, so I wrote the following output loop:</p>
<pre><code>while True:
try:
print p.stdout.next().replace('\n', '')
except StopIteration:
break
</code></pre>
<p>but got the same result.</p>
<p>Is it possible to get 'realtime' program output of a program executed using subprocess? Is there some other option in Python that is forward-compatible (not exec*)?</p>
| 64 | 2009-04-29T16:45:48Z | 3,186,931 | <p>Using pexpect [ <a href="http://www.noah.org/wiki/Pexpect" rel="nofollow">http://www.noah.org/wiki/Pexpect</a> ] with non-blocking readlines will resolve this problem. It stems from the fact that the child's stdout is block-buffered when attached to a pipe rather than a terminal, so its output doesn't reach you until the buffer fills or the process dies; pexpect runs the child in a pseudo-terminal, which keeps its output line-buffered.</p>
| 0 | 2010-07-06T14:09:22Z | [
"python",
"subprocess"
] |
Getting realtime output using subprocess | 803,265 | <p>I am trying to write a wrapper script for a command line program (svnadmin verify) that will display a nice progress indicator for the operation. This requires me to be able to see each line of output from the wrapped program as soon as it is output.</p>
<p>I figured that I'd just execute the program using subprocess.Popen, use stdout=PIPE, then read each line as it came in and act on it accordingly. However, when I ran the following code, the output appeared to be buffered somewhere, causing it to appear in two chunks, lines 1 through 332, then 333 through 439 (the last line of output)</p>
<pre><code>from subprocess import Popen, PIPE, STDOUT
p = Popen('svnadmin verify /var/svn/repos/config', stdout = PIPE,
stderr = STDOUT, shell = True)
for line in p.stdout:
print line.replace('\n', '')
</code></pre>
<p>After looking at the documentation on subprocess a little, I discovered the bufsize parameter to Popen, so I tried setting bufsize to 1 (buffer each line) and 0 (no buffer), but neither value seemed to change the way the lines were being delivered.</p>
<p>At this point I was starting to grasp for straws, so I wrote the following output loop:</p>
<pre><code>while True:
try:
print p.stdout.next().replace('\n', '')
except StopIteration:
break
</code></pre>
<p>but got the same result.</p>
<p>Is it possible to get 'realtime' program output of a program executed using subprocess? Is there some other option in Python that is forward-compatible (not exec*)?</p>
| 64 | 2009-04-29T16:45:48Z | 6,414,278 | <pre><code>p = subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=1)
for line in iter(p.stdout.readline, b''):
print line,
p.stdout.close()
p.wait()
</code></pre>
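A self-contained version of the same pattern, shown in Python 3 syntax; the child command here is just a stand-in for svnadmin (it spawns the current Python interpreter to emit three lines):

```python
import subprocess
import sys

# Child process that prints three lines; '-u' disables its output buffering.
child = [sys.executable, '-u', '-c',
         "for i in range(3): print('line %d' % i)"]

p = subprocess.Popen(child, stdout=subprocess.PIPE)
lines = []
for line in iter(p.stdout.readline, b''):  # readline returns b'' at EOF
    lines.append(line.rstrip().decode())
p.stdout.close()
p.wait()
print(lines)  # -> ['line 0', 'line 1', 'line 2']
```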
| 21 | 2011-06-20T16:13:51Z | [
"python",
"subprocess"
] |
Getting realtime output using subprocess | 803,265 | <p>I am trying to write a wrapper script for a command line program (svnadmin verify) that will display a nice progress indicator for the operation. This requires me to be able to see each line of output from the wrapped program as soon as it is output.</p>
<p>I figured that I'd just execute the program using subprocess.Popen, use stdout=PIPE, then read each line as it came in and act on it accordingly. However, when I ran the following code, the output appeared to be buffered somewhere, causing it to appear in two chunks, lines 1 through 332, then 333 through 439 (the last line of output)</p>
<pre><code>from subprocess import Popen, PIPE, STDOUT
p = Popen('svnadmin verify /var/svn/repos/config', stdout = PIPE,
stderr = STDOUT, shell = True)
for line in p.stdout:
print line.replace('\n', '')
</code></pre>
<p>After looking at the documentation on subprocess a little, I discovered the bufsize parameter to Popen, so I tried setting bufsize to 1 (buffer each line) and 0 (no buffer), but neither value seemed to change the way the lines were being delivered.</p>
<p>At this point I was starting to grasp for straws, so I wrote the following output loop:</p>
<pre><code>while True:
try:
print p.stdout.next().replace('\n', '')
except StopIteration:
break
</code></pre>
<p>but got the same result.</p>
<p>Is it possible to get 'realtime' program output of a program executed using subprocess? Is there some other option in Python that is forward-compatible (not exec*)?</p>
| 64 | 2009-04-29T16:45:48Z | 15,288,921 | <p>Complete solution:</p>
<pre><code>import contextlib
import subprocess
# Unix, Windows and old Macintosh end-of-line
newlines = ['\n', '\r\n', '\r']
def unbuffered(proc, stream='stdout'):
stream = getattr(proc, stream)
with contextlib.closing(stream):
while True:
out = []
last = stream.read(1)
# Don't loop forever
if last == '' and proc.poll() is not None:
break
while last not in newlines:
# Don't loop forever
if last == '' and proc.poll() is not None:
break
out.append(last)
last = stream.read(1)
out = ''.join(out)
yield out
def example():
cmd = ['ls', '-l', '/']
proc = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
# Make all end-of-lines '\n'
universal_newlines=True,
)
for line in unbuffered(proc):
print line
example()
</code></pre>
| 0 | 2013-03-08T07:33:30Z | [
"python",
"subprocess"
] |
Getting realtime output using subprocess | 803,265 | <p>I am trying to write a wrapper script for a command line program (svnadmin verify) that will display a nice progress indicator for the operation. This requires me to be able to see each line of output from the wrapped program as soon as it is output.</p>
<p>I figured that I'd just execute the program using subprocess.Popen, use stdout=PIPE, then read each line as it came in and act on it accordingly. However, when I ran the following code, the output appeared to be buffered somewhere, causing it to appear in two chunks, lines 1 through 332, then 333 through 439 (the last line of output)</p>
<pre><code>from subprocess import Popen, PIPE, STDOUT
p = Popen('svnadmin verify /var/svn/repos/config', stdout = PIPE,
stderr = STDOUT, shell = True)
for line in p.stdout:
print line.replace('\n', '')
</code></pre>
<p>After looking at the documentation on subprocess a little, I discovered the bufsize parameter to Popen, so I tried setting bufsize to 1 (buffer each line) and 0 (no buffer), but neither value seemed to change the way the lines were being delivered.</p>
<p>At this point I was starting to grasp for straws, so I wrote the following output loop:</p>
<pre><code>while True:
try:
print p.stdout.next().replace('\n', '')
except StopIteration:
break
</code></pre>
<p>but got the same result.</p>
<p>Is it possible to get 'realtime' program output of a program executed using subprocess? Is there some other option in Python that is forward-compatible (not exec*)?</p>
| 64 | 2009-04-29T16:45:48Z | 21,317,334 | <p>I used this solution to get realtime output from a subprocess. This loop will stop as soon as the process completes, removing the need for a break statement or a possible infinite loop.</p>
<pre><code>sub_process = subprocess.Popen(my_command, close_fds=True, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
while sub_process.poll() is None:
out = sub_process.stdout.read(1)
sys.stdout.write(out)
sys.stdout.flush()
</code></pre>
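<p>One caveat with a byte-at-a-time loop like this: output that is still buffered in the pipe when <code>poll()</code> first reports exit can be lost. A sketch of a variant that drains the pipe after the loop — the <code>stream_output</code> wrapper name is an assumption, not part of the original answer:</p>

```python
import subprocess
import sys

def stream_output(cmd):
    """Run cmd through the shell, echoing output as it arrives; return it all."""
    proc = subprocess.Popen(cmd, shell=True,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    chunks = []
    while proc.poll() is None:
        out = proc.stdout.read(1)      # one byte at a time, as in the answer
        if out:
            sys.stdout.write(out.decode("utf-8", "replace"))
            sys.stdout.flush()
            chunks.append(out)
    rest = proc.stdout.read()          # drain whatever was still buffered at exit
    if rest:
        sys.stdout.write(rest.decode("utf-8", "replace"))
        chunks.append(rest)
    return b"".join(chunks)
```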
| 1 | 2014-01-23T19:14:21Z | [
"python",
"subprocess"
] |
Getting realtime output using subprocess | 803,265 | <p>I am trying to write a wrapper script for a command line program (svnadmin verify) that will display a nice progress indicator for the operation. This requires me to be able to see each line of output from the wrapped program as soon as it is output.</p>
<p>I figured that I'd just execute the program using subprocess.Popen, use stdout=PIPE, then read each line as it came in and act on it accordingly. However, when I ran the following code, the output appeared to be buffered somewhere, causing it to appear in two chunks, lines 1 through 332, then 333 through 439 (the last line of output)</p>
<pre><code>from subprocess import Popen, PIPE, STDOUT
p = Popen('svnadmin verify /var/svn/repos/config', stdout = PIPE,
stderr = STDOUT, shell = True)
for line in p.stdout:
print line.replace('\n', '')
</code></pre>
<p>After looking at the documentation on subprocess a little, I discovered the bufsize parameter to Popen, so I tried setting bufsize to 1 (buffer each line) and 0 (no buffer), but neither value seemed to change the way the lines were being delivered.</p>
<p>At this point I was starting to grasp at straws, so I wrote the following output loop:</p>
<pre><code>while True:
try:
print p.stdout.next().replace('\n', '')
except StopIteration:
break
</code></pre>
<p>but got the same result.</p>
<p>Is it possible to get 'realtime' program output of a program executed using subprocess? Is there some other option in Python that is forward-compatible (not exec*)?</p>
| 64 | 2009-04-29T16:45:48Z | 30,214,720 | <p>Real-time output issue resolved:
I encountered a similar issue in Python while capturing the real-time output from a C program. I added "<strong>fflush(stdout)</strong>;" in my C code. It worked for me. Here is a snippet of the code </p>
<p><< C Program >></p>
<pre><code>#include <stdio.h>
void main()
{
int count = 1;
while (1)
{
printf(" Count %d\n", count++);
fflush(stdout);
sleep(1);
}
}
</code></pre>
<p><< Python Program >></p>
<pre><code>#!/usr/bin/python
import os, sys
import subprocess
procExe = subprocess.Popen(".//count", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
while procExe.poll() is None:
line = procExe.stdout.readline()
print("Print:" + line)
</code></pre>
<p><< OUTPUT >></p>
<pre><code>Print: Count 1
Print: Count 2
Print: Count 3
</code></pre>
<p>Hope it helps.</p>
<p>~sairam</p>
| 0 | 2015-05-13T12:26:10Z | [
"python",
"subprocess"
] |
Getting realtime output using subprocess | 803,265 | <p>I am trying to write a wrapper script for a command line program (svnadmin verify) that will display a nice progress indicator for the operation. This requires me to be able to see each line of output from the wrapped program as soon as it is output.</p>
<p>I figured that I'd just execute the program using subprocess.Popen, use stdout=PIPE, then read each line as it came in and act on it accordingly. However, when I ran the following code, the output appeared to be buffered somewhere, causing it to appear in two chunks, lines 1 through 332, then 333 through 439 (the last line of output)</p>
<pre><code>from subprocess import Popen, PIPE, STDOUT
p = Popen('svnadmin verify /var/svn/repos/config', stdout = PIPE,
stderr = STDOUT, shell = True)
for line in p.stdout:
print line.replace('\n', '')
</code></pre>
<p>After looking at the documentation on subprocess a little, I discovered the bufsize parameter to Popen, so I tried setting bufsize to 1 (buffer each line) and 0 (no buffer), but neither value seemed to change the way the lines were being delivered.</p>
<p>At this point I was starting to grasp at straws, so I wrote the following output loop:</p>
<pre><code>while True:
try:
print p.stdout.next().replace('\n', '')
except StopIteration:
break
</code></pre>
<p>but got the same result.</p>
<p>Is it possible to get 'realtime' program output of a program executed using subprocess? Is there some other option in Python that is forward-compatible (not exec*)?</p>
| 64 | 2009-04-29T16:45:48Z | 36,660,310 | <p>This is what I did:</p>
<pre><code>def read_command_output(command):
process = subprocess.Popen(command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
return iter(process.stdout.readline, b"")
for output_line in read_command_output(cmd):
    print(output_line)
</code></pre>
<p>It continuously prints new lines as they are output by the subprocess. </p>
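<p>For reference, a self-contained sketch of the same <code>iter(readline, b"")</code> idea that can be run directly — the demo command uses <code>sys.executable</code> and is an assumption for illustration only:</p>

```python
import subprocess
import sys

def stream_lines(cmd):
    """Yield the command's output line by line as it is produced."""
    process = subprocess.Popen(cmd,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT)
    # iter(readline, b"") stops when readline returns an empty bytestring,
    # i.e. when the exiting child closes the pipe.
    for line in iter(process.stdout.readline, b""):
        yield line
    process.wait()

if __name__ == "__main__":
    demo = [sys.executable, "-c", "print('one'); print('two')"]
    for line in stream_lines(demo):
        sys.stdout.write(line.decode())
```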
| -1 | 2016-04-16T05:01:55Z | [
"python",
"subprocess"
] |
Getting realtime output using subprocess | 803,265 | <p>I am trying to write a wrapper script for a command line program (svnadmin verify) that will display a nice progress indicator for the operation. This requires me to be able to see each line of output from the wrapped program as soon as it is output.</p>
<p>I figured that I'd just execute the program using subprocess.Popen, use stdout=PIPE, then read each line as it came in and act on it accordingly. However, when I ran the following code, the output appeared to be buffered somewhere, causing it to appear in two chunks, lines 1 through 332, then 333 through 439 (the last line of output)</p>
<pre><code>from subprocess import Popen, PIPE, STDOUT
p = Popen('svnadmin verify /var/svn/repos/config', stdout = PIPE,
stderr = STDOUT, shell = True)
for line in p.stdout:
print line.replace('\n', '')
</code></pre>
<p>After looking at the documentation on subprocess a little, I discovered the bufsize parameter to Popen, so I tried setting bufsize to 1 (buffer each line) and 0 (no buffer), but neither value seemed to change the way the lines were being delivered.</p>
<p>At this point I was starting to grasp at straws, so I wrote the following output loop:</p>
<pre><code>while True:
try:
print p.stdout.next().replace('\n', '')
except StopIteration:
break
</code></pre>
<p>but got the same result.</p>
<p>Is it possible to get 'realtime' program output of a program executed using subprocess? Is there some other option in Python that is forward-compatible (not exec*)?</p>
| 64 | 2009-04-29T16:45:48Z | 38,745,040 | <p>Found this "plug-and-play" function <a href="http://www.saltycrane.com/blog/2009/10/how-capture-stdout-in-real-time-python/" rel="nofollow">here</a>. Worked like a charm!</p>
<pre><code>import subprocess
def myrun(cmd):
"""from http://blog.kagesenshi.org/2008/02/teeing-python-subprocesspopen-output.html
"""
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout = []
while True:
line = p.stdout.readline()
stdout.append(line)
print line,
        if line == '' and p.poll() is not None:
break
return ''.join(stdout)
</code></pre>
| -1 | 2016-08-03T13:28:47Z | [
"python",
"subprocess"
] |
Inserting multiple model instances using a single db.put() on Google App Engine | 803,517 | <p><strong>Edit:</strong> Sorry I didn't clarify this, it's a Google App Engine related question.</p>
<p>According to <a href="http://code.google.com/appengine/docs/python/datastore/functions.html" rel="nofollow">this</a>, I can give db.put() a list of model instances and ask it to input them all into the datastore. However, I haven't been able to do this successfully. I'm still a little new with Python, so go easy on me.</p>
<pre><code>list_of_models = []
for i in range(0, len(items) - 1):
point = ModelName()
... put the model info here ...
list_of_models.append(point)
db.put(list_of_models)
</code></pre>
<p>Could anyone point out where I'm going wrong?</p>
| 3 | 2009-04-29T17:53:25Z | 804,295 | <p>Please define what you mean by "going wrong" -- the tiny pieces of code you're showing could perfectly well be part of an app that's quite "right". Consider e.g.:</p>
<pre><code>class Hello(db.Model):
name = db.StringProperty()
when = db.DateTimeProperty()
class MainHandler(webapp.RequestHandler):
def get(self):
self.response.out.write('Hello world!')
one = Hello(name='Uno', when=datetime.datetime.now())
two = Hello(name='Due', when=datetime.datetime.now())
both = [one, two]
db.put(both)
</code></pre>
<p>this does insert the two entities correctly each time that get method is called, for example if a sample app continues with:</p>
<pre><code>def main():
application = webapp.WSGIApplication([('/', MainHandler)],
debug=True)
wsgiref.handlers.CGIHandler().run(application)
if __name__ == '__main__':
main()
</code></pre>
<p>as in a typical "hello world" app engine app. You can verify the correct addition of both entities with the datastore viewer of the sdk console, or of course by adding another handler which gets the entities back and shows them, etc etc.</p>
<p>So please clarify!</p>
| 4 | 2009-04-29T21:12:22Z | [
"python",
"google-app-engine"
] |
Merge two lists of lists - Python | 803,526 | <p>This is a great primer but doesn't answer what I need:
<a href="http://stackoverflow.com/questions/464342/combining-two-sorted-lists-in-python">http://stackoverflow.com/questions/464342/combining-two-sorted-lists-in-python</a></p>
<p>I have two Python lists, each is a list of datetime,value pairs:</p>
<pre><code>list_a = [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]]
</code></pre>
<p>And:</p>
<pre><code>list_x = [['1241000884000', 16], ['1241000992000', 16], ['1241001121000', 17], ['1241001545000', 19], ['1241004212000', 20], ['1241006473000', 22]]
</code></pre>
<ol>
<li>There are actually numerous list_a lists with different key/values.</li>
<li>All list_a datetimes are in list_x.</li>
<li>I want to make a list, list_c, corresponding to each list_a which has each datetime from list_x and value_a/value_x.</li>
</ol>
<p>Bonus:</p>
<p>In my real program, list_a is actually a list within a dictionary like so. Taking the answer to the dictionary level would be:</p>
<pre><code>dict = {object_a: [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]], object_b: [['1241004212000', 2]]}
</code></pre>
<p>I can figure that part out though.</p>
| 1 | 2009-04-29T17:57:29Z | 803,561 | <p>Here's some code that does what you asked for. You can turn your list of pairs into a dictionary straightforwardly. Then keys that are shared can be found by intersecting the sets of keys. Finally, constructing the result dictionary is easy given the set of shared keys.</p>
<pre><code>dict_a = dict(list_a)
dict_x = dict(list_x)
shared_keys = set(dict_a).intersection(set(dict_x))
result = dict((k, (dict_a[k], dict_x[k])) for k in shared_keys)
</code></pre>
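<p>For concreteness, here is the same recipe applied to the sample lists from the question (nothing here beyond the snippet above plus that data):</p>

```python
# The question's sample data, merged with the dictionary-intersection recipe.
list_a = [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]]
list_x = [['1241000884000', 16], ['1241000992000', 16], ['1241001121000', 17],
          ['1241001545000', 19], ['1241004212000', 20], ['1241006473000', 22]]

dict_a = dict(list_a)
dict_x = dict(list_x)
shared_keys = set(dict_a).intersection(set(dict_x))
result = dict((k, (dict_a[k], dict_x[k])) for k in shared_keys)
# result == {'1241000884000': (3, 16),
#            '1241004212000': (4, 20),
#            '1241006473000': (11, 22)}
```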
| 3 | 2009-04-29T18:08:34Z | [
"python",
"django",
"list"
] |
Merge two lists of lists - Python | 803,526 | <p>This is a great primer but doesn't answer what I need:
<a href="http://stackoverflow.com/questions/464342/combining-two-sorted-lists-in-python">http://stackoverflow.com/questions/464342/combining-two-sorted-lists-in-python</a></p>
<p>I have two Python lists, each is a list of datetime,value pairs:</p>
<pre><code>list_a = [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]]
</code></pre>
<p>And:</p>
<pre><code>list_x = [['1241000884000', 16], ['1241000992000', 16], ['1241001121000', 17], ['1241001545000', 19], ['1241004212000', 20], ['1241006473000', 22]]
</code></pre>
<ol>
<li>There are actually numerous list_a lists with different key/values.</li>
<li>All list_a datetimes are in list_x.</li>
<li>I want to make a list, list_c, corresponding to each list_a which has each datetime from list_x and value_a/value_x.</li>
</ol>
<p>Bonus:</p>
<p>In my real program, list_a is actually a list within a dictionary like so. Taking the answer to the dictionary level would be:</p>
<pre><code>dict = {object_a: [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]], object_b: [['1241004212000', 2]]}
</code></pre>
<p>I can figure that part out though.</p>
| 1 | 2009-04-29T17:57:29Z | 803,667 | <p>"I want to make a list, list_c, corresponding to each list_a which has each datetime from list_x and value_a/value_x."</p>
<pre><code>def merge_lists( list_a, list_x ):
dict_x= dict(list_x)
for k,v in list_a:
if k in dict_x:
yield k, (v, dict_x[k])
</code></pre>
<p>Something like that may work also.</p>
<pre><code>merged = list( merge_lists( someDict['object_a'], someDict['object_b'] ) )
</code></pre>
<p>This may be slightly quicker because it only makes one dictionary for lookups, and leaves the other list alone.</p>
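<p>Run against the question's sample lists, the generator preserves list_a's ordering — a quick check, using only names from this answer and the question's data:</p>

```python
def merge_lists(list_a, list_x):
    # Build one lookup dict from list_x, then walk list_a in order.
    dict_x = dict(list_x)
    for k, v in list_a:
        if k in dict_x:
            yield k, (v, dict_x[k])

list_a = [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]]
list_x = [['1241000884000', 16], ['1241000992000', 16], ['1241001121000', 17],
          ['1241001545000', 19], ['1241004212000', 20], ['1241006473000', 22]]

merged = list(merge_lists(list_a, list_x))
# merged == [('1241000884000', (3, 16)),
#            ('1241004212000', (4, 20)),
#            ('1241006473000', (11, 22))]
```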
| 3 | 2009-04-29T18:35:51Z | [
"python",
"django",
"list"
] |
Merge two lists of lists - Python | 803,526 | <p>This is a great primer but doesn't answer what I need:
<a href="http://stackoverflow.com/questions/464342/combining-two-sorted-lists-in-python">http://stackoverflow.com/questions/464342/combining-two-sorted-lists-in-python</a></p>
<p>I have two Python lists, each is a list of datetime,value pairs:</p>
<pre><code>list_a = [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]]
</code></pre>
<p>And:</p>
<pre><code>list_x = [['1241000884000', 16], ['1241000992000', 16], ['1241001121000', 17], ['1241001545000', 19], ['1241004212000', 20], ['1241006473000', 22]]
</code></pre>
<ol>
<li>There are actually numerous list_a lists with different key/values.</li>
<li>All list_a datetimes are in list_x.</li>
<li>I want to make a list, list_c, corresponding to each list_a which has each datetime from list_x and value_a/value_x.</li>
</ol>
<p>Bonus:</p>
<p>In my real program, list_a is actually a list within a dictionary like so. Taking the answer to the dictionary level would be:</p>
<pre><code>dict = {object_a: [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]], object_b: [['1241004212000', 2]]}
</code></pre>
<p>I can figure that part out though.</p>
| 1 | 2009-04-29T17:57:29Z | 1,624,575 | <p>Could try extend:</p>
<pre><code>list_a.extend(list_b)
</code></pre>
| 0 | 2009-10-26T12:40:41Z | [
"python",
"django",
"list"
] |
Merge two lists of lists - Python | 803,526 | <p>This is a great primer but doesn't answer what I need:
<a href="http://stackoverflow.com/questions/464342/combining-two-sorted-lists-in-python">http://stackoverflow.com/questions/464342/combining-two-sorted-lists-in-python</a></p>
<p>I have two Python lists, each is a list of datetime,value pairs:</p>
<pre><code>list_a = [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]]
</code></pre>
<p>And:</p>
<pre><code>list_x = [['1241000884000', 16], ['1241000992000', 16], ['1241001121000', 17], ['1241001545000', 19], ['1241004212000', 20], ['1241006473000', 22]]
</code></pre>
<ol>
<li>There are actually numerous list_a lists with different key/values.</li>
<li>All list_a datetimes are in list_x.</li>
<li>I want to make a list, list_c, corresponding to each list_a which has each datetime from list_x and value_a/value_x.</li>
</ol>
<p>Bonus:</p>
<p>In my real program, list_a is actually a list within a dictionary like so. Taking the answer to the dictionary level would be:</p>
<pre><code>dict = {object_a: [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]], object_b: [['1241004212000', 2]]}
</code></pre>
<p>I can figure that part out though.</p>
| 1 | 2009-04-29T17:57:29Z | 1,794,230 | <p>Nothing beats a nice functional one-liner:</p>
<pre><code>reduce(lambda l1, l2: l1 + l2, lists)
</code></pre>
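<p>Note that this one-liner flattens a list of lists by repeated concatenation rather than pairing values by timestamp. For larger inputs, <code>itertools.chain</code> avoids the repeated list copies — an alternative this answer does not mention:</p>

```python
from itertools import chain

# Flatten a list of lists in a single pass, without intermediate copies.
lists = [[1, 2], [3, 4], [5]]
flat = list(chain.from_iterable(lists))
# flat == [1, 2, 3, 4, 5]
```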
| 2 | 2009-11-25T02:08:39Z | [
"python",
"django",
"list"
] |
Merge two lists of lists - Python | 803,526 | <p>This is a great primer but doesn't answer what I need:
<a href="http://stackoverflow.com/questions/464342/combining-two-sorted-lists-in-python">http://stackoverflow.com/questions/464342/combining-two-sorted-lists-in-python</a></p>
<p>I have two Python lists, each is a list of datetime,value pairs:</p>
<pre><code>list_a = [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]]
</code></pre>
<p>And:</p>
<pre><code>list_x = [['1241000884000', 16], ['1241000992000', 16], ['1241001121000', 17], ['1241001545000', 19], ['1241004212000', 20], ['1241006473000', 22]]
</code></pre>
<ol>
<li>There are actually numerous list_a lists with different key/values.</li>
<li>All list_a datetimes are in list_x.</li>
<li>I want to make a list, list_c, corresponding to each list_a which has each datetime from list_x and value_a/value_x.</li>
</ol>
<p>Bonus:</p>
<p>In my real program, list_a is actually a list within a dictionary like so. Taking the answer to the dictionary level would be:</p>
<pre><code>dict = {object_a: [['1241000884000', 3], ['1241004212000', 4], ['1241006473000', 11]], object_b: [['1241004212000', 2]]}
</code></pre>
<p>I can figure that part out though.</p>
| 1 | 2009-04-29T17:57:29Z | 3,417,675 | <p>List concatenation and list extension are available options:</p>
<pre>
>>> mylist = [1, 2]
>>> mylist + [5, 6] # concatenation (mylist itself is unchanged)
[1, 2, 5, 6]
>>> mylist.extend([7, 8]) # extension, in place
>>> print mylist
[1, 2, 7, 8]
>>> mylist += [9, 10] # extension
>>> print mylist
[1, 2, 7, 8, 9, 10]
</pre>
| 2 | 2010-08-05T18:00:15Z | [
"python",
"django",
"list"
] |
Python embedding with threads -- avoiding deadlocks? | 803,566 | <p>Is there any way to embed Python, allow callbacks from Python to C++, allow the Python code to spawn threads, and avoid deadlocks?</p>
<p>The problem is this:</p>
<ul>
<li><p>To call into Python, I need to hold the GIL. Typically, I do this by getting the main thread state when I first create the interpreter, and then using PyEval_RestoreThread() to take the GIL and swap in the thread state before I call into Python.</p></li>
<li><p>When called from Python, I may need to access some protected resources that are protected by a separate critical section in my host. This means that Python will hold the GIL (potentially from some other thread than I initially called into), and then attempt to acquire my protection lock.</p></li>
<li><p>When calling into Python, I may need to hold the same locks, because I may be iterating over some collection of objects, for example.</p></li>
</ul>
<p>The problem is that even if I hold the GIL when I call into Python, Python may give it up, give it to another thread, and then have that thread call into my host, expecting to take the host locks. Meanwhile, the host may take the host locks, and the GIL lock, and call into Python. Deadlock ensues.</p>
<p>The problem here is that Python relinquishes the GIL to another thread while I've called into it. That's what it's expected to do, but it makes it impossible to sequence locking -- even if I first take GIL, then take my own lock, then call Python, Python will call into my system from another thread, expecting to take my own lock (because it un-sequenced the GIL by releasing it).</p>
<p>I can't really make the rest of my system use the GIL for all possible locks in the system -- and that wouldn't even work right, because Python may still release it to another thread.</p>
<p>I can't really guarantee that my host doesn't hold any locks when entering Python, either, because I'm not in control of all the code in the host.</p>
<p>So, is it just the case that this can't be done?</p>
| 7 | 2009-04-29T18:09:33Z | 803,703 | <p>"When calling into Python, I may need to hold the same locks, because I may be iterating over some collection of objects, for example."</p>
<p>This often indicates that a single process with multiple threads isn't appropriate. Perhaps this is a situation where multiple processes -- each with a specific object from the collection -- makes more sense.</p>
<p>Independent process -- each with their own pool of threads -- may be easier to manage.</p>
| 2 | 2009-04-29T18:44:19Z | [
"python",
"multithreading",
"deadlock",
"embedding"
] |
Python embedding with threads -- avoiding deadlocks? | 803,566 | <p>Is there any way to embed Python, allow callbacks from Python to C++, allow the Python code to spawn threads, and avoid deadlocks?</p>
<p>The problem is this:</p>
<ul>
<li><p>To call into Python, I need to hold the GIL. Typically, I do this by getting the main thread state when I first create the interpreter, and then using PyEval_RestoreThread() to take the GIL and swap in the thread state before I call into Python.</p></li>
<li><p>When called from Python, I may need to access some protected resources that are protected by a separate critical section in my host. This means that Python will hold the GIL (potentially from some other thread than I initially called into), and then attempt to acquire my protection lock.</p></li>
<li><p>When calling into Python, I may need to hold the same locks, because I may be iterating over some collection of objects, for example.</p></li>
</ul>
<p>The problem is that even if I hold the GIL when I call into Python, Python may give it up, give it to another thread, and then have that thread call into my host, expecting to take the host locks. Meanwhile, the host may take the host locks, and the GIL lock, and call into Python. Deadlock ensues.</p>
<p>The problem here is that Python relinquishes the GIL to another thread while I've called into it. That's what it's expected to do, but it makes it impossible to sequence locking -- even if I first take GIL, then take my own lock, then call Python, Python will call into my system from another thread, expecting to take my own lock (because it un-sequenced the GIL by releasing it).</p>
<p>I can't really make the rest of my system use the GIL for all possible locks in the system -- and that wouldn't even work right, because Python may still release it to another thread.</p>
<p>I can't really guarantee that my host doesn't hold any locks when entering Python, either, because I'm not in control of all the code in the host.</p>
<p>So, is it just the case that this can't be done?</p>
| 7 | 2009-04-29T18:09:33Z | 804,431 | <p>The code that is called by Python should release the GIL (e.g. via <code>Py_BEGIN_ALLOW_THREADS</code>/<code>Py_END_ALLOW_THREADS</code>) before taking any of your locks.
That way, I believe, it can't get into the deadlock.</p>
| 1 | 2009-04-29T21:49:50Z | [
"python",
"multithreading",
"deadlock",
"embedding"
] |
Python embedding with threads -- avoiding deadlocks? | 803,566 | <p>Is there any way to embed Python, allow callbacks from Python to C++, allow the Python code to spawn threads, and avoid deadlocks?</p>
<p>The problem is this:</p>
<ul>
<li><p>To call into Python, I need to hold the GIL. Typically, I do this by getting the main thread state when I first create the interpreter, and then using PyEval_RestoreThread() to take the GIL and swap in the thread state before I call into Python.</p></li>
<li><p>When called from Python, I may need to access some protected resources that are protected by a separate critical section in my host. This means that Python will hold the GIL (potentially from some other thread than I initially called into), and then attempt to acquire my protection lock.</p></li>
<li><p>When calling into Python, I may need to hold the same locks, because I may be iterating over some collection of objects, for example.</p></li>
</ul>
<p>The problem is that even if I hold the GIL when I call into Python, Python may give it up, give it to another thread, and then have that thread call into my host, expecting to take the host locks. Meanwhile, the host may take the host locks, and the GIL lock, and call into Python. Deadlock ensues.</p>
<p>The problem here is that Python relinquishes the GIL to another thread while I've called into it. That's what it's expected to do, but it makes it impossible to sequence locking -- even if I first take GIL, then take my own lock, then call Python, Python will call into my system from another thread, expecting to take my own lock (because it un-sequenced the GIL by releasing it).</p>
<p>I can't really make the rest of my system use the GIL for all possible locks in the system -- and that wouldn't even work right, because Python may still release it to another thread.</p>
<p>I can't really guarantee that my host doesn't hold any locks when entering Python, either, because I'm not in control of all the code in the host.</p>
<p>So, is it just the case that this can't be done?</p>
| 7 | 2009-04-29T18:09:33Z | 808,498 | <p>There was recently some discussion of a similar issue on the pyopenssl list. I'm afraid if I try to explain this I'm going to get it wrong, so instead I'll refer you to <a href="https://bugs.launchpad.net/pyopenssl/%2Bbug/344815" rel="nofollow">the problem in question</a>.</p>
| 0 | 2009-04-30T18:53:15Z | [
"python",
"multithreading",
"deadlock",
"embedding"
] |
How do you use FCKEditor's image upload and browser with mod-wsgi? | 803,613 | <p>I am using FCKEditor within a Django app served by Apache/mod-wsgi. I don't want to install PHP just for FCKEditor, and I see FCKEditor offers image uploading and image browsing through Python. I just haven't found good instructions on how to set this all up.</p>
<p>So currently Django is running through a wsgi interface using this setup:</p>
<pre><code>import os, sys
DIRNAME = os.sep.join(os.path.abspath(__file__).split(os.sep)[:-3])
sys.path.append(DIRNAME)
os.environ['DJANGO_SETTINGS_MODULE'] = 'myapp.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
</code></pre>
<p>In fckeditor in the editor->filemanager->connectors->py directory there is a file called wsgi.py: </p>
<pre><code>from connector import FCKeditorConnector
from upload import FCKeditorQuickUpload
import cgitb
from cStringIO import StringIO
# Running from WSGI capable server (recommended)
def App(environ, start_response):
"WSGI entry point. Run the connector"
if environ['SCRIPT_NAME'].endswith("connector.py"):
conn = FCKeditorConnector(environ)
elif environ['SCRIPT_NAME'].endswith("upload.py"):
conn = FCKeditorQuickUpload(environ)
else:
start_response ("200 Ok", [('Content-Type','text/html')])
yield "Unknown page requested: "
yield environ['SCRIPT_NAME']
return
try:
# run the connector
data = conn.doResponse()
# Start WSGI response:
start_response ("200 Ok", conn.headers)
# Send response text
yield data
except:
start_response("500 Internal Server Error",[("Content-type","text/html")])
file = StringIO()
cgitb.Hook(file = file).handle()
yield file.getvalue()
</code></pre>
<p>I need these two things to work together, either by modifying my Django wsgi file to serve the fckeditor parts correctly or by making Apache serve both Django and fckeditor correctly on a single domain.</p>
| 1 | 2009-04-29T18:23:30Z | 820,889 | <p>This describes how to embed the FCK editor and enable image uploading.</p>
<p>First you need to edit fckconfig.js to change the image upload
URL to point to some URL inside your server.</p>
<pre><code>FCKConfig.ImageUploadURL = "/myapp/root/imageUploader";
</code></pre>
<p>This will point to the server relative URL to receive the upload.
FCK will send the uploaded file to that handler using the CGI variable
name "NewFile" encoded using multipart/form-data. Unfortunately you
will have to implement /myapp/root/imageUploader, because I don't think
the FCK distribution stuff can be easily adapted to other frameworks.</p>
<p>The imageUploader should extract the NewFile and store it
somewhere on the server.
The response generated by /myapp/root/imageUploader should emulate
the HTML constructed in /editor/.../fckoutput.py.
Something like this (whiff template format)</p>
<pre><code>{{env
whiff.content_type: "text/html",
whiff.headers: [
["Expires","Mon, 26 Jul 1997 05:00:00 GMT"],
["Cache-Control","no-store, no-cache, must-revalidate"],
["Cache-Control","post-check=0, pre-check=0"],
["Pragma","no-cache"]
]
/}}
<script>
//alert("!! RESPONSE RECEIVED");
errorNumber = 0;
fileUrl = "fileurl.png";
fileName = "filename.png";
customMsg = "";
window.parent.OnUploadCompleted(errorNumber, fileUrl, fileName, customMsg);
</script>
</code></pre>
<p>The {{env ...}} stuff at the top indicates the content type and
recommended HTTP headers to send. The fileUrl should be the Url to
use to find the image on the server.</p>
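<p>The "extract the NewFile" step mentioned above can be sketched with just the standard library. This is an illustration only, not part of FCK: the <code>extract_new_file</code> name, and the way your framework hands you the content type and raw request body, are assumptions.</p>

```python
from email.parser import BytesParser
from email.policy import default

def extract_new_file(content_type, body):
    """Return (filename, file_bytes) for the "NewFile" field of a
    multipart/form-data request body, or (None, None) if absent."""
    # Re-wrap the raw body as a MIME message so the email parser can
    # split it into its form-data parts.
    head = b"Content-Type: " + content_type.encode("ascii") + b"\r\n\r\n"
    msg = BytesParser(policy=default).parsebytes(head + body)
    for part in msg.iter_parts():
        name = part.get_param("name", header="content-disposition")
        if name == "NewFile":
            return part.get_filename(), part.get_payload(decode=True)
    return None, None
```

The stored file's URL would then be echoed back in the <code>fileUrl</code> slot of the response template shown above.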
<p>Here are the basic steps to get the html fragment which
generates the FCK editor widget. The only tricky part is you have to put the
right client identification into the os.environ -- it's ugly
but that's the way the FCK library works right now (I filed a bug
report).</p>
<pre><code>import fckeditor # you must have the fck editor python support installed to use this module
import os
inputName = "myInputName" # the name to use for the input element in the form
basePath = "/server/relative/path/to/fck/installation/" # the location of FCK static files
if basePath[-1:]!="/":
basePath+="/" # basepath must end in slash
oFCKeditor = fckeditor.FCKeditor(inputName)
oFCKeditor.BasePath = basePath
oFCKeditor.Height = 300 # the height in pixels of the editor
oFCKeditor.Value = "<h1>initial html to be edited</h1>"
os.environ["HTTP_USER_AGENT"] = "Mozilla/5.0 (Macintosh; U;..." # or whatever
# there must be some way to figure out the user agent in Django right?
htmlOut = oFCKeditor.Create()
# insert htmlOut into your page where you want the editor to appear
return htmlOut
</code></pre>
<p>The above is untested, but it's based on the below which is tested.</p>
<p>Here is how to use FCK editor using mod-wsgi:
Technically it uses a couple features of WHIFF (see
<a href="http://whiff.sourceforge.net" rel="nofollow">WHIFF.sourceforge.net</a>),
-- in fact it is part of the WHIFF distribution --
but
the WHIFF features are easily removed.</p>
<p>
I don't know how to install it in Django, but if
Django allows wsgi apps to be installed easily, you
should be able to do it.</p>
<p>
NOTE: FCK allows the client to inject pretty much anything
into HTML pages -- you will want to filter the returned value for evil
attacks.
(eg: see whiff.middleware.TestSafeHTML middleware for
an example of how to do this).</p>
<pre>
"""
Introduce an FCK editor input element. (requires FCKeditor http://www.fckeditor.net/).
Note: this implementation can generate values containing code injection attacks if you
don't filter the output generated for evil tags and values.
"""
import fckeditor # you must have the fck editor python support installed to use this module
from whiff.middleware import misc
import os
class FCKInput(misc.utility):
def __init__(self,
inputName, # name for input element
basePath, # server relative URL root for FCK HTTP install
value = ""): # initial value for input
self.inputName = inputName
self.basePath = basePath
self.value = value
def __call__(self, env, start_response):
inputName = self.param_value(self.inputName, env).strip()
basePath = self.param_value(self.basePath, env).strip()
if basePath[-1:]!="/":
basePath+="/"
value = self.param_value(self.value, env)
oFCKeditor = fckeditor.FCKeditor(inputName)
oFCKeditor.BasePath = basePath
oFCKeditor.Height = 300 # this should be a require!
oFCKeditor.Value = value
# hack around a bug in fck python library: need to put the user agent in os.environ
# XXX this hack is not safe for multi threaded servers (theoretically)... need to lock on os.env
os_environ = os.environ
new_os_env = os_environ.copy()
new_os_env.update(env)
try:
os.environ = new_os_env
htmlOut = oFCKeditor.Create()
finally:
# restore the old os.environ
os.environ = os_environ
start_response("200 OK", [('Content-Type', 'text/html')])
return [htmlOut]
__middleware__ = FCKInput
def test():
env = {
"HTTP_USER_AGENT":
"Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.14) Gecko/20080404 Firefox/2.0.0.14"
}
f = FCKInput("INPUTNAME", "/MY/BASE/PATH", "THE <EM>HTML</EM> VALUE TO START WITH")
r = f(env, misc.ignore)
print "test result"
print "".join(list(r))
if __name__=="__main__":
test()
</pre>
<p>See this working, for example, at
<a href="http://aaron.oirt.rutgers.edu/myapp/docs/W1500.whyIsWhiffCool" rel="nofollow">http://aaron.oirt.rutgers.edu/myapp/docs/W1500.whyIsWhiffCool</a>.</p>
<p>btw: thanks. I needed to look into this anyway.</p>
| 1 | 2009-05-04T16:48:09Z | [
"python",
"django",
"fckeditor",
"mod-wsgi"
] |
How do you use FCKEditor's image upload and browser with mod-wsgi? | 803,613 | <p>I am using FCKEditor within a Django app served by Apache/mod-wsgi. I don't want to install php just for FCKEditor andI see FCKEditor offers image uploading and image browsing through Python. I just haven't found good instructions on how to set this all up.</p>
<p>So currently Django is running through a wsgi interface using this setup:</p>
<pre><code>import os, sys
DIRNAME = os.sep.join(os.path.abspath(__file__).split(os.sep)[:-3])
sys.path.append(DIRNAME)
os.environ['DJANGO_SETTINGS_MODULE'] = 'myapp.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
</code></pre>
<p>In fckeditor in the editor->filemanager->connectors->py directory there is a file called wsgi.py: </p>
<pre><code>from connector import FCKeditorConnector
from upload import FCKeditorQuickUpload
import cgitb
from cStringIO import StringIO
# Running from WSGI capable server (recomended)
def App(environ, start_response):
"WSGI entry point. Run the connector"
if environ['SCRIPT_NAME'].endswith("connector.py"):
conn = FCKeditorConnector(environ)
elif environ['SCRIPT_NAME'].endswith("upload.py"):
conn = FCKeditorQuickUpload(environ)
else:
start_response ("200 Ok", [('Content-Type','text/html')])
yield "Unknown page requested: "
yield environ['SCRIPT_NAME']
return
try:
# run the connector
data = conn.doResponse()
# Start WSGI response:
start_response ("200 Ok", conn.headers)
# Send response text
yield data
except:
start_response("500 Internal Server Error",[("Content-type","text/html")])
file = StringIO()
cgitb.Hook(file = file).handle()
yield file.getvalue()
</code></pre>
<p>I need these two things two work together by means of modifying my django wsgi file to serve the fckeditor parts correctly or make apache serve both django and fckeditor correctly on a single domain.</p>
| 1 | 2009-04-29T18:23:30Z | 951,920 | <p>Edit: Ultimately I was unhappy with this solution also so I made a <a href="http://bitbucket.org/megaman821/django-fckconnector/" rel="nofollow">Django app</a> that takes care of the file uploads and browsing.</p>
<p>This is the solution I finally hacked together after reading the fckeditor code:</p>
<pre><code>import os, sys
def fck_handler(environ, start_response):
path = environ['PATH_INFO']
if path.endswith(('upload.py', 'connector.py')):
sys.path.append('/#correct_path_to#/fckeditor/editor/filemanager/connectors/py/')
if path.endswith('upload.py'):
from upload import FCKeditorQuickUpload
conn = FCKeditorQuickUpload(environ)
else:
from connector import FCKeditorConnector
conn = FCKeditorConnector(environ)
try:
data = conn.doResponse()
start_response('200 Ok', conn.headers)
return data
except:
start_response("500 Internal Server Error",[("Content-type","text/html")])
return "There was an error"
else:
sys.path.append('/path_to_your_django_site/')
os.environ['DJANGO_SETTINGS_MODULE'] = 'your_django_site.settings'
import django.core.handlers.wsgi
handler = django.core.handlers.wsgi.WSGIHandler()
return handler(environ, start_response)
application = fck_handler
</code></pre>
| 0 | 2009-06-04T17:18:57Z | [
"python",
"django",
"fckeditor",
"mod-wsgi"
] |
Passing functions with arguments to another function in Python? | 803,616 | <p>Is it possible to pass functions with arguments to another function in Python?</p>
<p>Say for something like:</p>
<pre><code>def Perform ( function ) :
function
</code></pre>
<p>but the functions to be passed will be like:</p>
<pre><code>Action1()
Action2(p)
Action3(p,r)
</code></pre>
| 98 | 2009-04-29T18:24:26Z | 803,626 | <p>This is what lambda is for:</p>
<pre><code>def Perform(f):
f()
Perform(lambda: Action1())
Perform(lambda: Action2(p))
Perform(lambda: Action3(p, r))
</code></pre>
| 56 | 2009-04-29T18:27:44Z | [
"python",
"function"
] |
Passing functions with arguments to another function in Python? | 803,616 | <p>Is it possible to pass functions with arguments to another function in Python?</p>
<p>Say for something like:</p>
<pre><code>def Perform ( function ) :
function
</code></pre>
<p>but the functions to be passed will be like:</p>
<pre><code>Action1()
Action2(p)
Action3(p,r)
</code></pre>
| 98 | 2009-04-29T18:24:26Z | 803,632 | <p>Do you mean this?</p>
<pre><code>def perform( fun, *args ):
fun( *args )
def action1():
    something

def action2( p ):
    something
perform( action1 )
perform( action2, p )
perform( action3, p, r )
</code></pre>
| 155 | 2009-04-29T18:28:24Z | [
"python",
"function"
] |
Passing functions with arguments to another function in Python? | 803,616 | <p>Is it possible to pass functions with arguments to another function in Python?</p>
<p>Say for something like:</p>
<pre><code>def Perform ( function ) :
function
</code></pre>
<p>but the functions to be passed will be like:</p>
<pre><code>Action1()
Action2(p)
Action3(p,r)
</code></pre>
| 98 | 2009-04-29T18:24:26Z | 803,668 | <p>Use functools.partial, not lambdas! And of course Perform is a useless function; you can pass functions around directly.</p>
<p><pre><code>for func in [Action1, partial(Action2, p), partial(Action3, p, r)]:
func()
</code></pre></p>
| 13 | 2009-04-29T18:36:14Z | [
"python",
"function"
] |
Passing functions with arguments to another function in Python? | 803,616 | <p>Is it possible to pass functions with arguments to another function in Python?</p>
<p>Say for something like:</p>
<pre><code>def Perform ( function ) :
function
</code></pre>
<p>but the functions to be passed will be like:</p>
<pre><code>Action1()
Action2(p)
Action3(p,r)
</code></pre>
| 98 | 2009-04-29T18:24:26Z | 804,346 | <p>You can use the partial function from functools like so.</p>
<pre><code>from functools import partial
def perform(f):
f()
perform(Action1)
perform(partial(Action2, p))
perform(partial(Action3, p, r))
</code></pre>
<p>Also works with keywords</p>
<pre><code>perform(partial(Action4, param1=p))
</code></pre>
| 17 | 2009-04-29T21:23:23Z | [
"python",
"function"
] |
Passing functions with arguments to another function in Python? | 803,616 | <p>Is it possible to pass functions with arguments to another function in Python?</p>
<p>Say for something like:</p>
<pre><code>def Perform ( function ) :
function
</code></pre>
<p>but the functions to be passed will be like:</p>
<pre><code>Action1()
Action2(p)
Action3(p,r)
</code></pre>
| 98 | 2009-04-29T18:24:26Z | 1,053,007 | <p>(months later) a tiny real example where lambda is useful, partial not:<br />
say you want various 1-dimensional cross-sections through a 2-dimensional function,
like slices through a row of hills.<br />
<code>quadf( x, f )</code> takes a 1-d <code>f</code> and calls it for various <code>x</code>.<br />
To call it for vertical cuts at y = -1 0 1 and horizontal cuts at x = -1 0 1,</p>
<pre><code>fx1 = quadf( x, lambda x: f( x, 1 ))
fx0 = quadf( x, lambda x: f( x, 0 ))
fx_1 = quadf( x, lambda x: f( x, -1 ))
fxy = parabola( y, fx_1, fx0, fx1 )
f_1y = quadf( y, lambda y: f( -1, y ))
f0y = quadf( y, lambda y: f( 0, y ))
f1y = quadf( y, lambda y: f( 1, y ))
fyx = parabola( x, f_1y, f0y, f1y )
</code></pre>
<p>As far as I know, <code>partial</code> can't do this --</p>
<pre><code>quadf( y, partial( f, x=1 ))
TypeError: f() got multiple values for keyword argument 'x'
</code></pre>
<p>(How to add tags numpy, partial, lambda to this ?)</p>
| 5 | 2009-06-27T15:12:20Z | [
"python",
"function"
] |
Passing functions with arguments to another function in Python? | 803,616 | <p>Is it possible to pass functions with arguments to another function in Python?</p>
<p>Say for something like:</p>
<pre><code>def Perform ( function ) :
function
</code></pre>
<p>but the functions to be passed will be like:</p>
<pre><code>Action1()
Action2(p)
Action3(p,r)
</code></pre>
| 98 | 2009-04-29T18:24:26Z | 24,169,584 | <p>Here is a way to do it with a closure:</p>
<pre><code>def generate_add_mult_func(func):
    def function_generator(x):
        return reduce(func, range(1, x))
    return function_generator

def add(x, y):
    return x + y

def mult(x, y):
    return x * y

adding = generate_add_mult_func(add)
multiplying = generate_add_mult_func(mult)

print adding(10)
print multiplying(10)
</code></pre>
| 0 | 2014-06-11T18:02:35Z | [
"python",
"function"
] |
Best way to convert a Unicode URL to ASCII (UTF-8 percent-escaped) in Python? | 804,336 | <p>I'm wondering what's the best way -- or if there's a simple way with the standard library -- to convert a URL with Unicode chars in the domain name and path to the equivalent ASCII URL, encoded with domain as IDNA and the path %-encoded, as per RFC 3986.</p>
<p>I get from the user a URL in UTF-8. So if they've typed in <code>http://➡.ws/♥</code> I get <code>'http://\xe2\x9e\xa1.ws/\xe2\x99\xa5'</code> in Python. And what I want out is the ASCII version: <code>'<a href="http://xn--hgi.ws/%E2%99%A5">http://xn--hgi.ws/%E2%99%A5</a>'</code>.</p>
<p>What I do at the moment is split the URL up into parts via a regex, and then manually IDNA-encode the domain, and separately encode the path and query string with different <code>urllib.quote()</code> calls.</p>
<pre><code># url is UTF-8 here, eg: url = u'http://➡.ws/㉌'.encode('utf-8')
match = re.match(r'([a-z]{3,5})://(.+\.[a-z0-9]{1,6})'
r'(:\d{1,5})?(/.*?)(\?.*)?$', url, flags=re.I)
if not match:
raise BadURLException(url)
protocol, domain, port, path, query = match.groups()
try:
domain = unicode(domain, 'utf-8')
except UnicodeDecodeError:
return '' # bad UTF-8 chars in domain
domain = domain.encode('idna')
if port is None:
port = ''
path = urllib.quote(path)
if query is None:
query = ''
else:
query = urllib.quote(query, safe='=&?/')
url = protocol + '://' + domain + port + path + query
# url is ASCII here, eg: url = 'http://xn--hgi.ws/%E3%89%8C'
</code></pre>
<p>Is this correct? Any better suggestions? Is there a simple standard-library function to do this?</p>
| 26 | 2009-04-29T21:21:25Z | 804,380 | <h2>Code:</h2>
<pre><code>import urlparse, urllib
def fixurl(url):
# turn string into unicode
if not isinstance(url,unicode):
url = url.decode('utf8')
# parse it
parsed = urlparse.urlsplit(url)
# divide the netloc further
userpass,at,hostport = parsed.netloc.rpartition('@')
user,colon1,pass_ = userpass.partition(':')
host,colon2,port = hostport.partition(':')
# encode each component
scheme = parsed.scheme.encode('utf8')
user = urllib.quote(user.encode('utf8'))
colon1 = colon1.encode('utf8')
pass_ = urllib.quote(pass_.encode('utf8'))
at = at.encode('utf8')
host = host.encode('idna')
colon2 = colon2.encode('utf8')
port = port.encode('utf8')
path = '/'.join( # could be encoded slashes!
urllib.quote(urllib.unquote(pce).encode('utf8'),'')
for pce in parsed.path.split('/')
)
query = urllib.quote(urllib.unquote(parsed.query).encode('utf8'),'=&?/')
fragment = urllib.quote(urllib.unquote(parsed.fragment).encode('utf8'))
# put it back together
netloc = ''.join((user,colon1,pass_,at,host,colon2,port))
return urlparse.urlunsplit((scheme,netloc,path,query,fragment))
print fixurl('http://\xe2\x9e\xa1.ws/\xe2\x99\xa5')
print fixurl('http://\xe2\x9e\xa1.ws/\xe2\x99\xa5/%2F')
print fixurl(u'http://Åsa:abc123@➡.ws:81/admin')
print fixurl(u'http://➡.ws/admin')
</code></pre>
<h2>Output:</h2>
<blockquote>
<p><code>http://xn--hgi.ws/%E2%99%A5</code><br>
<code>http://xn--hgi.ws/%E2%99%A5/%2F</code><br>
<code>http://%C3%85sa:abc123@xn--hgi.ws:81/admin</code><br>
<code>http://xn--hgi.ws/admin</code></p>
</blockquote>
<h2>Read more:</h2>
<ul>
<li><a href="http://docs.python.org/library/urllib.html#urllib.quote">urllib.quote()</a></li>
<li><a href="http://docs.python.org/library/urlparse.html#urlparse.urlparse">urlparse.urlparse()</a></li>
<li><a href="http://docs.python.org/library/urlparse.html#urlparse.urlunparse">urlparse.urlunparse()</a></li>
<li><a href="http://docs.python.org/library/urlparse.html#urlparse.urlsplit">urlparse.urlsplit()</a></li>
<li><a href="http://docs.python.org/library/urlparse.html#urlparse.urlunsplit">urlparse.urlunsplit()</a></li>
</ul>
<h2>Edits:</h2>
<ul>
<li>Fixed the case of already quoted characters in the string.</li>
<li>Changed <code>urlparse</code>/<code>urlunparse</code> to <code>urlsplit</code>/<code>urlunsplit</code>.</li>
<li>Don't encode user and port information with the hostname. (Thanks Jehiah)</li>
<li>When "@" is missing, don't treat the host/port as user/pass! (Thanks hupf)</li>
</ul>
| 43 | 2009-04-29T21:36:32Z | [
"python",
"url",
"unicode",
"utf-8"
] |
Best way to convert a Unicode URL to ASCII (UTF-8 percent-escaped) in Python? | 804,336 | <p>I'm wondering what's the best way -- or if there's a simple way with the standard library -- to convert a URL with Unicode chars in the domain name and path to the equivalent ASCII URL, encoded with domain as IDNA and the path %-encoded, as per RFC 3986.</p>
<p>I get from the user a URL in UTF-8. So if they've typed in <code>http://➡.ws/♥</code> I get <code>'http://\xe2\x9e\xa1.ws/\xe2\x99\xa5'</code> in Python. And what I want out is the ASCII version: <code>'<a href="http://xn--hgi.ws/%E2%99%A5">http://xn--hgi.ws/%E2%99%A5</a>'</code>.</p>
<p>What I do at the moment is split the URL up into parts via a regex, and then manually IDNA-encode the domain, and separately encode the path and query string with different <code>urllib.quote()</code> calls.</p>
<pre><code># url is UTF-8 here, eg: url = u'http://➡.ws/㉌'.encode('utf-8')
match = re.match(r'([a-z]{3,5})://(.+\.[a-z0-9]{1,6})'
r'(:\d{1,5})?(/.*?)(\?.*)?$', url, flags=re.I)
if not match:
raise BadURLException(url)
protocol, domain, port, path, query = match.groups()
try:
domain = unicode(domain, 'utf-8')
except UnicodeDecodeError:
return '' # bad UTF-8 chars in domain
domain = domain.encode('idna')
if port is None:
port = ''
path = urllib.quote(path)
if query is None:
query = ''
else:
query = urllib.quote(query, safe='=&?/')
url = protocol + '://' + domain + port + path + query
# url is ASCII here, eg: url = 'http://xn--hgi.ws/%E3%89%8C'
</code></pre>
<p>Is this correct? Any better suggestions? Is there a simple standard-library function to do this?</p>
| 26 | 2009-04-29T21:21:25Z | 804,382 | <p>You might use <a href="http://docs.python.org/library/urlparse.html#urlparse.urlsplit" rel="nofollow"><code>urlparse.urlsplit</code></a> instead, but otherwise you seem to have a very straightforward solution, there.</p>
<pre><code>protocol, domain, path, query, fragment = urlparse.urlsplit(url)
</code></pre>
<p>(You can access the domain and port separately by accessing the returned value's named properties, but as port syntax is always in ASCII it is unaffected by the IDNA encoding process.)</p>
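<p>For instance, the named attributes let you pull the host and port out of the netloc without any regex (a quick sketch; on Python 2 the function lives in the <code>urlparse</code> module rather than <code>urllib.parse</code>):</p>

```python
from urllib.parse import urlsplit  # Python 2: from urlparse import urlsplit

parts = urlsplit('http://example.com:8080/path?q=1#frag')
host_and_port = (parts.hostname, parts.port)  # ('example.com', 8080)
scheme_and_path = (parts.scheme, parts.path)  # ('http', '/path')
```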
| 1 | 2009-04-29T21:36:42Z | [
"python",
"url",
"unicode",
"utf-8"
] |
Best way to convert a Unicode URL to ASCII (UTF-8 percent-escaped) in Python? | 804,336 | <p>I'm wondering what's the best way -- or if there's a simple way with the standard library -- to convert a URL with Unicode chars in the domain name and path to the equivalent ASCII URL, encoded with domain as IDNA and the path %-encoded, as per RFC 3986.</p>
<p>I get from the user a URL in UTF-8. So if they've typed in <code>http://➡.ws/♥</code> I get <code>'http://\xe2\x9e\xa1.ws/\xe2\x99\xa5'</code> in Python. And what I want out is the ASCII version: <code>'<a href="http://xn--hgi.ws/%E2%99%A5">http://xn--hgi.ws/%E2%99%A5</a>'</code>.</p>
<p>What I do at the moment is split the URL up into parts via a regex, and then manually IDNA-encode the domain, and separately encode the path and query string with different <code>urllib.quote()</code> calls.</p>
<pre><code># url is UTF-8 here, eg: url = u'http://➡.ws/㉌'.encode('utf-8')
match = re.match(r'([a-z]{3,5})://(.+\.[a-z0-9]{1,6})'
r'(:\d{1,5})?(/.*?)(\?.*)?$', url, flags=re.I)
if not match:
raise BadURLException(url)
protocol, domain, port, path, query = match.groups()
try:
domain = unicode(domain, 'utf-8')
except UnicodeDecodeError:
return '' # bad UTF-8 chars in domain
domain = domain.encode('idna')
if port is None:
port = ''
path = urllib.quote(path)
if query is None:
query = ''
else:
query = urllib.quote(query, safe='=&?/')
url = protocol + '://' + domain + port + path + query
# url is ASCII here, eg: url = 'http://xn--hgi.ws/%E3%89%8C'
</code></pre>
<p>Is this correct? Any better suggestions? Is there a simple standard-library function to do this?</p>
| 26 | 2009-04-29T21:21:25Z | 804,408 | <p>there's some RFC-3896 <em>url parsing</em> work underway (e.g. as part of the Summer Of Code) but nothing in the standard library yet AFAIK -- and nothing much on the <em>uri encoding</em> side of things either, again AFAIK. So you might as well go with MizardX's elegant approach.</p>
| 2 | 2009-04-29T21:44:08Z | [
"python",
"url",
"unicode",
"utf-8"
] |
Best way to convert a Unicode URL to ASCII (UTF-8 percent-escaped) in Python? | 804,336 | <p>I'm wondering what's the best way -- or if there's a simple way with the standard library -- to convert a URL with Unicode chars in the domain name and path to the equivalent ASCII URL, encoded with domain as IDNA and the path %-encoded, as per RFC 3986.</p>
<p>I get from the user a URL in UTF-8. So if they've typed in <code>http://➡.ws/♥</code> I get <code>'http://\xe2\x9e\xa1.ws/\xe2\x99\xa5'</code> in Python. And what I want out is the ASCII version: <code>'<a href="http://xn--hgi.ws/%E2%99%A5">http://xn--hgi.ws/%E2%99%A5</a>'</code>.</p>
<p>What I do at the moment is split the URL up into parts via a regex, and then manually IDNA-encode the domain, and separately encode the path and query string with different <code>urllib.quote()</code> calls.</p>
<pre><code># url is UTF-8 here, eg: url = u'http://➡.ws/㉌'.encode('utf-8')
match = re.match(r'([a-z]{3,5})://(.+\.[a-z0-9]{1,6})'
r'(:\d{1,5})?(/.*?)(\?.*)?$', url, flags=re.I)
if not match:
raise BadURLException(url)
protocol, domain, port, path, query = match.groups()
try:
domain = unicode(domain, 'utf-8')
except UnicodeDecodeError:
return '' # bad UTF-8 chars in domain
domain = domain.encode('idna')
if port is None:
port = ''
path = urllib.quote(path)
if query is None:
query = ''
else:
query = urllib.quote(query, safe='=&?/')
url = protocol + '://' + domain + port + path + query
# url is ASCII here, eg: url = 'http://xn--hgi.ws/%E3%89%8C'
</code></pre>
<p>Is this correct? Any better suggestions? Is there a simple standard-library function to do this?</p>
| 26 | 2009-04-29T21:21:25Z | 805,166 | <p>Okay, with these comments and some bug-fixing in my own code (it didn't handle fragments at all), I've come up with the following <code>canonurl()</code> function -- returns a canonical, ASCII form of the URL:</p>
<pre><code>import re
import urllib
import urlparse
def canonurl(url):
r"""Return the canonical, ASCII-encoded form of a UTF-8 encoded URL, or ''
if the URL looks invalid.
>>> canonurl(' ')
''
>>> canonurl('www.google.com')
'http://www.google.com/'
>>> canonurl('bad-utf8.com/path\xff/file')
''
>>> canonurl('svn://blah.com/path/file')
'svn://blah.com/path/file'
>>> canonurl('1234://badscheme.com')
''
>>> canonurl('bad$scheme://google.com')
''
>>> canonurl('site.badtopleveldomain')
''
>>> canonurl('site.com:badport')
''
>>> canonurl('http://123.24.8.240/blah')
'http://123.24.8.240/blah'
>>> canonurl('http://123.24.8.240:1234/blah?q#f')
'http://123.24.8.240:1234/blah?q#f'
>>> canonurl('\xe2\x9e\xa1.ws') # tinyarro.ws
'http://xn--hgi.ws/'
>>> canonurl(' http://www.google.com:80/path/file;params?query#fragment ')
'http://www.google.com:80/path/file;params?query#fragment'
>>> canonurl('http://\xe2\x9e\xa1.ws/\xe2\x99\xa5')
'http://xn--hgi.ws/%E2%99%A5'
>>> canonurl('http://\xe2\x9e\xa1.ws/\xe2\x99\xa5/pa%2Fth')
'http://xn--hgi.ws/%E2%99%A5/pa/th'
>>> canonurl('http://\xe2\x9e\xa1.ws/\xe2\x99\xa5/pa%2Fth;par%2Fams?que%2Fry=a&b=c')
'http://xn--hgi.ws/%E2%99%A5/pa/th;par/ams?que/ry=a&b=c'
>>> canonurl('http://\xe2\x9e\xa1.ws/\xe2\x99\xa5?\xe2\x99\xa5#\xe2\x99\xa5')
'http://xn--hgi.ws/%E2%99%A5?%E2%99%A5#%E2%99%A5'
>>> canonurl('http://\xe2\x9e\xa1.ws/%e2%99%a5?%E2%99%A5#%E2%99%A5')
'http://xn--hgi.ws/%E2%99%A5?%E2%99%A5#%E2%99%A5'
>>> canonurl('http://badutf8pcokay.com/%FF?%FE#%FF')
'http://badutf8pcokay.com/%FF?%FE#%FF'
>>> len(canonurl('google.com/' + 'a' * 16384))
4096
"""
# strip spaces at the ends and ensure it's prefixed with 'scheme://'
url = url.strip()
if not url:
return ''
if not urlparse.urlsplit(url).scheme:
url = 'http://' + url
# turn it into Unicode
try:
url = unicode(url, 'utf-8')
except UnicodeDecodeError:
return '' # bad UTF-8 chars in URL
# parse the URL into its components
parsed = urlparse.urlsplit(url)
scheme, netloc, path, query, fragment = parsed
# ensure scheme is a letter followed by letters, digits, and '+-.' chars
if not re.match(r'[a-z][-+.a-z0-9]*$', scheme, flags=re.I):
return ''
scheme = str(scheme)
# ensure domain and port are valid, eg: sub.domain.<1-to-6-TLD-chars>[:port]
match = re.match(r'(.+\.[a-z0-9]{1,6})(:\d{1,5})?$', netloc, flags=re.I)
if not match:
return ''
domain, port = match.groups()
netloc = domain + (port if port else '')
netloc = netloc.encode('idna')
# ensure path is valid and convert Unicode chars to %-encoded
if not path:
path = '/' # eg: 'http://google.com' -> 'http://google.com/'
path = urllib.quote(urllib.unquote(path.encode('utf-8')), safe='/;')
# ensure query is valid
query = urllib.quote(urllib.unquote(query.encode('utf-8')), safe='=&?/')
# ensure fragment is valid
fragment = urllib.quote(urllib.unquote(fragment.encode('utf-8')))
# piece it all back together, truncating it to a maximum of 4KB
url = urlparse.urlunsplit((scheme, netloc, path, query, fragment))
return url[:4096]
if __name__ == '__main__':
import doctest
doctest.testmod()
</code></pre>
| 1 | 2009-04-30T02:43:17Z | [
"python",
"url",
"unicode",
"utf-8"
] |
Best way to convert a Unicode URL to ASCII (UTF-8 percent-escaped) in Python? | 804,336 | <p>I'm wondering what's the best way -- or if there's a simple way with the standard library -- to convert a URL with Unicode chars in the domain name and path to the equivalent ASCII URL, encoded with domain as IDNA and the path %-encoded, as per RFC 3986.</p>
<p>I get from the user a URL in UTF-8. So if they've typed in <code>http://➡.ws/♥</code> I get <code>'http://\xe2\x9e\xa1.ws/\xe2\x99\xa5'</code> in Python. And what I want out is the ASCII version: <code>'<a href="http://xn--hgi.ws/%E2%99%A5">http://xn--hgi.ws/%E2%99%A5</a>'</code>.</p>
<p>What I do at the moment is split the URL up into parts via a regex, and then manually IDNA-encode the domain, and separately encode the path and query string with different <code>urllib.quote()</code> calls.</p>
<pre><code># url is UTF-8 here, eg: url = u'http://➡.ws/㉌'.encode('utf-8')
match = re.match(r'([a-z]{3,5})://(.+\.[a-z0-9]{1,6})'
r'(:\d{1,5})?(/.*?)(\?.*)?$', url, flags=re.I)
if not match:
raise BadURLException(url)
protocol, domain, port, path, query = match.groups()
try:
domain = unicode(domain, 'utf-8')
except UnicodeDecodeError:
return '' # bad UTF-8 chars in domain
domain = domain.encode('idna')
if port is None:
port = ''
path = urllib.quote(path)
if query is None:
query = ''
else:
query = urllib.quote(query, safe='=&?/')
url = protocol + '://' + domain + port + path + query
# url is ASCII here, eg: url = 'http://xn--hgi.ws/%E3%89%8C'
</code></pre>
<p>Is this correct? Any better suggestions? Is there a simple standard-library function to do this?</p>
| 26 | 2009-04-29T21:21:25Z | 4,494,314 | <p>The code given by MizardX isn't 100% correct. This example won't work:</p>
<p>example.com/folder/?page=2</p>
<p>Check out django.utils.encoding.iri_to_uri() to convert a Unicode URL to an ASCII URL.</p>
<p><a href="http://docs.djangoproject.com/en/dev/ref/unicode/">http://docs.djangoproject.com/en/dev/ref/unicode/</a></p>
| 5 | 2010-12-20T21:52:29Z | [
"python",
"url",
"unicode",
"utf-8"
] |
Tool to determine what lowest version of Python required? | 804,538 | <p>Is there something similar to Pylint, that will look at a Python script (or run it), and determine which version of Python each line (or function) requires?</p>
<p>For example, theoretical usage:</p>
<pre><code>$ magic_tool <EOF
with something:
pass
EOF
1: 'with' statement requires Python 2.6 or greater
$ magic_tool <EOF
class Something:
@classmethod
def blah(cls):
pass
EOF
2: classmethod requires Python 2.2 or greater
$ magic_tool <EOF
print """Test
"""
EOF
1: Triple-quote requires Python 1.5 or later
</code></pre>
<p>Is such a thing possible? I suppose the simplest way would be to have all Python versions on disc, run the script with each one and see what errors occur..</p>
| 46 | 2009-04-29T22:21:12Z | 804,917 | <p>Not an actually useful answer, but here it goes anyway.
I think this should be doable (though probably quite an exercise). For example, you could make sure you have all the official grammars for the versions you want to check, like <a href="http://docs.python.org/reference/grammar.html">this one</a>.</p>
<p>Then parse the bit of code starting with the first grammar version.
Next you need a similar map of all the built-in module namespaces and parse the code again starting with the earliest version, though it might be tricky to differentiate between built-in modules and modules that are external or something in between like ElementTree.</p>
<p>The result should be an overview of versions that support the syntax of the code and an overview of the modules and which version (if at all) is needed to use it. With that result you could calculate the best lowest and highest version.</p>
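<p>A rough sketch of that idea using the <code>ast</code> module — walk the parse tree and look up each node type in a feature-to-version table (the mapping below is purely illustrative, nowhere near complete):</p>

```python
import ast

# Illustrative mapping only: the Python version that introduced each construct.
FEATURE_MIN_VERSION = {
    ast.With: (2, 5),   # 'with' statement (without a __future__ import)
    ast.Yield: (2, 2),  # generators
}

def min_version(source):
    """Return the lowest (major, minor) that supports every construct used."""
    required = (2, 0)
    for node in ast.walk(ast.parse(source)):
        needed = FEATURE_MIN_VERSION.get(type(node))
        if needed and needed > required:
            required = needed
    return required

min_version("with open('f') as f:\n    pass")  # -> (2, 5)
```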
| 8 | 2009-04-30T00:35:46Z | [
"python",
"code-analysis"
] |
Tool to determine what lowest version of Python required? | 804,538 | <p>Is there something similar to Pylint, that will look at a Python script (or run it), and determine which version of Python each line (or function) requires?</p>
<p>For example, theoretical usage:</p>
<pre><code>$ magic_tool <EOF
with something:
pass
EOF
1: 'with' statement requires Python 2.6 or greater
$ magic_tool <EOF
class Something:
@classmethod
def blah(cls):
pass
EOF
2: classmethod requires Python 2.2 or greater
$ magic_tool <EOF
print """Test
"""
EOF
1: Triple-quote requires Python 1.5 or later
</code></pre>
<p>Is such a thing possible? I suppose the simplest way would be to have all Python versions on disc, run the script with each one and see what errors occur..</p>
| 46 | 2009-04-29T22:21:12Z | 819,645 | <p>Inspired by this excellent question, I recently put together a script that tries to do this. You can find it on github at <a href="http://github.com/ghewgill/pyqver/tree/master">pyqver</a>.</p>
<p>It's reasonably complete but there are some aspects that are not yet handled (as mentioned in the README file). Feel free to fork and improve it!</p>
| 41 | 2009-05-04T10:55:39Z | [
"python",
"code-analysis"
] |
mod_wsgi yield output buffer instead of return | 804,898 | <p>Right now I've got a mod_wsgi script that's structured like this..</p>
<pre><code>def application(environ, start_response):
status = '200 OK'
output = 'Hello World!'
response_headers = [('Content-type', 'text/plain'),
('Content-Length', str(len(output)))]
start_response(status, response_headers)
return [output]
</code></pre>
<p>I was wondering if anyone knows of a way to change this to operate on a <code>yield</code> basis instead of <code>return</code>, that way I can send the page as it's being generated and not only once it's complete, so the page loading can go faster for the user.</p>
<p>However, whenever I swap the output for a list and yield it in the application(), it throws an error:</p>
<pre><code>TypeError: sequence of string values expected, value of type list found
</code></pre>
| 1 | 2009-04-30T00:22:32Z | 804,901 | <pre><code>def application(environ, start_response):
status = '200 OK'
output = 'Hello World!'
response_headers = [('Content-type', 'text/plain'),
('Content-Length', str(len(output)))]
start_response(status, response_headers)
yield output
</code></pre>
<blockquote>
<p><em>"However, whenever I swap the output for a list and yield it in the application(), it throws an error:"</em></p>
</blockquote>
<p>Well, don't yield the list. Yield each element instead:</p>
<pre><code>for part in mylist:
yield part
</code></pre>
<p>or if the list is the entire content, just:</p>
<pre><code>return mylist
</code></pre>
<p>Because the list is already iterable, so the WSGI server can iterate over it by itself.</p>
| 7 | 2009-04-30T00:24:28Z | [
"python",
"mod-wsgi",
"yield"
] |
mod_wsgi yield output buffer instead of return | 804,898 | <p>Right now I've got a mod_wsgi script that's structured like this..</p>
<pre><code>def application(environ, start_response):
status = '200 OK'
output = 'Hello World!'
response_headers = [('Content-type', 'text/plain'),
('Content-Length', str(len(output)))]
start_response(status, response_headers)
return [output]
</code></pre>
<p>I was wondering if anyone knows of a way to change this to operate on a <code>yield</code> basis instead of <code>return</code>, that way I can send the page as it's being generated and not only once it's complete, so the page loading can go faster for the user.</p>
<p>However, whenever I swap the output for a list and yield it in the application(), it throws an error:</p>
<pre><code>TypeError: sequence of string values expected, value of type list found
</code></pre>
| 1 | 2009-04-30T00:22:32Z | 821,278 | <p>Don't send the content length and send the output as you derive it.
You don't need to know the size of the output if you simply don't
send the Content-Length header. That way can send part of the response
before you have computed the rest of it.</p>
<pre><code>def application(environ, start_response):
status = '200 OK'
output = 'Hello World!'
response_headers = [('Content-type', 'text/html')]
start_response(status, response_headers)
yield head()
yield part1()
yield part2()
yield part3()
yield "<!-- bye now! -->"
</code></pre>
<p>Otherwise you will get no benefit from sending in chunks,
since computing the output is probably the slow part and
the internet protocol will send the output in chunks anyway.</p>
<p>Sadly, this doesn't work in the case where, for example, the calculation
of part2() decides you really need to change a header (like a cookie)
or need to build other page-global data structures
-- if this ever happens, you need to compute the entire output before
sending the headers, and might as well use a <code>return [output]</code></p>
<p>For example <a href="http://aaron.oirt.rutgers.edu/myapp/docs/W1200_1200.config_template" rel="nofollow">http://aaron.oirt.rutgers.edu/myapp/docs/W1200_1200.config_template</a>
needs to build a page-global data structure for the links to subsections
that show at the top of the page -- so the last subsection must be rendered
before the first chunk of output is delivered to the client.</p>
| 0 | 2009-05-04T18:20:40Z | [
"python",
"mod-wsgi",
"yield"
] |
mod_wsgi yield output buffer instead of return | 804,898 | <p>Right now I've got a mod_wsgi script that's structured like this..</p>
<pre><code>def application(environ, start_response):
status = '200 OK'
output = 'Hello World!'
response_headers = [('Content-type', 'text/plain'),
('Content-Length', str(len(output)))]
start_response(status, response_headers)
return [output]
</code></pre>
<p>I was wondering if anyone knows of a way to change this to operate on a <code>yield</code> basis instead of <code>return</code>, that way I can send the page as it's being generated and not only once it's complete, so the page loading can go faster for the user.</p>
<p>However, whenever I swap the output for a list and yield it in the application(), it throws an error:</p>
<pre><code>TypeError: sequence of string values expected, value of type list found
</code></pre>
| 1 | 2009-04-30T00:22:32Z | 1,037,785 | <p>Note that 'yield' should be avoided unless absolutely necessary. In particular 'yield' will be inefficient if yielding lots of small strings. This is because the WSGI specification requires that after each string yielded that the response must be flushed. For Apache/mod_wsgi, flushing means each string being forced out through the Apache output bucket brigade and filter system and onto the socket. Ignoring the overhead of the Apache output filter system, writing lots of small strings onto a socket is simply just bad to begin with.</p>
<p>This problem also exists where an array of strings is returned from an application as a flush also has to be performed between each string in the array. This is because the string is dealt with as an iterable and not a list. Thus for a preformed list of strings, it is much better to join the individual strings into one large string and return a list containing just that one string. Doing this also allows a WSGI implementation to automatically generate a Content-Length for the response if one wasn't explicitly provided.</p>
<p>Just make sure that when joining all the strings in a list into one, that the result is returned in a list. If this isn't done and instead the string is returned, that string is treated as an iterable, where each element in the string is a single character string. This results in a flush being done after every character, which is going to be even worse than if the strings hadn't been joined.</p>
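<p>As a hedged illustration of the advice above (the chunks and names are made up): collect the pieces while rendering, join them once, and return a single-element list, so only one flush happens and a Content-Length can be supplied:</p>

```python
# Sketch: pre-render the chunks, join them into ONE string, and return it
# inside a one-element list -- one flush instead of one flush per chunk.
def application(environ, start_response):
    chunks = ['Hello', ', ', 'World!']      # pieces produced while rendering
    body = ''.join(chunks)                  # a single large string
    start_response('200 OK',
                   [('Content-type', 'text/plain'),
                    ('Content-Length', str(len(body)))])
    return [body]                           # note: [body], never just body
```

<p>Returning the bare string instead of <code>[body]</code> would make WSGI iterate it character by character, with a flush after every character — the worst case described above.</p>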
| 6 | 2009-06-24T11:31:38Z | [
"python",
"mod-wsgi",
"yield"
] |
How to consume XML from RESTful web services using Django / Python? | 804,992 | <p>Should I use PyXML or what's in the standard library? </p>
| 3 | 2009-04-30T01:13:48Z | 804,997 | <p>I always prefer to use the standard library when possible. ElementTree is well known amongst pythonistas, so you should be able to find plenty of examples. Parts of it have also been optimized in C, so it's quite fast.</p>
<p><a href="http://docs.python.org/library/xml.etree.elementtree.html" rel="nofollow">http://docs.python.org/library/xml.etree.elementtree.html</a></p>
| 3 | 2009-04-30T01:17:45Z | [
"python",
"xml",
"django",
"rest"
] |
How to consume XML from RESTful web services using Django / Python? | 804,992 | <p>Should I use PyXML or what's in the standard library? </p>
| 3 | 2009-04-30T01:13:48Z | 804,998 | <p>ElementTree is provided as part of the standard Python libs. ElementTree is pure python, and cElementTree is the faster C implementation:</p>
<pre><code># Try to use the C implementation first, falling back to python
try:
from xml.etree import cElementTree as ElementTree
except ImportError, e:
from xml.etree import ElementTree
</code></pre>
<p>Here's an example usage, where I'm consuming xml from a RESTful web service:</p>
<pre><code>def find(*args, **kwargs):
"""Find a book in the collection specified"""
search_args = [('access_key', api_key),]
if not is_valid_collection(kwargs['collection']):
return None
kwargs.pop('collection')
for key in kwargs:
        # Only the first keyword is honored
if kwargs[key]:
search_args.append(('index1', key))
search_args.append(('value1', kwargs[key]))
break
url = urllib.basejoin(api_url, '%s.xml' % 'books')
data = urllib.urlencode(search_args)
req = urllib2.urlopen(url, data)
rdata = []
chunk = 'xx'
while chunk:
chunk = req.read()
if chunk:
rdata.append(chunk)
tree = ElementTree.fromstring(''.join(rdata))
results = []
for i, elem in enumerate(tree.getiterator('BookData')):
results.append(
{'isbn': elem.get('isbn'),
'isbn13': elem.get('isbn13'),
'title': elem.find('Title').text,
'author': elem.find('AuthorsText').text,
'publisher': elem.find('PublisherText').text,}
)
return results
</code></pre>
| 9 | 2009-04-30T01:19:00Z | [
"python",
"xml",
"django",
"rest"
] |
How to consume XML from RESTful web services using Django / Python? | 804,992 | <p>Should I use PyXML or what's in the standard library? </p>
| 3 | 2009-04-30T01:13:48Z | 2,347,520 | <p>There's also <a href="http://www.crummy.com/software/BeautifulSoup/" rel="nofollow">BeautifulSoup</a>, which has an API some might prefer. Here's an example on how you can extract all tweets that have been favorited from Twitter's Public Timeline:</p>
<pre><code>from BeautifulSoup import BeautifulStoneSoup
import urllib
url = urllib.urlopen('http://twitter.com/statuses/public_timeline.xml').read()
favorited = []
soup = BeautifulStoneSoup(url)
statuses = soup.findAll('status')
for status in statuses:
if status.find('favorited').contents != [u'false']:
favorited.append(status)
</code></pre>
| 0 | 2010-02-27T13:43:16Z | [
"python",
"xml",
"django",
"rest"
] |
How to use subprocess when multiple arguments contain spaces? | 804,995 | <p>I'm working on a wrapper script that will exercise a vmware executable, allowing for the automation of virtual machine startup/shutdown/register/deregister actions. I'm trying to use subprocess to handle invoking the executable, but the spaces in the executables path and in parameters of the executable are not being handled correctly by subprocess. Below is a code fragment:</p>
<pre><code>vmrun_cmd = r"c:/Program Files/VMware/VMware Server/vmware-cmd.bat"
def vm_start(target_vm):
list_arg = "start"
list_arg2 = "hard"
if vm_list(target_vm):
p = Popen([vmrun_cmd, target_vm, list_arg, list_arg2], stdout=PIPE).communicate()[0]
print p
else:
vm_register(target_vm)
vm_start(target_vm)
def vm_list2(target_vm):
list_arg = "-l"
p = Popen([vmrun_cmd, list_arg], stdout=PIPE).communicate()[0]
for line in p.split('\n'):
print line
</code></pre>
<p>If I call the vm_list2 function, I get the following output:</p>
<pre><code>$ ./vmware_control.py --list
C:\Virtual Machines\QAW2K3Server\Windows Server 2003 Standard Edition.vmx
C:\Virtual Machines\ubunturouter\Ubuntu.vmx
C:\Virtual Machines\vacc\vacc.vmx
C:\Virtual Machines\EdgeAS-4.4.x\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\UbuntuServer1\Ubuntu.vmx
C:\Virtual Machines\Other Linux 2.4.x kernel\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\QAClient\Windows XP Professional.vmx
</code></pre>
<p>If I call the vm_start function, which requires a path-to-vm parameter, I get the following output:</p>
<pre><code>$ ./vmware_control.py --start "C:\Virtual Machines\ubunturouter\Ubuntu.vmx"
'c:\Program' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>Apparently, the presence of a second parameter with embedded spaces is altering the way that subprocess is interpreting the first parameter. Any suggestions on how to resolve this?</p>
<p>python2.5.2/cygwin/winxp</p>
| 13 | 2009-04-30T01:17:04Z | 805,008 | <p>Why are you using r""? I believe that if you remove the "r" from the beginning, it will be treated as a standard string which may contain spaces. Python should then properly quote the string when sending it to the shell.</p>
| -2 | 2009-04-30T01:22:40Z | [
"python",
"subprocess"
] |
How to use subprocess when multiple arguments contain spaces? | 804,995 | <p>I'm working on a wrapper script that will exercise a vmware executable, allowing for the automation of virtual machine startup/shutdown/register/deregister actions. I'm trying to use subprocess to handle invoking the executable, but the spaces in the executables path and in parameters of the executable are not being handled correctly by subprocess. Below is a code fragment:</p>
<pre><code>vmrun_cmd = r"c:/Program Files/VMware/VMware Server/vmware-cmd.bat"
def vm_start(target_vm):
list_arg = "start"
list_arg2 = "hard"
if vm_list(target_vm):
p = Popen([vmrun_cmd, target_vm, list_arg, list_arg2], stdout=PIPE).communicate()[0]
print p
else:
vm_register(target_vm)
vm_start(target_vm)
def vm_list2(target_vm):
list_arg = "-l"
p = Popen([vmrun_cmd, list_arg], stdout=PIPE).communicate()[0]
for line in p.split('\n'):
print line
</code></pre>
<p>If I call the vm_list2 function, I get the following output:</p>
<pre><code>$ ./vmware_control.py --list
C:\Virtual Machines\QAW2K3Server\Windows Server 2003 Standard Edition.vmx
C:\Virtual Machines\ubunturouter\Ubuntu.vmx
C:\Virtual Machines\vacc\vacc.vmx
C:\Virtual Machines\EdgeAS-4.4.x\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\UbuntuServer1\Ubuntu.vmx
C:\Virtual Machines\Other Linux 2.4.x kernel\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\QAClient\Windows XP Professional.vmx
</code></pre>
<p>If I call the vm_start function, which requires a path-to-vm parameter, I get the following output:</p>
<pre><code>$ ./vmware_control.py --start "C:\Virtual Machines\ubunturouter\Ubuntu.vmx"
'c:\Program' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>Apparently, the presence of a second parameter with embedded spaces is altering the way that subprocess is interpreting the first parameter. Any suggestions on how to resolve this?</p>
<p>python2.5.2/cygwin/winxp</p>
| 13 | 2009-04-30T01:17:04Z | 805,052 | <pre><code>'c:\Program' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>To get this message, you are either:</p>
<ol>
<li><p>Using <code>shell=True</code>:</p>
<pre><code>vmrun_cmd = r"c:\Program Files\VMware\VMware Server\vmware-cmd.bat"
subprocess.Popen(vmrun_cmd, shell=True)
</code></pre></li>
<li><p>Changing vmrun_cmd on other part of your code</p></li>
<li>Getting this error from something inside vmware-cmd.bat</li>
</ol>
<p>Things to try:</p>
<ul>
<li><p>Open a python prompt, run the following command:</p>
<pre><code>subprocess.Popen([r"c:\Program Files\VMware\VMware Server\vmware-cmd.bat"])
</code></pre></li>
</ul>
<p>If that works, then quoting issues are out of the question. If not, you've isolated the problem.</p>
| 3 | 2009-04-30T01:46:59Z | [
"python",
"subprocess"
] |
How to use subprocess when multiple arguments contain spaces? | 804,995 | <p>I'm working on a wrapper script that will exercise a vmware executable, allowing for the automation of virtual machine startup/shutdown/register/deregister actions. I'm trying to use subprocess to handle invoking the executable, but the spaces in the executables path and in parameters of the executable are not being handled correctly by subprocess. Below is a code fragment:</p>
<pre><code>vmrun_cmd = r"c:/Program Files/VMware/VMware Server/vmware-cmd.bat"
def vm_start(target_vm):
list_arg = "start"
list_arg2 = "hard"
if vm_list(target_vm):
p = Popen([vmrun_cmd, target_vm, list_arg, list_arg2], stdout=PIPE).communicate()[0]
print p
else:
vm_register(target_vm)
vm_start(target_vm)
def vm_list2(target_vm):
list_arg = "-l"
p = Popen([vmrun_cmd, list_arg], stdout=PIPE).communicate()[0]
for line in p.split('\n'):
print line
</code></pre>
<p>If I call the vm_list2 function, I get the following output:</p>
<pre><code>$ ./vmware_control.py --list
C:\Virtual Machines\QAW2K3Server\Windows Server 2003 Standard Edition.vmx
C:\Virtual Machines\ubunturouter\Ubuntu.vmx
C:\Virtual Machines\vacc\vacc.vmx
C:\Virtual Machines\EdgeAS-4.4.x\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\UbuntuServer1\Ubuntu.vmx
C:\Virtual Machines\Other Linux 2.4.x kernel\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\QAClient\Windows XP Professional.vmx
</code></pre>
<p>If I call the vm_start function, which requires a path-to-vm parameter, I get the following output:</p>
<pre><code>$ ./vmware_control.py --start "C:\Virtual Machines\ubunturouter\Ubuntu.vmx"
'c:\Program' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>Apparently, the presence of a second parameter with embedded spaces is altering the way that subprocess is interpreting the first parameter. Any suggestions on how to resolve this?</p>
<p>python2.5.2/cygwin/winxp</p>
| 13 | 2009-04-30T01:17:04Z | 805,061 | <p>Two things:</p>
<p>1) You probably don't want to use PIPE.
If the output of the subprogram is greater than 64KB, it is likely your process will hang.
<a href="http://thraxil.org/users/anders/posts/2008/03/13/Subprocess-Hanging-PIPE-is-your-enemy/" rel="nofollow">http://thraxil.org/users/anders/posts/2008/03/13/Subprocess-Hanging-PIPE-is-your-enemy/</a></p>
<p>2) subprocess.Popen has a keyword argument <code>shell</code>, which makes the shell parse your arguments; setting <code>shell=True</code> should do what you want.</p>
| -1 | 2009-04-30T01:49:23Z | [
"python",
"subprocess"
] |
How to use subprocess when multiple arguments contain spaces? | 804,995 | <p>I'm working on a wrapper script that will exercise a vmware executable, allowing for the automation of virtual machine startup/shutdown/register/deregister actions. I'm trying to use subprocess to handle invoking the executable, but the spaces in the executables path and in parameters of the executable are not being handled correctly by subprocess. Below is a code fragment:</p>
<pre><code>vmrun_cmd = r"c:/Program Files/VMware/VMware Server/vmware-cmd.bat"
def vm_start(target_vm):
list_arg = "start"
list_arg2 = "hard"
if vm_list(target_vm):
p = Popen([vmrun_cmd, target_vm, list_arg, list_arg2], stdout=PIPE).communicate()[0]
print p
else:
vm_register(target_vm)
vm_start(target_vm)
def vm_list2(target_vm):
list_arg = "-l"
p = Popen([vmrun_cmd, list_arg], stdout=PIPE).communicate()[0]
for line in p.split('\n'):
print line
</code></pre>
<p>If I call the vm_list2 function, I get the following output:</p>
<pre><code>$ ./vmware_control.py --list
C:\Virtual Machines\QAW2K3Server\Windows Server 2003 Standard Edition.vmx
C:\Virtual Machines\ubunturouter\Ubuntu.vmx
C:\Virtual Machines\vacc\vacc.vmx
C:\Virtual Machines\EdgeAS-4.4.x\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\UbuntuServer1\Ubuntu.vmx
C:\Virtual Machines\Other Linux 2.4.x kernel\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\QAClient\Windows XP Professional.vmx
</code></pre>
<p>If I call the vm_start function, which requires a path-to-vm parameter, I get the following output:</p>
<pre><code>$ ./vmware_control.py --start "C:\Virtual Machines\ubunturouter\Ubuntu.vmx"
'c:\Program' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>Apparently, the presence of a second parameter with embedded spaces is altering the way that subprocess is interpreting the first parameter. Any suggestions on how to resolve this?</p>
<p>python2.5.2/cygwin/winxp</p>
| 13 | 2009-04-30T01:17:04Z | 805,088 | <p>I believe that list2cmdline(), which is doing the processing of your list args, splits any string arg on whitespace unless the <em>string</em> contains double quotes. So I would expect</p>
<pre><code>vmrun_cmd = r'"c:/Program Files/VMware/VMware Server/vmware-cmd.bat"'
</code></pre>
<p>to be what you want.</p>
<p>You'll also likely want to surround the other arguments (like <code>target_vm</code>) in double quotes on the assumption that they, too, each represent a distinct arg to present to the command line. Something like</p>
<pre><code>r'"%s"' % target_vm
</code></pre>
<p>(for example) should suit.</p>
<p>See <a href="http://codespeak.net/py/dist/apigen/api/compat.subprocess.list2cmdline.html" rel="nofollow">the list2cmdline documentation</a></p>
<p>D'A</p>
| 0 | 2009-04-30T02:00:13Z | [
"python",
"subprocess"
] |
How to use subprocess when multiple arguments contain spaces? | 804,995 | <p>I'm working on a wrapper script that will exercise a vmware executable, allowing for the automation of virtual machine startup/shutdown/register/deregister actions. I'm trying to use subprocess to handle invoking the executable, but the spaces in the executables path and in parameters of the executable are not being handled correctly by subprocess. Below is a code fragment:</p>
<pre><code>vmrun_cmd = r"c:/Program Files/VMware/VMware Server/vmware-cmd.bat"
def vm_start(target_vm):
list_arg = "start"
list_arg2 = "hard"
if vm_list(target_vm):
p = Popen([vmrun_cmd, target_vm, list_arg, list_arg2], stdout=PIPE).communicate()[0]
print p
else:
vm_register(target_vm)
vm_start(target_vm)
def vm_list2(target_vm):
list_arg = "-l"
p = Popen([vmrun_cmd, list_arg], stdout=PIPE).communicate()[0]
for line in p.split('\n'):
print line
</code></pre>
<p>If I call the vm_list2 function, I get the following output:</p>
<pre><code>$ ./vmware_control.py --list
C:\Virtual Machines\QAW2K3Server\Windows Server 2003 Standard Edition.vmx
C:\Virtual Machines\ubunturouter\Ubuntu.vmx
C:\Virtual Machines\vacc\vacc.vmx
C:\Virtual Machines\EdgeAS-4.4.x\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\UbuntuServer1\Ubuntu.vmx
C:\Virtual Machines\Other Linux 2.4.x kernel\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\QAClient\Windows XP Professional.vmx
</code></pre>
<p>If I call the vm_start function, which requires a path-to-vm parameter, I get the following output:</p>
<pre><code>$ ./vmware_control.py --start "C:\Virtual Machines\ubunturouter\Ubuntu.vmx"
'c:\Program' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>Apparently, the presence of a second parameter with embedded spaces is altering the way that subprocess is interpreting the first parameter. Any suggestions on how to resolve this?</p>
<p>python2.5.2/cygwin/winxp</p>
| 13 | 2009-04-30T01:17:04Z | 805,228 | <p>Possibly stupid suggestion, but perhaps try the following, to remove subprocess + spaces from the equation:</p>
<pre><code>import os
from subprocess import Popen, PIPE
os.chdir(
os.path.join("C:", "Program Files", "VMware", "VMware Server")
)
p = Popen(
["vmware-cmd.bat", target_vm, list_arg, list_arg2],
stdout=PIPE
).communicate()[0]
</code></pre>
<p>It might also be worth trying..</p>
<pre><code>p = Popen(
[os.path.join("C:", "Program Files", "VMware", "VMware Server", "vmware-cmd.bat"), ...
</code></pre>
| -2 | 2009-04-30T03:17:30Z | [
"python",
"subprocess"
] |
How to use subprocess when multiple arguments contain spaces? | 804,995 | <p>I'm working on a wrapper script that will exercise a vmware executable, allowing for the automation of virtual machine startup/shutdown/register/deregister actions. I'm trying to use subprocess to handle invoking the executable, but the spaces in the executables path and in parameters of the executable are not being handled correctly by subprocess. Below is a code fragment:</p>
<pre><code>vmrun_cmd = r"c:/Program Files/VMware/VMware Server/vmware-cmd.bat"
def vm_start(target_vm):
list_arg = "start"
list_arg2 = "hard"
if vm_list(target_vm):
p = Popen([vmrun_cmd, target_vm, list_arg, list_arg2], stdout=PIPE).communicate()[0]
print p
else:
vm_register(target_vm)
vm_start(target_vm)
def vm_list2(target_vm):
list_arg = "-l"
p = Popen([vmrun_cmd, list_arg], stdout=PIPE).communicate()[0]
for line in p.split('\n'):
print line
</code></pre>
<p>If I call the vm_list2 function, I get the following output:</p>
<pre><code>$ ./vmware_control.py --list
C:\Virtual Machines\QAW2K3Server\Windows Server 2003 Standard Edition.vmx
C:\Virtual Machines\ubunturouter\Ubuntu.vmx
C:\Virtual Machines\vacc\vacc.vmx
C:\Virtual Machines\EdgeAS-4.4.x\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\UbuntuServer1\Ubuntu.vmx
C:\Virtual Machines\Other Linux 2.4.x kernel\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\QAClient\Windows XP Professional.vmx
</code></pre>
<p>If I call the vm_start function, which requires a path-to-vm parameter, I get the following output:</p>
<pre><code>$ ./vmware_control.py --start "C:\Virtual Machines\ubunturouter\Ubuntu.vmx"
'c:\Program' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>Apparently, the presence of a second parameter with embedded spaces is altering the way that subprocess is interpreting the first parameter. Any suggestions on how to resolve this?</p>
<p>python2.5.2/cygwin/winxp</p>
| 13 | 2009-04-30T01:17:04Z | 805,309 | <p>In Python on MS Windows, the subprocess.Popen class uses the CreateProcess API to start the process. CreateProcess takes a string rather than something like an array of arguments. Python uses subprocess.list2cmdline to convert the list of args to a string for CreateProcess.</p>
<p>If I were you, I'd see what subprocess.list2cmdline(args) returns (where args is the first argument of Popen). It would be interesting to see if it is putting quotes around the first argument.</p>
<p>Of course, this explanation might not apply in a Cygwin environment.</p>
<p>Having said all this, I don't have MS Windows.</p>
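<p>Following that suggestion, here is a hedged illustration (with the paths from the question) of what <code>subprocess.list2cmdline()</code> does with arguments containing spaces — each such argument comes back wrapped in double quotes, so CreateProcess sees it as a single argument:</p>

```python
import subprocess

# Inspect the command line string that Popen would hand to CreateProcess.
args = [r'c:/Program Files/VMware/VMware Server/vmware-cmd.bat',
        r'C:\Virtual Machines\ubunturouter\Ubuntu.vmx',
        'start', 'hard']
cmdline = subprocess.list2cmdline(args)
# The two paths containing spaces are now individually double-quoted;
# the plain arguments 'start' and 'hard' are left as-is.
```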
| 2 | 2009-04-30T04:11:00Z | [
"python",
"subprocess"
] |
How to use subprocess when multiple arguments contain spaces? | 804,995 | <p>I'm working on a wrapper script that will exercise a vmware executable, allowing for the automation of virtual machine startup/shutdown/register/deregister actions. I'm trying to use subprocess to handle invoking the executable, but the spaces in the executables path and in parameters of the executable are not being handled correctly by subprocess. Below is a code fragment:</p>
<pre><code>vmrun_cmd = r"c:/Program Files/VMware/VMware Server/vmware-cmd.bat"
def vm_start(target_vm):
list_arg = "start"
list_arg2 = "hard"
if vm_list(target_vm):
p = Popen([vmrun_cmd, target_vm, list_arg, list_arg2], stdout=PIPE).communicate()[0]
print p
else:
vm_register(target_vm)
vm_start(target_vm)
def vm_list2(target_vm):
list_arg = "-l"
p = Popen([vmrun_cmd, list_arg], stdout=PIPE).communicate()[0]
for line in p.split('\n'):
print line
</code></pre>
<p>If I call the vm_list2 function, I get the following output:</p>
<pre><code>$ ./vmware_control.py --list
C:\Virtual Machines\QAW2K3Server\Windows Server 2003 Standard Edition.vmx
C:\Virtual Machines\ubunturouter\Ubuntu.vmx
C:\Virtual Machines\vacc\vacc.vmx
C:\Virtual Machines\EdgeAS-4.4.x\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\UbuntuServer1\Ubuntu.vmx
C:\Virtual Machines\Other Linux 2.4.x kernel\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\QAClient\Windows XP Professional.vmx
</code></pre>
<p>If I call the vm_start function, which requires a path-to-vm parameter, I get the following output:</p>
<pre><code>$ ./vmware_control.py --start "C:\Virtual Machines\ubunturouter\Ubuntu.vmx"
'c:\Program' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>Apparently, the presence of a second parameter with embedded spaces is altering the way that subprocess is interpreting the first parameter. Any suggestions on how to resolve this?</p>
<p>python2.5.2/cygwin/winxp</p>
| 13 | 2009-04-30T01:17:04Z | 806,857 | <p>Here's what I don't like</p>
<pre><code>vmrun_cmd = r"c:/Program Files/VMware/VMware Server/vmware-cmd.bat"
</code></pre>
<p>You've got spaces in the name of the command itself -- which is baffling your shell. Hence the "'c:\Program' is not recognized as an internal or external command,
operable program or batch file."</p>
<p>Option 1 -- put your .BAT file somewhere else. Indeed, put all your VMWare somewhere else. Here's the rule: <strong>Do Not Use "Program Files" Directory For Anything.</strong> It's just wrong.</p>
<p>Option 2 -- quote the <code>vmrun_cmd</code> value</p>
<pre><code>vmrun_cmd = r'"c:/Program Files/VMware/VMware Server/vmware-cmd.bat"'
</code></pre>
| -2 | 2009-04-30T13:19:59Z | [
"python",
"subprocess"
] |
How to use subprocess when multiple arguments contain spaces? | 804,995 | <p>I'm working on a wrapper script that will exercise a vmware executable, allowing for the automation of virtual machine startup/shutdown/register/deregister actions. I'm trying to use subprocess to handle invoking the executable, but the spaces in the executables path and in parameters of the executable are not being handled correctly by subprocess. Below is a code fragment:</p>
<pre><code>vmrun_cmd = r"c:/Program Files/VMware/VMware Server/vmware-cmd.bat"
def vm_start(target_vm):
list_arg = "start"
list_arg2 = "hard"
if vm_list(target_vm):
p = Popen([vmrun_cmd, target_vm, list_arg, list_arg2], stdout=PIPE).communicate()[0]
print p
else:
vm_register(target_vm)
vm_start(target_vm)
def vm_list2(target_vm):
list_arg = "-l"
p = Popen([vmrun_cmd, list_arg], stdout=PIPE).communicate()[0]
for line in p.split('\n'):
print line
</code></pre>
<p>If I call the vm_list2 function, I get the following output:</p>
<pre><code>$ ./vmware_control.py --list
C:\Virtual Machines\QAW2K3Server\Windows Server 2003 Standard Edition.vmx
C:\Virtual Machines\ubunturouter\Ubuntu.vmx
C:\Virtual Machines\vacc\vacc.vmx
C:\Virtual Machines\EdgeAS-4.4.x\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\UbuntuServer1\Ubuntu.vmx
C:\Virtual Machines\Other Linux 2.4.x kernel\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\QAClient\Windows XP Professional.vmx
</code></pre>
<p>If I call the vm_start function, which requires a path-to-vm parameter, I get the following output:</p>
<pre><code>$ ./vmware_control.py --start "C:\Virtual Machines\ubunturouter\Ubuntu.vmx"
'c:\Program' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>Apparently, the presence of a second parameter with embedded spaces is altering the way that subprocess is interpreting the first parameter. Any suggestions on how to resolve this?</p>
<p>python2.5.2/cygwin/winxp</p>
| 13 | 2009-04-30T01:17:04Z | 806,971 | <p>If you have spaces in the path, the easiest way I've found to get them interpreted properly is this.</p>
<pre><code>subprocess.call('""' + path + '""')
</code></pre>
<p>I don't know why exactly it needs double double quotes, but that is what works.</p>
| 4 | 2009-04-30T13:41:10Z | [
"python",
"subprocess"
] |
How to use subprocess when multiple arguments contain spaces? | 804,995 | <p>I'm working on a wrapper script that will exercise a vmware executable, allowing for the automation of virtual machine startup/shutdown/register/deregister actions. I'm trying to use subprocess to handle invoking the executable, but the spaces in the executables path and in parameters of the executable are not being handled correctly by subprocess. Below is a code fragment:</p>
<pre><code>vmrun_cmd = r"c:/Program Files/VMware/VMware Server/vmware-cmd.bat"
def vm_start(target_vm):
list_arg = "start"
list_arg2 = "hard"
if vm_list(target_vm):
p = Popen([vmrun_cmd, target_vm, list_arg, list_arg2], stdout=PIPE).communicate()[0]
print p
else:
vm_register(target_vm)
vm_start(target_vm)
def vm_list2(target_vm):
list_arg = "-l"
p = Popen([vmrun_cmd, list_arg], stdout=PIPE).communicate()[0]
for line in p.split('\n'):
print line
</code></pre>
<p>If I call the vm_list2 function, I get the following output:</p>
<pre><code>$ ./vmware_control.py --list
C:\Virtual Machines\QAW2K3Server\Windows Server 2003 Standard Edition.vmx
C:\Virtual Machines\ubunturouter\Ubuntu.vmx
C:\Virtual Machines\vacc\vacc.vmx
C:\Virtual Machines\EdgeAS-4.4.x\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\UbuntuServer1\Ubuntu.vmx
C:\Virtual Machines\Other Linux 2.4.x kernel\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\QAClient\Windows XP Professional.vmx
</code></pre>
<p>If I call the vm_start function, which requires a path-to-vm parameter, I get the following output:</p>
<pre><code>$ ./vmware_control.py --start "C:\Virtual Machines\ubunturouter\Ubuntu.vmx"
'c:\Program' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>Apparently, the presence of a second parameter with embedded spaces is altering the way that subprocess is interpreting the first parameter. Any suggestions on how to resolve this?</p>
<p>python2.5.2/cygwin/winxp</p>
| 13 | 2009-04-30T01:17:04Z | 992,989 | <p>One problem is that if the command is surrounded with quotes and doesn't have spaces, that could also confuse the shell.</p>
<p>So I do this:</p>
<pre><code>if ' ' in raw_cmd:
fmt = '"%s"'
else:
fmt = '%s'
cmd = fmt % raw_cmd
</code></pre>
| 0 | 2009-06-14T15:09:33Z | [
"python",
"subprocess"
] |
How to use subprocess when multiple arguments contain spaces? | 804,995 | <p>I'm working on a wrapper script that will exercise a vmware executable, allowing for the automation of virtual machine startup/shutdown/register/deregister actions. I'm trying to use subprocess to handle invoking the executable, but the spaces in the executables path and in parameters of the executable are not being handled correctly by subprocess. Below is a code fragment:</p>
<pre><code>vmrun_cmd = r"c:/Program Files/VMware/VMware Server/vmware-cmd.bat"
def vm_start(target_vm):
list_arg = "start"
list_arg2 = "hard"
if vm_list(target_vm):
p = Popen([vmrun_cmd, target_vm, list_arg, list_arg2], stdout=PIPE).communicate()[0]
print p
else:
vm_register(target_vm)
vm_start(target_vm)
def vm_list2(target_vm):
list_arg = "-l"
p = Popen([vmrun_cmd, list_arg], stdout=PIPE).communicate()[0]
for line in p.split('\n'):
print line
</code></pre>
<p>If I call the vm_list2 function, I get the following output:</p>
<pre><code>$ ./vmware_control.py --list
C:\Virtual Machines\QAW2K3Server\Windows Server 2003 Standard Edition.vmx
C:\Virtual Machines\ubunturouter\Ubuntu.vmx
C:\Virtual Machines\vacc\vacc.vmx
C:\Virtual Machines\EdgeAS-4.4.x\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\UbuntuServer1\Ubuntu.vmx
C:\Virtual Machines\Other Linux 2.4.x kernel\Other Linux 2.4.x kernel.vmx
C:\Virtual Machines\QAClient\Windows XP Professional.vmx
</code></pre>
<p>If I call the vm_start function, which requires a path-to-vm parameter, I get the following output:</p>
<pre><code>$ ./vmware_control.py --start "C:\Virtual Machines\ubunturouter\Ubuntu.vmx"
'c:\Program' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>Apparently, the presence of a second parameter with embedded spaces is altering the way that subprocess is interpreting the first parameter. Any suggestions on how to resolve this?</p>
<p>python2.5.2/cygwin/winxp</p>
| 13 | 2009-04-30T01:17:04Z | 36,502,646 | <p>That was quite a hard problem for the last three hours... nothing stated so far worked, neither using r"" nor Popen with a list, and so on. What did work in the end was a combination of a format string and r"". So my solution is this:</p>
<pre><code>subprocess.Popen("{0} -f {1}".format(pathToExe, r'"%s"' % pathToVideoFileOrDir))
</code></pre>
<p>where both variables pathToExe and pathToVideoFileOrDir have whitespace in their paths. Using \" within the formatted string did not work and resulted in the same error: the first path was no longer detected correctly.</p>
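<p>For illustration, the string this builds (with invented paths) looks like the following — note that only the second argument ends up double-quoted:</p>

```python
# Hypothetical paths, just to show the resulting command string.
pathToExe = r'C:\Tools\player.exe'
pathToVideoFileOrDir = r'C:\My Videos\clip.mp4'
cmd = "{0} -f {1}".format(pathToExe, r'"%s"' % pathToVideoFileOrDir)
# cmd is now:  C:\Tools\player.exe -f "C:\My Videos\clip.mp4"
```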
| 0 | 2016-04-08T14:48:01Z | [
"python",
"subprocess"
] |
Call a parent class's method from child class in Python? | 805,066 | <p>When creating a simple object hierarchy in Python, I'd like to be able to invoke methods of the parent class from a derived class. In Perl and Java, there is a keyword for this (super). In Perl, I might do this:</p>
<pre><code>package Foo;
sub frotz {
return "Bamf";
}
package Bar;
@ISA = qw(Foo);
sub frotz {
my $str = SUPER::frotz();
return uc($str);
}
</code></pre>
<p>In python, it appears that I have to name the parent class explicitly from the child.
In the example above, I'd have to do something like Foo::frotz(). </p>
<p>This doesn't seem right, since this behavior makes it hard to make deep hierarchies. If children need to know what class defined an inherited method, then all sorts of information pain is created. </p>
<p>Is this an actual limitation in python, a gap in my understanding or both?</p>
| 271 | 2009-04-30T01:52:31Z | 805,081 | <p>Yes, but only with <a href="http://docs.python.org/glossary.html#term-new-style-class">new-style classes</a>. Use the <a href="http://docs.python.org/library/functions.html#super"><code>super()</code></a> function:</p>
<pre><code>class Foo(Bar):
def baz(self, arg):
return super(Foo, self).baz(arg)
</code></pre>
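<p>A runnable sketch of the same pattern (class and method names are invented):</p>

```python
class Bar(object):
    def baz(self, arg):
        return 'Bar.baz(%s)' % arg

class Foo(Bar):
    def baz(self, arg):
        # Delegate to the parent implementation without naming Bar directly.
        return 'wrapped ' + super(Foo, self).baz(arg)
```

<p>In Python 3 the call shortens to <code>super().baz(arg)</code>.</p>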
| 328 | 2009-04-30T01:58:28Z | [
"python",
"inheritance",
"class",
"object"
] |
Call a parent class's method from child class in Python? | 805,066 | <p>When creating a simple object hierarchy in Python, I'd like to be able to invoke methods of the parent class from a derived class. In Perl and Java, there is a keyword for this (super). In Perl, I might do this:</p>
<pre><code>package Foo;
sub frotz {
return "Bamf";
}
package Bar;
@ISA = qw(Foo);
sub frotz {
my $str = SUPER::frotz();
return uc($str);
}
</code></pre>
<p>In python, it appears that I have to name the parent class explicitly from the child.
In the example above, I'd have to do something like Foo::frotz(). </p>
<p>This doesn't seem right, since this behavior makes it hard to make deep hierarchies. If children need to know what class defined an inherited method, then all sorts of information pain is created. </p>
<p>Is this an actual limitation in python, a gap in my understanding or both?</p>
| 271 | 2009-04-30T01:52:31Z | 805,082 | <p>Python also has <a href="http://docs.python.org/library/functions.html" rel="nofollow">super</a> as well: </p>
<p><code><strong>super</strong>(type[, object-or-type])</code></p>
<blockquote>
<p>Return a proxy object that delegates method calls to a parent or sibling class of type.
This is useful for accessing inherited methods that have been overridden in a class.
The search order is same as that used by getattr() except that the type itself is skipped.</p>
</blockquote>
<p>Example: </p>
<pre><code>class A(object): # deriving from 'object' declares A as a 'new-style-class'
def foo(self):
print "foo"
class B(A):
def foo(self):
super(B, self).foo() # calls 'A.foo()'
myB = B()
myB.foo()
</code></pre>
| 46 | 2009-04-30T01:58:31Z | [
"python",
"inheritance",
"class",
"object"
] |
Call a parent class's method from child class in Python? | 805,066 | <p>When creating a simple object hierarchy in Python, I'd like to be able to invoke methods of the parent class from a derived class. In Perl and Java, there is a keyword for this (super). In Perl, I might do this:</p>
<pre><code>package Foo;
sub frotz {
return "Bamf";
}
package Bar;
@ISA = qw(Foo);
sub frotz {
my $str = SUPER::frotz();
return uc($str);
}
</code></pre>
<p>In python, it appears that I have to name the parent class explicitly from the child.
In the example above, I'd have to do something like Foo::frotz(). </p>
<p>This doesn't seem right, since this behavior makes it hard to make deep hierarchies. If children need to know what class defined an inherited method, then all sorts of information pain is created. </p>
<p>Is this an actual limitation in python, a gap in my understanding or both?</p>
| 271 | 2009-04-30T01:52:31Z | 805,085 | <p>There's a super() in Python too. It's a bit wonky, because of Python's old- and new-style classes, but is quite commonly used e.g. in constructors:</p>
<pre><code>class Foo(Bar):
def __init__(self):
super(Foo, self).__init__()
self.baz = 5
</code></pre>
| 9 | 2009-04-30T01:58:46Z | [
"python",
"inheritance",
"class",
"object"
] |
Call a parent class's method from child class in Python? | 805,066 | <p>When creating a simple object hierarchy in Python, I'd like to be able to invoke methods of the parent class from a derived class. In Perl and Java, there is a keyword for this (super). In Perl, I might do this:</p>
<pre><code>package Foo;
sub frotz {
return "Bamf";
}
package Bar;
@ISA = qw(Foo);
sub frotz {
my $str = SUPER::frotz();
return uc($str);
}
</code></pre>
<p>In python, it appears that I have to name the parent class explicitly from the child.
In the example above, I'd have to do something like Foo::frotz(). </p>
<p>This doesn't seem right, since this behavior makes it hard to make deep hierarchies. If children need to know what class defined an inherited method, then all sorts of information pain is created. </p>
<p>Is this an actual limitation in python, a gap in my understanding or both?</p>
| 271 | 2009-04-30T01:52:31Z | 805,091 | <pre><code>ImmediateParentClass.frotz(self)
</code></pre>
<p>will be just fine, whether the immediate parent class defined <code>frotz</code> itself or inherited it. <code>super</code> is only needed for proper support of <em>multiple</em> inheritance (and then it only works if every class uses it properly). In general, <code>AnyClass.whatever</code> is going to look up <code>whatever</code> in <code>AnyClass</code>'s ancestors if <code>AnyClass</code> doesn't define/override it, and this holds true for "child class calling parent's method" as for any other occurrence!</p>
| 50 | 2009-04-30T02:02:38Z | [
"python",
"inheritance",
"class",
"object"
] |
Call a parent class's method from child class in Python? | 805,066 | <p>When creating a simple object hierarchy in Python, I'd like to be able to invoke methods of the parent class from a derived class. In Perl and Java, there is a keyword for this (super). In Perl, I might do this:</p>
<pre><code>package Foo;
sub frotz {
return "Bamf";
}
package Bar;
@ISA = qw(Foo);
sub frotz {
my $str = SUPER::frotz();
return uc($str);
}
</code></pre>
<p>In python, it appears that I have to name the parent class explicitly from the child.
In the example above, I'd have to do something like Foo::frotz(). </p>
<p>This doesn't seem right, since this behavior makes it hard to make deep hierarchies. If children need to know what class defined an inherited method, then all sorts of information pain is created. </p>
<p>Is this an actual limitation in python, a gap in my understanding or both?</p>
| 271 | 2009-04-30T01:52:31Z | 6,407,157 | <p>This is a more abstract method:</p>
<pre class="lang-py prettyprint-override"><code># caution: self.__class__ is always the most-derived class, so this
# call recurses infinitely if the class is subclassed further
super(self.__class__, self).baz(arg)
| -19 | 2011-06-20T05:32:22Z | [
"python",
"inheritance",
"class",
"object"
] |
Call a parent class's method from child class in Python? | 805,066 | <p>When creating a simple object hierarchy in Python, I'd like to be able to invoke methods of the parent class from a derived class. In Perl and Java, there is a keyword for this (super). In Perl, I might do this:</p>
<pre><code>package Foo;
sub frotz {
return "Bamf";
}
package Bar;
@ISA = qw(Foo);
sub frotz {
my $str = SUPER::frotz();
return uc($str);
}
</code></pre>
<p>In python, it appears that I have to name the parent class explicitly from the child.
In the example above, I'd have to do something like Foo::frotz(). </p>
<p>This doesn't seem right, since this behavior makes it hard to make deep hierarchies. If children need to know what class defined an inherited method, then all sorts of information pain is created. </p>
<p>Is this an actual limitation in python, a gap in my understanding or both?</p>
| 271 | 2009-04-30T01:52:31Z | 8,689,564 | <p>I would recommend using <code>CLASS.__bases__</code>
something like this</p>
<pre><code>class A:
def __init__(self):
print "I am Class %s"%self.__class__.__name__
for parentClass in self.__class__.__bases__:
print " I am inherited from:",parentClass.__name__
#parentClass.foo(self) <- call parents function with self as first param
class B(A):pass
class C(B):pass
a,b,c = A(),B(),C()
</code></pre>
| 5 | 2011-12-31T17:39:11Z | [
"python",
"inheritance",
"class",
"object"
] |
Call a parent class's method from child class in Python? | 805,066 | <p>When creating a simple object hierarchy in Python, I'd like to be able to invoke methods of the parent class from a derived class. In Perl and Java, there is a keyword for this (super). In Perl, I might do this:</p>
<pre><code>package Foo;
sub frotz {
return "Bamf";
}
package Bar;
@ISA = qw(Foo);
sub frotz {
my $str = SUPER::frotz();
return uc($str);
}
</code></pre>
<p>In python, it appears that I have to name the parent class explicitly from the child.
In the example above, I'd have to do something like Foo::frotz(). </p>
<p>This doesn't seem right, since this behavior makes it hard to make deep hierarchies. If children need to know what class defined an inherited method, then all sorts of information pain is created. </p>
<p>Is this an actual limitation in python, a gap in my understanding or both?</p>
| 271 | 2009-04-30T01:52:31Z | 16,057,968 | <p>Here is an example of using <strong>super()</strong>:</p>
<pre><code>#New-style classes inherit from object, or from another new-style class
class Dog(object):
name = ''
moves = []
def __init__(self, name):
self.name = name
def moves_setup(self):
self.moves.append('walk')
self.moves.append('run')
def get_moves(self):
return self.moves
class Superdog(Dog):
#Let's try to append new fly ability to our Superdog
def moves_setup(self):
#Set default moves by calling method of parent class
super(Superdog, self).moves_setup()
self.moves.append('fly')
dog = Superdog('Freddy')
print dog.name # Freddy
dog.moves_setup()
print dog.get_moves() # ['walk', 'run', 'fly'].
#As you can see our Superdog has all moves defined in the base Dog class
</code></pre>
| 17 | 2013-04-17T10:44:59Z | [
"python",
"inheritance",
"class",
"object"
] |
Call a parent class's method from child class in Python? | 805,066 | <p>When creating a simple object hierarchy in Python, I'd like to be able to invoke methods of the parent class from a derived class. In Perl and Java, there is a keyword for this (super). In Perl, I might do this:</p>
<pre><code>package Foo;
sub frotz {
return "Bamf";
}
package Bar;
@ISA = qw(Foo);
sub frotz {
my $str = SUPER::frotz();
return uc($str);
}
</code></pre>
<p>In python, it appears that I have to name the parent class explicitly from the child.
In the example above, I'd have to do something like Foo::frotz(). </p>
<p>This doesn't seem right, since this behavior makes it hard to make deep hierarchies. If children need to know what class defined an inherited method, then all sorts of information pain is created. </p>
<p>Is this an actual limitation in python, a gap in my understanding or both?</p>
| 271 | 2009-04-30T01:52:31Z | 27,402,103 | <p>There is a super() in python also.</p>
<p>Example for how a super class method is called from a sub class method</p>
<pre><code>class Dog(object):
name = ''
moves = []
def __init__(self, name):
self.name = name
def moves_setup(self,x):
self.moves.append('walk')
self.moves.append('run')
self.moves.append(x)
def get_moves(self):
return self.moves
class Superdog(Dog):
#Let's try to append new fly ability to our Superdog
def moves_setup(self):
#Set default moves by calling method of parent class
super().moves_setup("hello world")
self.moves.append('fly')
dog = Superdog('Freddy')
print (dog.name)
dog.moves_setup()
print (dog.get_moves())
</code></pre>
<p>This example is similar to the one explained above, with one difference: <code>super()</code> is called without any arguments. This code runs on Python 3.4.</p>
| 4 | 2014-12-10T13:21:18Z | [
"python",
"inheritance",
"class",
"object"
] |
Call a parent class's method from child class in Python? | 805,066 | <p>When creating a simple object hierarchy in Python, I'd like to be able to invoke methods of the parent class from a derived class. In Perl and Java, there is a keyword for this (super). In Perl, I might do this:</p>
<pre><code>package Foo;
sub frotz {
return "Bamf";
}
package Bar;
@ISA = qw(Foo);
sub frotz {
my $str = SUPER::frotz();
return uc($str);
}
</code></pre>
<p>In python, it appears that I have to name the parent class explicitly from the child.
In the example above, I'd have to do something like Foo::frotz(). </p>
<p>This doesn't seem right, since this behavior makes it hard to make deep hierarchies. If children need to know what class defined an inherited method, then all sorts of information pain is created. </p>
<p>Is this an actual limitation in python, a gap in my understanding or both?</p>
| 271 | 2009-04-30T01:52:31Z | 34,183,634 | <p>If you don't know how many arguments you might get, and want to pass them all through to the child as well:</p>
<pre><code>class Foo(Bar):
def baz(self, arg, *args, **kwargs):
# ... Do your thing
return super(Foo, self).baz(arg, *args, **kwargs)
</code></pre>
<p>(From: <a href="http://stackoverflow.com/questions/5031711/python-cleanest-way-to-override-init-where-an-optional-kwarg-must-be-used">Python - Cleanest way to override __init__ where an optional kwarg must be used after the super() call?</a>)</p>
| 2 | 2015-12-09T16:13:51Z | [
"python",
"inheritance",
"class",
"object"
] |
Call a parent class's method from child class in Python? | 805,066 | <p>When creating a simple object hierarchy in Python, I'd like to be able to invoke methods of the parent class from a derived class. In Perl and Java, there is a keyword for this (super). In Perl, I might do this:</p>
<pre><code>package Foo;
sub frotz {
return "Bamf";
}
package Bar;
@ISA = qw(Foo);
sub frotz {
my $str = SUPER::frotz();
return uc($str);
}
</code></pre>
<p>In python, it appears that I have to name the parent class explicitly from the child.
In the example above, I'd have to do something like Foo::frotz(). </p>
<p>This doesn't seem right, since this behavior makes it hard to make deep hierarchies. If children need to know what class defined an inherited method, then all sorts of information pain is created. </p>
<p>Is this an actual limitation in python, a gap in my understanding or both?</p>
| 271 | 2009-04-30T01:52:31Z | 38,587,017 | <p>Python 3 has a different and simpler syntax for calling parent method.
If <code>Foo</code> class inherits from <code>Bar</code>, then from <code>Bar.__init__</code> can be invoked from <code>Foo</code> using <code>super().__init__()</code>:</p>
<pre><code>class Foo(Bar):
def __init__(self):
super().__init__()
</code></pre>
| 3 | 2016-07-26T10:12:08Z | [
"python",
"inheritance",
"class",
"object"
] |
Server Side Google Markers Clustering - Python/Django | 805,072 | <p>After experimenting with client side approach to clustering large numbers of Google markers I decided that it won't be possible for my project (social network with 28,000+ users).</p>
<p>Are there any examples of clustering the coordinates on the server side - preferably in Python/Django?</p>
<p>The way I would like this to work is to gradually index the markers based on their proximity (radius) and zoom level. </p>
<p>In other words, when a new user registers he/she is automatically assigned to a certain 'group' of markers that are close to each other, thus increasing that group's counter. What's being sent to the server is just a small number of 'groups'. Only when the zoom level/scale of the map is 1:1 are actual users shown on the map. </p>
<p>That way the client side will have to deal only with 10-50 markers per request/zoom level. </p>
| 7 | 2009-04-30T01:55:16Z | 806,705 | <p>One way to do it would be to define a grid with a unit size based on the zoom level. So you collect up all the items within a grid by lat,lon to one decimal place. An example is 42.2x73.4. So a point at 42.2003x73.4021 falls in that grid cell. That cell is bounded by 42.2x73.3 and 42.2x73.5.</p>
<p>If there are one or more points in a grid cell, you place a marker in the center of that grid.</p>
<p>You then hook up the zoomend event and change your grid size accordingly, and redraw the markers.</p>
<p><a href="http://code.google.com/apis/maps/documentation/reference.html#GMap2.zoomend" rel="nofollow">http://code.google.com/apis/maps/documentation/reference.html#GMap2.zoomend</a></p>
| 0 | 2009-04-30T12:40:04Z | [
"python",
"django",
"json",
"google-maps",
"markerclusterer"
] |
Server Side Google Markers Clustering - Python/Django | 805,072 | <p>After experimenting with client side approach to clustering large numbers of Google markers I decided that it won't be possible for my project (social network with 28,000+ users).</p>
<p>Are there any examples of clustering the coordinates on the server side - preferably in Python/Django?</p>
<p>The way I would like this to work is to gradually index the markers based on their proximity (radius) and zoom level. </p>
<p>In other words, when a new user registers he/she is automatically assigned to a certain 'group' of markers that are close to each other, thus increasing that group's counter. What's being sent to the server is just a small number of 'groups'. Only when the zoom level/scale of the map is 1:1 are actual users shown on the map. </p>
<p>That way the client side will have to deal only with 10-50 markers per request/zoom level. </p>
| 7 | 2009-04-30T01:55:16Z | 807,349 | <p><a href="http://www.maptimize.com/" rel="nofollow">This</a> is a paid service that uses server-side clustering, but I'm not sure how it works. I'm guessing that they just use your data to generate the markers to be shown at each zoom level.</p>
<p><strong>Update:</strong> <a href="http://www.appelsiini.net/2008/11/introduction-to-marker-clustering-with-google-maps" rel="nofollow">This tutorial</a> demonstrates a basic server-side clustering function. It's written in PHP for the Static Maps API, but you could use it as a starting point.</p>
| 2 | 2009-04-30T14:57:03Z | [
"python",
"django",
"json",
"google-maps",
"markerclusterer"
] |
Server Side Google Markers Clustering - Python/Django | 805,072 | <p>After experimenting with client side approach to clustering large numbers of Google markers I decided that it won't be possible for my project (social network with 28,000+ users).</p>
<p>Are there any examples of clustering the coordinates on the server side - preferably in Python/Django?</p>
<p>The way I would like this to work is to gradually index the markers based on their proximity (radius) and zoom level. </p>
<p>In other words, when a new user registers he/she is automatically assigned to a certain 'group' of markers that are close to each other, thus increasing that group's counter. What's being sent to the server is just a small number of 'groups'. Only when the zoom level/scale of the map is 1:1 are actual users shown on the map. </p>
<p>That way the client side will have to deal only with 10-50 markers per request/zoom level. </p>
| 7 | 2009-04-30T01:55:16Z | 840,756 | <p>You could simply drop decimals based on the zoom level. Would that work for you?</p>
<p>Our geo indexes are based on morton numbers:
<a href="http://www.rooftopsolutions.nl/article/231" rel="nofollow">http://www.rooftopsolutions.nl/article/231</a> (shameless self promotion).</p>
<p>If you want more precision than the 10-base system, the morton number will allow you to increase the zoom level on a 2-base number system, simply by doing something like:</p>
<p>GROUP BY (morton XOR (-precision))</p>
<p>The higher precision gets, the more items will be clustered.</p>
| 0 | 2009-05-08T16:43:30Z | [
"python",
"django",
"json",
"google-maps",
"markerclusterer"
] |
Server Side Google Markers Clustering - Python/Django | 805,072 | <p>After experimenting with client side approach to clustering large numbers of Google markers I decided that it won't be possible for my project (social network with 28,000+ users).</p>
<p>Are there any examples of clustering the coordinates on the server side - preferably in Python/Django?</p>
<p>The way I would like this to work is to gradually index the markers based on their proximity (radius) and zoom level. </p>
<p>In other words, when a new user registers he/she is automatically assigned to a certain 'group' of markers that are close to each other, thus increasing that group's counter. What's being sent to the server is just a small number of 'groups'. Only when the zoom level/scale of the map is 1:1 are actual users shown on the map. </p>
<p>That way the client side will have to deal only with 10-50 markers per request/zoom level. </p>
| 7 | 2009-04-30T01:55:16Z | 1,626,690 | <p>I am using Django and Python to cluster real estate and rental listings, and the source can be found <a href="http://forum.mapaplace.com/discussion/3/server-side-marker-clustering-python-source-code/" rel="nofollow">here</a>.</p>
<p>Hope it helps!</p>
| 0 | 2009-10-26T18:58:57Z | [
"python",
"django",
"json",
"google-maps",
"markerclusterer"
] |
Server Side Google Markers Clustering - Python/Django | 805,072 | <p>After experimenting with client side approach to clustering large numbers of Google markers I decided that it won't be possible for my project (social network with 28,000+ users).</p>
<p>Are there any examples of clustering the coordinates on the server side - preferably in Python/Django?</p>
<p>The way I would like this to work is to gradually index the markers based on their proximity (radius) and zoom level. </p>
<p>In other words, when a new user registers he/she is automatically assigned to a certain 'group' of markers that are close to each other, thus increasing that group's counter. What's being sent to the server is just a small number of 'groups'. Only when the zoom level/scale of the map is 1:1 are actual users shown on the map. </p>
<p>That way the client side will have to deal only with 10-50 markers per request/zoom level. </p>
| 7 | 2009-04-30T01:55:16Z | 1,989,520 | <p>I wrote a blog post about my approach using Python and Django here:</p>
<p><a href="http://www.quanative.com/2010/01/01/server-side-marker-clustering-for-google-maps-with-python/" rel="nofollow">http://www.quanative.com/2010/01/01/server-side-marker-clustering-for-google-maps-with-python/</a></p>
| -4 | 2010-01-01T20:22:54Z | [
"python",
"django",
"json",
"google-maps",
"markerclusterer"
] |
Server Side Google Markers Clustering - Python/Django | 805,072 | <p>After experimenting with client side approach to clustering large numbers of Google markers I decided that it won't be possible for my project (social network with 28,000+ users).</p>
<p>Are there any examples of clustering the coordinates on the server side - preferably in Python/Django?</p>
<p>The way I would like this to work is to gradually index the markers based on their proximity (radius) and zoom level. </p>
<p>In other words, when a new user registers he/she is automatically assigned to a certain 'group' of markers that are close to each other, thus increasing that group's counter. What's being sent to the server is just a small number of 'groups'. Only when the zoom level/scale of the map is 1:1 are actual users shown on the map. </p>
<p>That way the client side will have to deal only with 10-50 markers per request/zoom level. </p>
| 7 | 2009-04-30T01:55:16Z | 3,486,779 | <p>You might want to take a look at the <a href="http://en.wikipedia.org/wiki/DBSCAN" rel="nofollow">DBSCAN</a> and <a href="http://en.wikipedia.org/wiki/OPTICS_algorithm" rel="nofollow">OPTICS</a> pages on wikipedia, these looks very suitable for clustering places on a map. There is also a page about <a href="http://en.wikipedia.org/wiki/Cluster_analysis" rel="nofollow">Cluster Analysis</a> that shows all the possible algorithms you can use, most would be trivial to implement using the language of your choice.</p>
<p>With 28k+ points, you might want to skip django and just jump into C/C++ directly, and surely not expect this to get calculated in real-time in response to web requests.</p>
| 1 | 2010-08-15T09:02:07Z | [
"python",
"django",
"json",
"google-maps",
"markerclusterer"
] |
Server Side Google Markers Clustering - Python/Django | 805,072 | <p>After experimenting with client side approach to clustering large numbers of Google markers I decided that it won't be possible for my project (social network with 28,000+ users).</p>
<p>Are there any examples of clustering the coordinates on the server side - preferably in Python/Django?</p>
<p>The way I would like this to work is to gradually index the markers based on their proximity (radius) and zoom level. </p>
<p>In other words, when a new user registers he/she is automatically assigned to a certain 'group' of markers that are close to each other, thus increasing that group's counter. What's being sent to the server is just a small number of 'groups'. Only when the zoom level/scale of the map is 1:1 are actual users shown on the map. </p>
<p>That way the client side will have to deal only with 10-50 markers per request/zoom level. </p>
| 7 | 2009-04-30T01:55:16Z | 15,533,333 | <p>You can try my server-side clustering django app:</p>
<p><a href="https://github.com/biodiv/anycluster" rel="nofollow">https://github.com/biodiv/anycluster</a></p>
<p>It prvides a kmeans and a grid cluster.</p>
| 0 | 2013-03-20T19:50:51Z | [
"python",
"django",
"json",
"google-maps",
"markerclusterer"
] |
Python "Task Server" | 805,120 | <p>My question is: which python framework should I use to build my server?</p>
<p>Notes:</p>
<ul>
<li>This server talks HTTP with its clients: GET and POST (via pyAMF)</li>
<li>Clients "submit" "tasks" for processing and, then, sometime later, retrieve the associated "task_result" </li>
<li>submit and retrieve might be separated by days - different HTTP connections</li>
<li>The "task" is a lump of XML describing a problem to be solved, and a "task_result" is a lump of XML describing an answer. </li>
<li>When a server gets a "task", it queues it for processing</li>
<li>The server manages this queue and, when tasks get to the top, organises that they are processed. </li>
<li>the processing is performed by a long-running (15 mins?) external program (via subprocess) which is fed the task XML and which produces a "task_result" lump of XML which the server picks up and stores (for later client retrieval).</li>
<li>it serves a couple of basic HTML pages showing the Queue and processing status (admin purposes only)</li>
</ul>
<p>I've experimented with twisted.web, using SQLite as the database and threads to handle the long running processes. </p>
<p>But I can't help feeling that I'm missing a simpler solution. Am I? If you were faced with this, what technology mix would you use? </p>
| 4 | 2009-04-30T02:19:43Z | 805,126 | <p>It seems any python web framework will suit your needs. I work with a similar system on a daily basis and I can tell you, your solution with threads and SQLite for queue storage is about as simple as you're going to get. </p>
<p>Assuming order doesn't matter in your queue, then threads should be acceptable. It's important to make sure you don't create race conditions with your queues or, for example, have two of the same job type running simultaneously. If this is the case, I'd suggest a single threaded application to do the items in the queue one by one.</p>
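Guarding shared queue state with a lock is one way to avoid the race conditions mentioned — a minimal sketch with hypothetical names (not from the answer):

```python
import threading

queue_lock = threading.Lock()
job_queue = ["task-1", "task-2", "task-3"]

def take_next_job():
    # hold the lock while inspecting and mutating the shared queue,
    # so two worker threads can never pop the same job
    with queue_lock:
        if job_queue:
            return job_queue.pop(0)
        return None
```

Each worker thread calls `take_next_job()` and processes whatever it gets back, or sleeps when the queue is empty.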
| 0 | 2009-04-30T02:24:56Z | [
"python"
] |
Python "Task Server" | 805,120 | <p>My question is: which python framework should I use to build my server?</p>
<p>Notes:</p>
<ul>
<li>This server talks HTTP with its clients: GET and POST (via pyAMF)</li>
<li>Clients "submit" "tasks" for processing and, then, sometime later, retrieve the associated "task_result" </li>
<li>submit and retrieve might be separated by days - different HTTP connections</li>
<li>The "task" is a lump of XML describing a problem to be solved, and a "task_result" is a lump of XML describing an answer. </li>
<li>When a server gets a "task", it queues it for processing</li>
<li>The server manages this queue and, when tasks get to the top, organises that they are processed. </li>
<li>the processing is performed by a long-running (15 mins?) external program (via subprocess) which is fed the task XML and which produces a "task_result" lump of XML which the server picks up and stores (for later client retrieval).</li>
<li>it serves a couple of basic HTML pages showing the Queue and processing status (admin purposes only)</li>
</ul>
<p>I've experimented with twisted.web, using SQLite as the database and threads to handle the long running processes. </p>
<p>But I can't help feeling that I'm missing a simpler solution. Am I? If you were faced with this, what technology mix would you use? </p>
| 4 | 2009-04-30T02:19:43Z | 805,149 | <p>I'd suggest the following. (Since it's what we're doing.)</p>
<p>A simple WSGI server (<a href="http://docs.python.org/library/wsgiref.html" rel="nofollow">wsgiref</a> or <a href="http://werkzeug.pocoo.org/" rel="nofollow">werkzeug</a>). The HTTP requests coming in will naturally form a queue. No further queueing needed. You get a request, you spawn the subprocess as a child and wait for it to finish. A simple list of children is about all you need. </p>
<p>I used a modification of the main "serve forever" loop in <code>wsgiref</code> to periodically poll all of the children to see how they're doing. </p>
<p>A simple SQLite database can track request status. Even this may be overkill because your XML inputs and results can just lay around in the file system.</p>
<p>That's it. Queueing and threads don't really enter into it. A single long-running external process is too complex to coordinate. It's simplest if each request is a separate, stand-alone, child process. </p>
<p>If you get immense bursts of requests, you might want a simple governor to prevent creating thousands of children. The governor could be a simple queue, built using a list with append() and pop(). Every request goes in, but only requests that fit within some "max number of children" limit are taken out.</p>
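Such a governor could be sketched like this (hypothetical class; <code>spawn</code> and <code>is_done</code> stand in for starting and reaping child processes):

```python
class Governor(object):
    """Cap the number of concurrently running child processes (sketch)."""

    def __init__(self, max_children=4):
        self.max_children = max_children
        self.pending = []    # requests waiting their turn
        self.children = []   # handles for running subprocesses

    def submit(self, request):
        self.pending.append(request)

    def poll(self, spawn, is_done):
        # reap finished children, then start pending work up to the cap
        self.children = [c for c in self.children if not is_done(c)]
        while self.pending and len(self.children) < self.max_children:
            self.children.append(spawn(self.pending.pop(0)))
```

The "serve forever" loop would call `poll()` periodically, with `spawn` launching the external program via `subprocess` and `is_done` checking its exit status.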
| 1 | 2009-04-30T02:35:54Z | [
"python"
] |
Python "Task Server" | 805,120 | <p>My question is: which python framework should I use to build my server?</p>
<p>Notes:</p>
<ul>
<li>This server talks HTTP with its clients: GET and POST (via pyAMF)</li>
<li>Clients "submit" "tasks" for processing and, then, sometime later, retrieve the associated "task_result" </li>
<li>submit and retrieve might be separated by days - different HTTP connections</li>
<li>The "task" is a lump of XML describing a problem to be solved, and a "task_result" is a lump of XML describing an answer. </li>
<li>When a server gets a "task", it queues it for processing</li>
<li>The server manages this queue and, when tasks get to the top, organises that they are processed. </li>
<li>the processing is performed by a long-running (15 mins?) external program (via subprocess) which is fed the task XML and which produces a "task_result" lump of XML which the server picks up and stores (for later client retrieval).</li>
<li>it serves a couple of basic HTML pages showing the Queue and processing status (admin purposes only)</li>
</ul>
<p>I've experimented with twisted.web, using SQLite as the database and threads to handle the long running processes. </p>
<p>But I can't help feeling that I'm missing a simpler solution. Am I? If you were faced with this, what technology mix would you use? </p>
| 4 | 2009-04-30T02:19:43Z | 805,175 | <p>I'd recommend using an existing message queue. There are many to choose from (see below), and they vary in complexity and robustness. </p>
<p>Also, avoid threads: let your processing tasks run in a different process (why do they have to run in the webserver?)</p>
<p>By using an existing message queue, you only need to worry about producing messages (in your webserver) and consuming them (in your long running tasks). As your system grows you'll be able to scale up by just adding webservers and consumers, and worry less about your queuing infrastructure.</p>
<p>Some popular python implementations of message queues:</p>
<ul>
<li><a href="http://code.google.com/p/stomper/" rel="nofollow">http://code.google.com/p/stomper/</a></li>
<li><a href="http://code.google.com/p/pyactivemq/" rel="nofollow">http://code.google.com/p/pyactivemq/</a></li>
<li><a href="http://xph.us/software/beanstalkd/" rel="nofollow">http://xph.us/software/beanstalkd/</a></li>
</ul>
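As a rough stand-in for a real broker, the producer/consumer split looks like this with Python 3's standard-library queue (hypothetical sketch; the listed brokers replace the in-process queue):

```python
import threading
import queue

tasks = queue.Queue()  # the webserver produces into this
results = {}           # the consumer stores task_results here

def consumer():
    # pull tasks until the sentinel arrives, "processing" each one
    while True:
        task_id, payload = tasks.get()
        if task_id is None:
            break
        results[task_id] = payload.upper()  # stand-in for real processing
        tasks.task_done()

worker = threading.Thread(target=consumer)
worker.start()
tasks.put((1, "task xml"))
tasks.put((2, "more xml"))
tasks.put((None, None))  # sentinel: shut the consumer down
worker.join()
```

With a real message queue, the producer and consumer would live in separate processes (or machines), which is what makes the scaling story simple.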
| 2 | 2009-04-30T02:46:10Z | [
"python"
] |
Python "Task Server" | 805,120 | <p>My question is: which python framework should I use to build my server?</p>
<p>Notes:</p>
<ul>
<li>This server talks HTTP with its clients: GET and POST (via pyAMF)</li>
<li>Clients "submit" "tasks" for processing and, then, sometime later, retrieve the associated "task_result" </li>
<li>submit and retrieve might be separated by days - different HTTP connections</li>
<li>The "task" is a lump of XML describing a problem to be solved, and a "task_result" is a lump of XML describing an answer. </li>
<li>When a server gets a "task", it queues it for processing</li>
<li>The server manages this queue and, when tasks get to the top, organises that they are processed. </li>
<li>the processing is performed by a long-running (15 mins?) external program (via subprocess) which is fed the task XML and which produces a "task_result" lump of XML which the server picks up and stores (for later client retrieval).</li>
<li>it serves a couple of basic HTML pages showing the Queue and processing status (admin purposes only)</li>
</ul>
<p>I've experimented with twisted.web, using SQLite as the database and threads to handle the long running processes. </p>
<p>But I can't help feeling that I'm missing a simpler solution. Am I? If you were faced with this, what technology mix would you use? </p>
| 4 | 2009-04-30T02:19:43Z | 805,229 | <p>My reaction is to suggest Twisted, but you've already looked at this. Still, I stick by my answer. Without knowing you personal pain-points, I can at least share some things that helped me reduce almost all of the deferred-madness that arises when you have several dependent, blocking actions you need to perform for a client.</p>
<p>Inline callbacks (lightly documented here: <a href="http://twistedmatrix.com/documents/8.2.0/api/twisted.internet.defer.html" rel="nofollow">http://twistedmatrix.com/documents/8.2.0/api/twisted.internet.defer.html</a>) provide a means to make long chains of deferreds much more readable (to the point of looking like straight-line code). There is an excellent example of the complexity reduction this affords here: <a href="http://blog.mekk.waw.pl/archives/14-Twisted-inlineCallbacks-and-deferredGenerator.html" rel="nofollow">http://blog.mekk.waw.pl/archives/14-Twisted-inlineCallbacks-and-deferredGenerator.html</a></p>
<p>You don't always have to get your bulk processing to integrate nicely with Twisted. Sometimes it is easier to break a large piece of your program off into a stand-alone, easily testable/tweakable/implementable command line tool and have Twisted invoke this tool in another process. Twisted's <code>ProcessProtocol</code> provides a fairly flexible way of launching and interacting with external helper programs. Furthermore, if you suddenly decide you want to <em>cloudify</em> your application, it is not all that big of a deal to use a <code>ProcessProtocol</code> to simply run your bulk processing on a remote server (random EC2 instances perhaps) via <code>ssh</code>, assuming you have the keys setup already.</p>
| 1 | 2009-04-30T03:20:11Z | [
"python"
] |
Python "Task Server" | 805,120 | <p>My question is: which python framework should I use to build my server?</p>
<p>Notes:</p>
<ul>
<li>This server talks HTTP with its clients: GET and POST (via pyAMF)</li>
<li>Clients "submit" "tasks" for processing and, then, sometime later, retrieve the associated "task_result" </li>
<li>submit and retrieve might be separated by days - different HTTP connections</li>
<li>The "task" is a lump of XML describing a problem to be solved, and a "task_result" is a lump of XML describing an answer. </li>
<li>When a server gets a "task", it queues it for processing</li>
<li>The server manages this queue and, when tasks get to the top, organises that they are processed. </li>
<li>the processing is performed by a long-running (15 mins?) external program (via subprocess) which is fed the task XML and which produces a "task_result" lump of XML which the server picks up and stores (for later client retrieval).</li>
<li>it serves a couple of basic HTML pages showing the Queue and processing status (admin purposes only)</li>
</ul>
<p>I've experimented with twisted.web, using SQLite as the database and threads to handle the long running processes. </p>
<p>But I can't help feeling that I'm missing a simpler solution. Am I? If you were faced with this, what technology mix would you use? </p>
| 4 | 2009-04-30T02:19:43Z | 1,556,571 | <p>You can have a look at Celery, a distributed task queue for Python.</p>
| 1 | 2009-10-12T20:03:54Z | [
"python"
] |
How can I get the results of a Perl script in Python script? | 805,160 | <p>I have one script in Perl and the other in Python. I need to get the results of Perl in Python and then give the final report. The results from Perl can be scalar variable, hash variable, or an array.</p>
<p>Please let me know as soon as possible regarding this.</p>
| 1 | 2009-04-30T02:39:31Z | 805,185 | <p>You could serialize the results to some string format and print them to standard output in the Perl script. Then, from Python, call the Perl script and capture its stdout in a variable.</p>
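<p>A minimal sketch of that stdout round-trip (not from the original answer): JSON works well as the serialization format, since Perl's JSON CPAN module and Python's stdlib <code>json</code> both speak it. Here a Python one-liner stands in for the Perl script so the snippet is self-contained; the real call and the Perl-side <code>encode_json</code> line are noted in the comments.</p>

```python
import json
import subprocess
import sys

# Stand-in for ["perl", "results.pl"]; a real Perl script would print a
# JSON-serialized hash to stdout, e.g.: print encode_json(\%results);
fake_perl = [
    sys.executable,
    "-c",
    'print(\'{"status": "ok", "count": 3}\')',
]

# Run the "Perl" script and capture its standard output as text
proc = subprocess.run(fake_perl, capture_output=True, text=True, check=True)
results = json.loads(proc.stdout)  # a dict mirroring the Perl hash
print(results["status"], results["count"])
```

<p>Scalars, arrays, and hashes on the Perl side come back as strings/numbers, lists, and dicts on the Python side, which matches the data types the question asks about.</p>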
| 2 | 2009-04-30T02:51:05Z | [
"python",
"perl"
] |
How can I get the results of a Perl script in Python script? | 805,160 | <p>I have one script in Perl and the other in Python. I need to get the results of Perl in Python and then give the final report. The results from Perl can be scalar variable, hash variable, or an array.</p>
<p>Please let me know as soon as possible regarding this.</p>
| 1 | 2009-04-30T02:39:31Z | 805,220 | <p>Take a look at <a href="http://pyyaml.org/" rel="nofollow">PyYAML</a> in Python and <a href="http://search.cpan.org/dist/YAML/lib/YAML.pm" rel="nofollow">YAML</a> in Perl.</p>
| 3 | 2009-04-30T03:12:25Z | [
"python",
"perl"
] |
How can I get the results of a Perl script in Python script? | 805,160 | <p>I have one script in Perl and the other in Python. I need to get the results of Perl in Python and then give the final report. The results from Perl can be scalar variable, hash variable, or an array.</p>
<p>Please let me know as soon as possible regarding this.</p>
| 1 | 2009-04-30T02:39:31Z | 805,226 | <p>Use the <a href="http://docs.python.org/library/subprocess.html" rel="nofollow">subprocess</a> module to run your Perl script to capture its output:</p>
<p>You can format the output however you choose in either script, and use Python to print the final report. For example: your Perl script can output XML which can be parsed by the Python script and then printed using a different format.</p>
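<p>A short sketch of the XML variant (the script name and XML shape are made up for illustration; a Python one-liner stands in for the Perl invocation so it runs anywhere):</p>

```python
import subprocess
import sys
import xml.etree.ElementTree as ET

# A real call would look like:
#   subprocess.check_output(["perl", "report.pl"], text=True)
xml_out = subprocess.check_output(
    [sys.executable, "-c", "print('<report><total>42</total></report>')"],
    text=True,
)

root = ET.fromstring(xml_out)   # parse the script's XML output
total = root.findtext("total")  # pull a value out for the final report
print(total)
```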
| 6 | 2009-04-30T03:15:36Z | [
"python",
"perl"
] |
How can I get the results of a Perl script in Python script? | 805,160 | <p>I have one script in Perl and the other in Python. I need to get the results of Perl in Python and then give the final report. The results from Perl can be scalar variable, hash variable, or an array.</p>
<p>Please let me know as soon as possible regarding this.</p>
| 1 | 2009-04-30T02:39:31Z | 808,866 | <p>Perhaps you want the answer to <a href="http://stackoverflow.com/questions/389945/how-can-i-read-perl-data-structures-from-python">How can I read Perl data structures from Python?</a>.</p>
| 2 | 2009-04-30T20:25:20Z | [
"python",
"perl"
] |
Learning Graphical Layout Algorithms | 805,356 | <p>During my day-to-day work, I tend to come across data that I want to visualize in a custom manner. For example, automatically creating a call graph similar to a UML sequence diagram, display digraphs, or visualizing data from a database (scatter plots, 3D contours, etc).</p>
<p>For graphs, I tend to use GraphViz. For UML-like plots and 3D plots, I would like to write my own software to run under Linux.</p>
<p>I typically program in C++ and prototype in Python.</p>
<p>What books have people used to learn these basic graphical algorithms? I've seen some nice posts on force-directed layout and various block-style layout algorithms based upon the Cutting and Packing problems -- these are great starts, but I would like a more beginners guide and overview before I jump in.</p>
<ul>
<li><a href="http://stackoverflow.com/questions/552854/how-to-use-a-bgl-directed-graph-as-an-undirected-one-for-use-in-layout-algorithm">Directed Graph Layout</a></li>
<li><a href="http://stackoverflow.com/questions/713701/force-directed-layout-in-c">Force directed layout</a></li>
</ul>
| 7 | 2009-04-30T04:39:47Z | 805,435 | <p>Here are some sources,</p>
<ul>
<li><a href="http://rads.stackoverflow.com/amzn/click/0827313748" rel="nofollow">Graphic Layout and Design (Paperback)</a>.</li>
<li><a href="http://www.hpl.hp.com/techreports/2005/HPL-2005-6R1.pdf" rel="nofollow">Active Layout Engine: Algorithms and Applications in Variable Data Printing</a></li>
</ul>
| 2 | 2009-04-30T05:22:13Z | [
"c++",
"python",
"layout",
"graphics",
"visualization"
] |
What is the best way to access stored procedures in Django's ORM | 805,393 | <p>I am designing a fairly complex database, and know that some of my queries will be far outside the scope of Django's ORM. Has anyone integrated SP's with Django's ORM successfully? If so, what RDBMS and how did you do it? </p>
| 17 | 2009-04-30T05:03:05Z | 805,419 | <p><a href="http://www.djangosnippets.org/snippets/118">Django Using Stored Procedure</a> - this will give you some idea. </p>
| 5 | 2009-04-30T05:13:57Z | [
"python",
"sql",
"django",
"stored-procedures",
"django-models"
] |
What is the best way to access stored procedures in Django's ORM | 805,393 | <p>I am designing a fairly complex database, and know that some of my queries will be far outside the scope of Django's ORM. Has anyone integrated SP's with Django's ORM successfully? If so, what RDBMS and how did you do it? </p>
| 17 | 2009-04-30T05:03:05Z | 805,423 | <p>You have to use the connection utility in Django:</p>
<pre><code>from django.db import connection
cursor = connection.cursor()
cursor.execute("SQL STATEMENT CAN BE ANYTHING")
</code></pre>
<p>then you can fetch the data:</p>
<pre><code>cursor.fetchone()
</code></pre>
<p>or:</p>
<pre><code>cursor.fetchall()
</code></pre>
<p>More info here: <a href="http://docs.djangoproject.com/en/dev/topics/db/sql/">http://docs.djangoproject.com/en/dev/topics/db/sql/</a></p>
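<p>Django's <code>connection.cursor()</code> follows the Python DB-API, so the same execute/fetch pattern can be exercised outside Django with the stdlib <code>sqlite3</code> module (an in-memory database is used here purely for illustration; the table and rows are made up):</p>

```python
import sqlite3

# DB-API cursor, same shape as Django's connection.cursor()
conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("CREATE TABLE task (id INTEGER, name TEXT)")
cursor.execute("INSERT INTO task VALUES (1, 'report')")

cursor.execute("SELECT id, name FROM task")
row = cursor.fetchone()   # first row as a tuple
print(row)

cursor.execute("SELECT id, name FROM task")
rows = cursor.fetchall()  # all rows as a list of tuples
print(rows)
```

<p>Note that SQLite itself has no stored procedures; the point of the sketch is only the cursor/fetch interface, which is what you use against a backend that does.</p>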
| 15 | 2009-04-30T05:16:28Z | [
"python",
"sql",
"django",
"stored-procedures",
"django-models"
] |
What is the best way to access stored procedures in Django's ORM | 805,393 | <p>I am designing a fairly complex database, and know that some of my queries will be far outside the scope of Django's ORM. Has anyone integrated SP's with Django's ORM successfully? If so, what RDBMS and how did you do it? </p>
| 17 | 2009-04-30T05:03:05Z | 805,444 | <p>If you want to look at an actual running project that uses SPs, check out <a href="https://secure.caktusgroup.com/projects/minibooks/wiki" rel="nofollow">minibooks</a>. It has a good deal of custom SQL and uses Postgres PL/pgSQL for stored procedures. I think they're going to remove the SPs eventually, though (justification in <a href="https://secure.caktusgroup.com/projects/minibooks/ticket/92" rel="nofollow">trac ticket 92</a>).</p>
| 2 | 2009-04-30T05:25:06Z | [
"python",
"sql",
"django",
"stored-procedures",
"django-models"
] |
What is the best way to access stored procedures in Django's ORM | 805,393 | <p>I am designing a fairly complex database, and know that some of my queries will be far outside the scope of Django's ORM. Has anyone integrated SP's with Django's ORM successfully? If so, what RDBMS and how did you do it? </p>
| 17 | 2009-04-30T05:03:05Z | 806,302 | <p>Don't.</p>
<p>Seriously.</p>
<p>Move the stored procedure logic into your model where it belongs. </p>
<p>Putting some code in Django and some code in the database is a maintenance nightmare. I've spent too many of my 30+ years in IT trying to clean up this kind of mess. </p>
| 0 | 2009-04-30T10:25:37Z | [
"python",
"sql",
"django",
"stored-procedures",
"django-models"
] |
What is the best way to access stored procedures in Django's ORM | 805,393 | <p>I am designing a fairly complex database, and know that some of my queries will be far outside the scope of Django's ORM. Has anyone integrated SP's with Django's ORM successfully? If so, what RDBMS and how did you do it? </p>
| 17 | 2009-04-30T05:03:05Z | 820,130 | <p>We (musicpictures.com / eviscape.com) wrote that django snippet but its not the whole story (actually that code was only tested on Oracle at that time).</p>
<p>Stored procedures make sense when you want to reuse tried and tested SP code or where one SP call will be faster than multiple calls to the database - or where security requires moderated access to the database - or where the queries are very complicated / multistep. We're using a hybrid model/SP approach against both Oracle and Postgres databases.</p>
<p>The trick is to make it easy to use and keep it "django" like. We use a make_instance function which takes the result of cursor and creates instances of a model populated from the cursor. This is nice because the cursor might return additional fields. Then you can use those instances in your code / templates much like normal django model objects.</p>
<pre class="lang-py prettyprint-override"><code>def make_instance(instance, values):
    '''
    Copied from eviscape.com

    Generates an instance for dict data coming from an SP.

    Expects:
        instance - empty instance of the model to generate
        values   - dictionary from a stored procedure with keys that are
                   named like the model's attributes

    Use like:
        evis = make_instance(Evis(), evis_dict_from_SP)

    >>> make_instance(Evis(), {'evi_id': '007', 'evi_subject': 'J. Bond, Architect'})
    <Evis: J. Bond, Architect>
    '''
    attributes = filter(lambda x: not x.startswith('_'), instance.__dict__.keys())
    for a in attributes:
        try:
            # field names from the Oracle SP are UPPER CASE;
            # we want to put PIC_ID in pic_id etc.
            setattr(instance, a, values[a.upper()])
            del values[a.upper()]
        except KeyError:
            # the SP result has no value for this attribute; keep the default
            pass
    # add any values that are not in the model as well
    for v in values.keys():
        setattr(instance, v, values[v])
        #print 'setting %s to %s' % (v, values[v])
    return instance
</code></pre>
<p># Use it like this: </p>
<pre><code>pictures = [make_instance(Pictures(), item) for item in picture_dict]
</code></pre>
<p># And here are some helper functions:</p>
<pre><code>def call_an_sp(self, var):
    cursor = connection.cursor()
    cursor.callproc("fn_sp_name", (var,))
    return self.fn_generic(cursor)

def fn_generic(self, cursor):
    msg = cursor.fetchone()[0]
    cursor.execute('FETCH ALL IN "%s"' % msg)
    thing = create_dict_from_cursor(cursor)
    cursor.close()
    return thing

def create_dict_from_cursor(cursor):
    rows = cursor.fetchall()
    # DEBUG settings (used to) affect what gets returned.
    if DEBUG:
        desc = [item[0] for item in cursor.cursor.description]
    else:
        desc = [item[0] for item in cursor.description]
    return [dict(zip(desc, item)) for item in rows]
</code></pre>
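<p>The <code>create_dict_from_cursor</code> trick relies only on the DB-API's <code>cursor.description</code> attribute, so it can be tried with the stdlib <code>sqlite3</code> module (the column aliases below are made up for illustration):</p>

```python
import sqlite3

# Any DB-API cursor exposes .description after an execute()
cur = sqlite3.connect(":memory:").cursor()
cur.execute("SELECT 7 AS pic_id, 'cover' AS pic_title")

desc = [col[0] for col in cur.description]           # column names
rows = [dict(zip(desc, row)) for row in cur.fetchall()]
print(rows)  # [{'pic_id': 7, 'pic_title': 'cover'}]
```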
<p>cheers, Simon.</p>
| 15 | 2009-05-04T13:36:05Z | [
"python",
"sql",
"django",
"stored-procedures",
"django-models"
] |
What is the best way to access stored procedures in Django's ORM | 805,393 | <p>I am designing a fairly complex database, and know that some of my queries will be far outside the scope of Django's ORM. Has anyone integrated SP's with Django's ORM successfully? If so, what RDBMS and how did you do it? </p>
| 17 | 2009-04-30T05:03:05Z | 3,675,814 | <p>I guess the improved raw sql queryset support in Django 1.2 can make this easier as you wouldn't have to roll your own make_instance type code. </p>
| 0 | 2010-09-09T10:44:45Z | [
"python",
"sql",
"django",
"stored-procedures",
"django-models"
] |
What is the best way to access stored procedures in Django's ORM | 805,393 | <p>I am designing a fairly complex database, and know that some of my queries will be far outside the scope of Django's ORM. Has anyone integrated SP's with Django's ORM successfully? If so, what RDBMS and how did you do it? </p>
| 17 | 2009-04-30T05:03:05Z | 31,044,993 | <p>There is a good example: <a href="https://djangosnippets.org/snippets/118/" rel="nofollow">https://djangosnippets.org/snippets/118/</a></p>
<pre><code>from django.db import connection
cursor = connection.cursor()
ret = cursor.callproc("MY_UTIL.LOG_MESSAGE", (control_in, message_in))  # calls the PROCEDURE named LOG_MESSAGE which resides in the MY_UTIL package
cursor.close()
</code></pre>
| 1 | 2015-06-25T08:27:15Z | [
"python",
"sql",
"django",
"stored-procedures",
"django-models"
] |