content stringlengths 85 101k | title stringlengths 0 150 | question stringlengths 15 48k | answers list | answers_scores list | non_answers list | non_answers_scores list | tags list | name stringlengths 35 137 |
|---|---|---|---|---|---|---|---|---|
Q:
How to retrieve result of calling a script from within a Python function?
I would like to call an external script from within a function, for instance:
import subprocess
def call_script(script):
    subprocess.call(script)
    return answer #retrieving the result is the part I'm struggling with
call_script("/user/bin/calc_delta.py")
The script calc_delta.py simply prints the result when it is done. How can I assign the result of that script to a variable? Thank you.
A:
Instead of using subprocess.call you should use Popen and call communicate on it.
That will allow you to read stdout and stderr. You can also input data with stdin.
Example from the docs http://docs.python.org/library/subprocess.html#replacing-bin-sh-shell-backquote:
output = Popen(["mycmd", "myarg"], stdout=PIPE).communicate()[0]
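Applied to the question's call_script, this might look like the following sketch (a trivial command stands in for calc_delta.py, which isn't available here):

```python
import subprocess
import sys

def call_script(cmd):
    # stdout=PIPE captures what the child prints; communicate() reads it all
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    return out

# A trivial stand-in for /user/bin/calc_delta.py:
answer = call_script([sys.executable, "-c", "print(6 * 7)"])
print(answer.strip())  # b'42' (bytes on Python 3)
```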
A:
You should look at subprocess.Popen (popen)
For instance (from the docs):
pipe = Popen("cmd", shell=True, bufsize=bufsize, stdout=PIPE).stdout
subprocess.call is only going to give you the response code for your call, not the output.
A:
In Python ≥2.7 / ≥3.1, you could use subprocess.check_output to convert the stdout into a str.
answer = subprocess.check_output(['/usr/bin/calc_delta.py'])
There are also subprocess.getstatusoutput and subprocess.getoutput on Python ≥3.1 if what you need is a shell command, but they are supported on *NIX only.
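For illustration (again with a harmless command substituting for the question's script):

```python
import subprocess
import sys

# check_output returns the child's stdout as bytes (a str on Python 2)
raw = subprocess.check_output([sys.executable, "-c", "print('3.14')"])
answer = raw.decode().strip()
print(answer)  # 3.14
```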
| How to retrieve result of calling a script from within a Python function? | I would like to call an external script from within a function, for instance:
import subprocess
def call_script(script):
    subprocess.call(script)
    return answer #retrieving the result is the part I'm struggling with
call_script("/user/bin/calc_delta.py")
The script calc_delta.py simply prints the result when it is done. How can I assign the result of that script to a variable? Thank you.
| [
"Instead of using subprocess.call you should use Popen and call communicate on it.\nThat will allow you to read stdout and stderr. You can also input data with stdin.\nExample from the docs http://docs.python.org/library/subprocess.html#replacing-bin-sh-shell-backquote:\noutput = Popen([\"mycmd\", \"myarg\"], stdou... | [
3,
2,
2
] | [] | [] | [
"function",
"python"
] | stackoverflow_0003302202_function_python.txt |
Q:
Create a reference to a variable (similar to PHP's "=&")?
In PHP one can create a reference variable, so that two named variables can look at the same value:
$a = 1;
$b =& $a;
echo $a; // 1
echo $b; // 1
$b = 2;
echo $a; // 2
I'm looking to achieve something similar in Python. Specifically, I want to create a reference to an object's property, eg:
class Foo(object):
    @property
    def bar(self): return some_calculated_value
foo_instance = Foo()
ref = foo_instance.bar
# so that 'ref' is referencing the bar property on foo, calculated when used.
Is this possible?
A:
There is some more magic that can be done in Python (not that I would recommend it and it will require digging on your part ;-), but using a closure may be sufficient for your needs:
get_x = lambda: foo_instance.bar
get_x() # yahoo!
Edit, for those wanting "update support", it's all about the closures:
def wrap_prop(obj, pname):
    def _access(*v):
        if not len(v):
            return getattr(obj, pname)
        else:
            setattr(obj, pname, v[0])
    return _access
class z(object):
    pass
access = wrap_prop(z(), "bar")
access(20)
access() # yahoo! \o/
Without stepping outside the bounds of normally (for me :-) accepted Python, this could also be written to return an object with forwarding/proxy property, imagine:
access = wrap_prop(z(), "bar")
access.val = 20
access.val # yahoo \o/
Some interesting links:
http://adam.gomaa.us/blog/2008/aug/11/the-python-property-builtin/
http://code.activestate.com/recipes/576787-alias-class-attributes/
http://www.faqs.org/docs/diveintopython/dialect_locals.html
http://docs.python.org/library/functions.html (see "property")
A:
No, what you want is not possible. All names in Python are references, but they're always references to objects, not references to variables. You can't have a local name that, when "used", re-evaluates what it was that created it.
What you can have is an object that acts as a proxy, delegating most operations on it to operations on whatever it is referencing internally. However, this isn't always transparent, and there are some operations you can't proxy. It's also frequently inconvenient to have the thing you are proxying to change during an operation. All in all, it's usually better to not try this.
Instead, in your particular case, you would keep around foo_instance instead of ref, and just use foo_instance.bar whenever you wanted to "use" the ref reference. For more complicated situations, or for more abstraction, you could have a separate type with a property that did just what you wanted, or a function (usually a closure) that knew what to get and where.
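A minimal sketch of that last suggestion: keep the instance around and wrap the attribute access in a closure so it is re-evaluated on every use (the counting property below is invented just to make the re-evaluation visible):

```python
class Foo(object):
    def __init__(self):
        self._calls = 0

    @property
    def bar(self):
        # recomputed on every access, like some_calculated_value
        self._calls += 1
        return self._calls

foo_instance = Foo()
ref = lambda: foo_instance.bar  # a "reference" to the property

print(ref())  # 1
print(ref())  # 2 -- each use re-runs the property getter
```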
A:
As others have pointed out, in order to do this, you'll need to keep a reference to the parent object around. Without questioning your reasons for wanting to do this, here's a possible solution:
class PropertyRef:
    def __init__(self, obj, prop_name):
        klass = obj.__class__
        prop = getattr(klass, prop_name)
        if not hasattr(prop, 'fget'):
            raise TypeError('%s is not a property of %r' % (prop_name, klass))
        self.get = lambda: prop.fget(obj)
        if getattr(prop, 'fset'):
            self.set = lambda value: prop.fset(obj, value)
class Foo(object):
    @property
    def bar(self):
        return some_calculated_value
>>> foo_instance = Foo()
>>> ref = PropertyRef(foo_instance, 'bar')
>>> ref.get() # Returns some_calculated_value
>>> ref.set(some_other_value)
>>> ref.get() # Returns some_other_value
BTW, in the example you gave, the property is read-only (doesn't have a setter), so you wouldn't be able to set it anyway.
If this feels like an unnecessary amount of code, it probably is -- I'm almost certain you can find a better solution to your use case, whatever it is.
| Create a reference to a variable (similar to PHP's "=&")? | In PHP one can create a reference variable, so that two named variables can look at the same value:
$a = 1;
$b =& $a;
echo $a; // 1
echo $b; // 1
$b = 2;
echo $a; // 2
I'm looking to achieve something similar in Python. Specifically, I want to create a reference to an object's property, eg:
class Foo(object):
    @property
    def bar(self): return some_calculated_value
foo_instance = Foo()
ref = foo_instance.bar
# so that 'ref' is referencing the bar property on foo, calculated when used.
Is this possible?
| [
"There is some more magic that can be done in Python (not that I would recommend it and it will require digging on your part ;-), but using a closure may be sufficient for your needs:\nget_x = lambda: foo_instance.bar\nget_x() # yahoo!\n\nEdit, for those wanting \"update support\", it's all about the closures:\ndef... | [
5,
1,
1
] | [] | [] | [
"python",
"reference"
] | stackoverflow_0003301805_python_reference.txt |
Q:
Locality Sensitive Hashing - finding probabilities and values for R
Thanks to those who've answered my previous questions and gotten me this far.
I have a table of about 25,000 vectors, each with 48 dimensions, with values ranging from 0-255.
I am attempting to develop a Locality Sensitive Hash (http://en.wikipedia.org/wiki/Locality-sensitive_hashing) algorithm for finding near-neighbor or nearest-neighbor points.
My current LSH function is this:
from math import floor
from random import normalvariate, uniform

def lsh(vector, r = 1.0, a = None, b = None):
    if not a:
        a = [normalvariate(10, 4) for i in range(48)]
    if not b:
        b = uniform(0, r)
    hashVal = floor((sum([a[i]*vector[i] for i in range(48)]) + b)/r)
    return int(hashVal)
My questions at this point are:
A: Are there optimal values for the "normalvariate(10, 4)" portion of my code? This is Python's built-in random.normalvariate (http://docs.python.org/library/random.html#random.normalvariate) function, which I am using to produce the "d dimensional vector with entries chosen independently from a stable distribution". From my experimenting, the values don't seem to matter too much.
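For comparison, the standard p-stable construction draws the entries of a from N(0, 1) rather than N(10, 4); a sketch follows (the deterministic a and b passed at the end are only there to make the output checkable, not part of the scheme):

```python
from math import floor
from random import normalvariate, uniform

def lsh_stable(vector, r=1.0, a=None, b=None):
    d = len(vector)
    if a is None:
        # standard normal entries give the 2-stable (Euclidean) hash family
        a = [normalvariate(0, 1) for _ in range(d)]
    if b is None:
        b = uniform(0, r)
    return int(floor((sum(ai * vi for ai, vi in zip(a, vector)) + b) / r))

# With fixed a and b the hash is deterministic: sum = 4.8, 4.8 / 4 -> floor 1
print(lsh_stable([1.0] * 48, r=4.0, a=[0.1] * 48, b=0.0))  # 1
```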
B: At the top of the wikipedia article it states:
if d(p,q) <= R, then h(p) = h(q) with probability at least P1
if d(p,q) >= cR, then h(p) = h(q) with probability at most P2
Is the R value mentioned here also the R value mentioned under the Stable Distributions section. (http://en.wikipedia.org/wiki/Locality-sensitive_hashing#Stable_distributions)
C: Related to my previous question (B). I've discovered that using higher values of R in my hashing function maps my vectors into a smaller range of hash values. Is there a way to optimize my R value?
D: Approximately how many tables might one use?
A:
To those interested. I've found this document (http://web.mit.edu/andoni/www/papers/cSquared.pdf), which has a very detailed, albeit complicated, explanation of how to use LSH for high dimensional spaces.
A:
You might want to check out "MetaOptimize" -- like Stack Overflow for machine learning.
http://metaoptimize.com/qa
Your question isn't really a Python or programming question, and that community might be able to help a bit more.
| Locality Sensitive Hashing - finding probabilities and values for R | Thanks to those who've answered my previous questions and gotten me this far.
I have a table of about 25,000 vectors, each with 48 dimensions, with values ranging from 0-255.
I am attempting to develop a Locality Sensitive Hash (http://en.wikipedia.org/wiki/Locality-sensitive_hashing) algorithm for finding a near neighbor or closest neighbor points.
My current LSH function is this:
from math import floor
from random import normalvariate, uniform

def lsh(vector, r = 1.0, a = None, b = None):
    if not a:
        a = [normalvariate(10, 4) for i in range(48)]
    if not b:
        b = uniform(0, r)
    hashVal = floor((sum([a[i]*vector[i] for i in range(48)]) + b)/r)
    return int(hashVal)
My questions at this point are:
A: Are there optimal values for the "normalvariate(10, 4)" portion of my code? This is Python's built-in random.normalvariate (http://docs.python.org/library/random.html#random.normalvariate) function, which I am using to produce the "d dimensional vector with entries chosen independently from a stable distribution". From my experimenting, the values don't seem to matter too much.
B: At the top of the wikipedia article it states:
if d(p,q) <= R, then h(p) = h(q) with probability at least P1
if d(p,q) >= cR, then h(p) = h(q) with probability at most P2
Is the R value mentioned here also the R value mentioned under the Stable Distributions section. (http://en.wikipedia.org/wiki/Locality-sensitive_hashing#Stable_distributions)
C: Related to my previous question (B). I've discovered that using higher values of R in my hashing function maps my vectors into a smaller range of hash values. Is there a way to optimize my R value?
D: Approximately how many tables might one use?
| [
"To those interested. I've found this document (http://web.mit.edu/andoni/www/papers/cSquared.pdf), which has a very detailed, albeit complicated, explanation of how to use LSH for high dimensional spaces.\n",
"You might want to check out \"MetaOptimize\" -- like Stack Overflow for machine learning.\nhttp://metaopt... | [
2,
2
] | [] | [] | [
"nearest_neighbor",
"python"
] | stackoverflow_0003267166_nearest_neighbor_python.txt |
Q:
What is this line doing? Django-template
Please explain to me what this line is doing:
<a href="{% url video.media.views.channel_browse slug=slug%}">Archie Channel</a>
Actually this:
{% url video.media.views.channel_browse slug=slug%}
I know that it gives me a URL, but where does it come from, and how is it making this URL?
Does this URL depend on the context? If so, which context: the one where this line appears, or one provided by the channel_browse function?
A:
The url template tag uses the reverse() function to look up which url dispatch line has a name=channel_browse, including if it needs to fill in slug=whatever because that particular url dispatch line has a (?P<slug>.*) argument in it that needs to be filled in order to recreate the actual url.
Here's a complete explanation of the whole request system.
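The mechanics can be sketched with a toy reverse() (everything below is invented for illustration; the real url tag consults Django's URLconf and is far more general): given the pattern that maps to the view, it substitutes the named groups and emits the path.

```python
import re

# A hypothetical URLconf regex for channel_browse (not from the question's project)
pattern = r'^channel/(?P<slug>[\w-]+)/$'

def toy_reverse(pattern, **kwargs):
    # Strip the anchors, then replace each named group with its supplied value
    path = pattern.lstrip('^').rstrip('$')
    for name, value in kwargs.items():
        path = re.sub(r'\(\?P<%s>[^)]*\)' % name, value, path)
    return '/' + path

print(toy_reverse(pattern, slug='archie'))  # /channel/archie/
```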
| What is this line doing? Django-template | Please explain to me what this line is doing:
<a href="{% url video.media.views.channel_browse slug=slug%}">Archie Channel</a>
Actually this:
{% url video.media.views.channel_browse slug=slug%}
I know that it gives me a URL, but where does it come from, and how is it making this URL?
Does this URL depend on the context? If so, which context: the one where this line appears, or one provided by the channel_browse function?
| [
"The url template tag uses the reverse() function to look up which url dispatch line has a name=channel_browse, including if it needs to fill in slug=whatever because that particular url dispatch line has a (?P<slug>.*) argument in it that needs to be filled in order to recreate the actual url.\nHere's a complete e... | [
4
] | [] | [] | [
"django",
"django_templates",
"python"
] | stackoverflow_0003302307_django_django_templates_python.txt |
Q:
Splicing NumPy arrays
I am having a problem splicing together two arrays. Let's assume I have two arrays:
a = array([1,2,3])
b = array([4,5,6])
When I do vstack((a,b)) I get
[[1,2,3],[4,5,6]]
and if I do hstack((a,b)) I get:
[1,2,3,4,5,6]
But what I really want is:
[[1,4],[2,5],[3,6]]
How do I accomplish this without using for loops (it needs to be fast)?
A:
Try column_stack()?
http://docs.scipy.org/doc/numpy/reference/generated/numpy.column_stack.html
Alternatively,
vstack((a,b)).T
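A quick check of both suggestions on the question's arrays:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

print(np.column_stack((a, b)))
# [[1 4]
#  [2 5]
#  [3 6]]

# Transposing the vstack result gives the same array:
assert (np.vstack((a, b)).T == np.column_stack((a, b))).all()
```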
A:
column_stack.
A:
I forgot how to transpose NumPy arrays, but you could do:
at = transpose(a)
bt = transpose(b)
result = vstack((at, bt))
A:
>>> c = [list(x) for x in zip(a,b)]
>>> c
[[1, 4], [2, 5], [3, 6]]
or
>>> c = np.array([list(x) for x in zip(a,b)])
>>> c
array([[1, 4],
[2, 5],
[3, 6]])
depending on what you're looking for.
A:
numpy.vstack((a, b)).T
| Splicing NumPy arrays | I am having a problem splicing together two arrays. Let's assume I have two arrays:
a = array([1,2,3])
b = array([4,5,6])
When I do vstack((a,b)) I get
[[1,2,3],[4,5,6]]
and if I do hstack((a,b)) I get:
[1,2,3,4,5,6]
But what I really want is:
[[1,4],[2,5],[3,6]]
How do I accomplish this without using for loops (it needs to be fast)?
| [
"Try column_stack()?\nhttp://docs.scipy.org/doc/numpy/reference/generated/numpy.column_stack.html\nAlternatively,\nvstack((a,b)).T\n\n",
"column_stack.\n",
"I forgot how to transpose NumPy arrays, but you could do:\nat = transpose(a)\nbt = transpose(b)\n\nresult = vstack((a,b))\n\n",
">>> c = [list(x) for x i... | [
7,
4,
0,
0,
0
] | [
"You are probably looking for shape manipulation of the array. You can look in the \"Tentative NumPy Tutorial, Array Creation\".\n"
] | [
-1
] | [
"numpy",
"python"
] | stackoverflow_0003302459_numpy_python.txt |
Q:
Python parsing: lxml to get just part of a tag's text
I'm working in Python with HTML that looks like this. I'm parsing with lxml, but could equally happily use pyquery:
<p><span class="Title">Name</span>Dave Davies</p>
<p><span class="Title">Address</span>123 Greyfriars Road, London</p>
Pulling out 'Name' and 'Address' is dead easy, whatever library I use, but how do I get the remainder of the text - i.e. 'Dave Davies'?
A:
Another method -- using xpath:
>>> from lxml import html
>>> doc = html.parse( file )
>>> doc.xpath( '//span[@class="Title"][text()="Name"]/../self::p/text()' )
['Dave Davies']
>>> doc.xpath( '//span[@class="Title"][text()="Address"]/../self::p/text()' )
['123 Greyfriars Road, London']
A:
Each Element can have a text and a tail attribute (in the link, search for the word "tail"):
import lxml.etree

content = '''\
<p><span class="Title">Name</span>Dave Davies</p>
<p><span class="Title">Address</span>123 Greyfriars Road, London</p>'''

root = lxml.etree.fromstring(content, parser=lxml.etree.HTMLParser())
for elt in root.findall('.//span'):
    print(elt.text, elt.tail)
# ('Name', 'Dave Davies')
# ('Address', '123 Greyfriars Road, London')
A:
Have a look at BeautifulSoup. I've just started using it, so I'm no expert. Off the top of my head:
import BeautifulSoup
text = '''<p><span class="Title">Name</span>Dave Davies</p>
<p><span class="Title">Address</span>123 Greyfriars Road, London</p>'''
soup = BeautifulSoup.BeautifulSoup(text)
paras = soup.findAll('p')
for para in paras:
    spantext = para.span.text
    othertext = para.span.nextSibling
    print spantext, othertext
[Out]: Name Dave Davies
Address 123 Greyfriars Road, London
| Python parsing: lxml to get just part of a tag's text | I'm working in Python with HTML that looks like this. I'm parsing with lxml, but could equally happily use pyquery:
<p><span class="Title">Name</span>Dave Davies</p>
<p><span class="Title">Address</span>123 Greyfriars Road, London</p>
Pulling out 'Name' and 'Address' is dead easy, whatever library I use, but how do I get the remainder of the text - i.e. 'Dave Davies'?
| [
"Another method -- using xpath:\n>>> from lxml import html\n>>> doc = html.parse( file )\n>>> doc.xpath( '//span[@class=\"Title\"][text()=\"Name\"]/../self::p/text()' )\n['Dave Davies']\n>>> doc.xpath( '//span[@class=\"Title\"][text()=\"Address\"]/../self::p/text()' )\n['123 Greyfriars Road, London']\n\n",
"Each ... | [
2,
1,
0
] | [] | [] | [
"lxml",
"python",
"screen_scraping"
] | stackoverflow_0003302248_lxml_python_screen_scraping.txt |
Q:
Matplotlib legend help
I am writing a script that plots several points. I am also trying to create a legend from these points. To sum up my script, I am plotting several 'types' of points (call them 'a', 'b', 'c'). These points have different colors and shapes: 'a'-'go', 'b'-'rh', 'c'-'k^'.
This is a shortened version of the relevant parts of my script:
lbl = #the type of point x,y is (a,b,c)
for x,y in coords:
    if lbl in LABELS:
        plot(x, y, color)
    else:
        LABELS.add(lbl)
        plot(x, y, color, label=lbl)
legend()
What I am doing here is just plotting a bunch of points and assigning a label to them. However, I found out if I added a label to each point, then the legend will contain an entry for each point. I only want one entry per type of point (a, b, c). So, I changed my script to look like the above. Is there a better way to do this? If I have a million different types of points, then the data structure LABELS (a set) will take up a lot of space.
A:
Group x and y according to the type of point.
Plot all the points of the same type with one call to plot:
import pylab
import numpy as np

lbl = np.array(('a','b','c','c','b','a','b','c','a','c'))
x = np.random.random(10)
y = np.random.random(10)
for t, color in zip(('a','b','c'), ('go','rh','k^')):
    pylab.plot(x[lbl==t], y[lbl==t], color, label=t)
pylab.legend()
pylab.show()
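The grouping relies on numpy boolean masks; stripped of the plotting calls, the selection looks like this:

```python
import numpy as np

lbl = np.array(('a', 'b', 'c', 'c', 'b', 'a'))
x = np.arange(6)

# lbl == 'a' is a boolean array; indexing with it keeps only matching points
print(x[lbl == 'a'])  # [0 5]
```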
| Matplotlib legend help | I am writing a script that plots several points. I am also trying to create a legend from these points. To sum up my script, I am plotting several 'types' of points (call them 'a', 'b', 'c'). These points have different colors and shapes: 'a'-'go', 'b'-'rh', 'c'-'k^'.
This is a shortened version of the relevant parts of my script:
lbl = #the type of point x,y is (a,b,c)
for x,y in coords:
    if lbl in LABELS:
        plot(x, y, color)
    else:
        LABELS.add(lbl)
        plot(x, y, color, label=lbl)
legend()
What I am doing here is just plotting a bunch of points and assigning a label to them. However, I found out if I added a label to each point, then the legend will contain an entry for each point. I only want one entry per type of point (a, b, c). So, I changed my script to look like the above. Is there a better way to do this? If I have a million different types of points, then the data structure LABELS (a set) will take up a lot of space.
| [
"Group x and y according to the type of point.\nPlot all the points of the same type with one call to plot:\nimport pylab\nimport numpy as np\n\nlbl=np.array(('a','b','c','c','b','a','b','c','a','c'))\nx=np.random.random(10)\ny=np.random.random(10)\nfor t,color in zip(('a','b','c'),('go','rh','k^')):\n pylab.plo... | [
2
] | [] | [] | [
"legend",
"matplotlib",
"python"
] | stackoverflow_0003302586_legend_matplotlib_python.txt |
Q:
Python - Using cPickle to load a previously saved pickle uses too much memory?
Python - Using cPickle to load a previously saved pickle uses too much memory?
My pickle file is about 340MB but takes up 29% of 6gb of memory when loaded. This seems a bit too much. The pickle file is a dictionary of dictionaries. Is this appropriate?
Code used:
import cPickle as pickle
file = pickle.load( file_handle )
Thanks
A:
I always had memory problems with big pickles and sub-dicts. So I ended up writing my objects via pprint into files, and later I import those files via a custom module loader to get the data back into the process scope. Works fine and doesn't waste memory.
A:
About 1.7GB seems a bit much, but not impossible. How much memory did the data take before it was pickled?
After unpickling, the data should take about the same amount of memory as it took before it was pickled; how big it is in its on-disk format is not really that significant.
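A rough way to see the gap (figures are illustrative and vary by Python version): serialize a nested dict and compare the blob against a partial accounting of the live objects' size.

```python
import pickle
import sys

# Shaped like the question's data: a dict of dicts
data = {i: {"a": i, "b": i * 2} for i in range(10000)}
blob = pickle.dumps(data, protocol=pickle.HIGHEST_PROTOCOL)

# Partial accounting of the live size: the outer dict plus each inner dict's
# header, still ignoring all the int objects held inside
live_overhead = sys.getsizeof(data) + sum(sys.getsizeof(v) for v in data.values())

print(len(blob) < live_overhead)  # True: the pickle is smaller than the live objects
```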
| Python - Using cPickle to load a previously saved pickle uses too much memory? | Python - Using cPickle to load a previously saved pickle uses too much memory?
My pickle file is about 340MB but takes up 29% of 6gb of memory when loaded. This seems a bit too much. The pickle file is a dictionary of dictionaries. Is this appropriate?
Code used:
import cPickle as pickle
file = pickle.load( file_handle )
Thanks
| [
"I always had memory problems with big pickles and sub-dicts. So I ended up writing my objects via pprint into files, and later I import those files via a custom module loader to get the data back into the process scope. Works fine and doesn't waste memory.\n",
"About 1.7GB seems a bit much, but not impossible. How... | [
1,
0
] | [] | [] | [
"memory_management",
"pickle",
"python"
] | stackoverflow_0003302632_memory_management_pickle_python.txt |
Q:
Adding server certificate validation to httplib.HTTPSConnection
I've found that httplib.HTTPSConnection doesn't perform an automatic server certificate check. As far as I've understood the problem, I need to add that functionality manually, e.g. by subclassing this class as described here.
As I'm using Python 2.4.5 and an upgrade is not possible under the given circumstances, I cannot use the workaround given in this blog post, because the ssl module was not introduced until Py2.6.
I've been trying to avoid the usage of the ssl module by using M2Crypto. A promising approach for doing so is contained in this blog post (in the "Clients" section). But I haven't yet managed to override httplib.HTTPSConnection.connect appropriately by using that approach.
Any ideas or hints?
A:
Try this site maybe: http://www.cs.technion.ac.il/~danken/xmlrpc-ssl.html
It requires SSL but doesn't require the Python SSL module. It only requires Open SSL library.
| Adding server certificate validation to httplib.HTTPSConnection | I've found that httplib.HTTPSConnection doesn't perform an automatic server certificate check. As far as I've understood the problem, I need to add that functionality manually, e.g. by subclassing this class as described here.
As I'm using Python 2.4.5 and an upgrade is not possible under the given circumstances, I cannot use the workaround given in this blog post, because the ssl module was not introduced until Py2.6.
I've been trying to avoid the usage of the ssl module by using M2Crypto. A promising approach for doing so is contained in this blog post (in the "Clients" section). But I haven't yet managed to override httplib.HTTPSConnection.connect appropriately by using that approach.
Any ideas or hints?
| [
"Try this site maybe: http://www.cs.technion.ac.il/~danken/xmlrpc-ssl.html\nIt requires SSL but doesn't require the Python SSL module. It only requires Open SSL library.\n"
] | [
2
] | [] | [] | [
"certificate",
"httplib",
"m2crypto",
"python",
"validation"
] | stackoverflow_0003280603_certificate_httplib_m2crypto_python_validation.txt |
Q:
how to loop through httprequest post variables in python
How can you loop through the HttpRequest post variables in Django?
I have
for k,v in request.POST:
    print k,v
which is not working properly.
Thanks!
A:
request.POST is a dictionary-like object containing all given HTTP POST parameters.
When you loop through request.POST, you only get the keys.
for key in request.POST:
    print(key)
    value = request.POST[key]
    print(value)
To retrieve the keys and values together, use the items method.
for key, value in request.POST.items():
    print(key, value)
Note that request.POST can contain multiple items for each key. If you are expecting multiple items for each key, you can use the lists method, which returns the values for each key as a list.
for key, values in request.POST.lists():
    print(key, values)
For more information see the Django docs for QueryDict.
| how to loop through httprequest post variables in python | How can you loop through the HttpRequest post variables in Django?
I have
for k,v in request.POST:
    print k,v
which is not working properly.
Thanks!
| [
"request.POST is a dictionary-like object containing all given HTTP POST parameters.\nWhen you loop through request.POST, you only get the keys.\nfor key in request.POST:\n print(key)\n value = request.POST[key]\n print(value)\n\nTo retrieve the keys and values together, use the items method.\nfor key, val... | [
107
] | [] | [] | [
"django",
"httprequest",
"post",
"python"
] | stackoverflow_0003303336_django_httprequest_post_python.txt |
Q:
Pythonic way to turn a list of strings into a dictionary with the odd-indexed strings as keys and even-indexed ones as values?
I have a list of strings parsed from somewhere, in the following format:
[key1, value1, key2, value2, key3, value3, ...]
I'd like to create a dictionary based on this list, like so:
{key1:value1, key2:value2, key3:value3, ...}
An ordinary for loop with index offsets would probably do the trick, but I wonder if there's a Pythonic way of doing this. List comprehensions seem interesting, but I can't seem to find out how to apply them to this particular problem.
Any ideas?
A:
You can try:
dict(zip(l[::2], l[1::2]))
Explanation: we split the list into two lists, one of the even-indexed and one of the odd-indexed elements, by taking them in steps of two starting from either the first or the second element (that's the l[::2] and l[1::2]). Then we use the zip builtin to combine the two lists into one list of pairs. Finally, we call dict to create a dictionary from these key-value pairs.
This is ~4n in time and ~4n in space, including the final dictionary. It is probably faster than a loop, though, since the zip, dict, and slicing operators are written in C.
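Applied to a list shaped like the question's:

```python
l = ["key1", "value1", "key2", "value2", "key3", "value3"]
d = dict(zip(l[::2], l[1::2]))
print(d)  # {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'}
```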
A:
A nice opportunity to display my favorite python idiom:
>>> S = [1,2,3,4,5,6]
>>> dict(zip(*[iter(S)]*2))
{1: 2, 3: 4, 5: 6}
That tricky line passes two arguments to zip() where each argument is the same iterator over S. zip() creates 2-item tuples, pulling two items from that single iterator each time. dict() then converts those tuples to a dictionary.
To extrapolate:
S = [1,2,3,4,5,6]
I = iter(S)
dict(zip(I,I))
A:
In [71]: alist=['key1', 'value1', 'key2', 'value2', 'key3', 'value3']
In [72]: dict(alist[i:i+2] for i in range(0,len(alist),2))
Out[72]: {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'}
A:
In addition to pavpanchekha's short and perfectly fine solution, you could use a generator expression (a list comprehension is just a generator expression fed to the list constructor - it's actually more powerful and universal) for extra goodness:
dict((l[i], l[i+1]) for i in range(0, len(l)-1, 2))
Apart from being really cool and functional, it's also a better algorithm: unless the implementation of dict is especially stupid (unlikely, considering it's a built-in), this will consume the same amount of memory for every size of l (i.e. it runs in constant aka O(1) space), since it processes one pair at a time instead of creating a whole new list of tuples first.
A:
result = dict(grouper(2, L))
grouper is a function that forms pairs in a list; it's given in the itertools recipes:
from itertools import izip_longest

def grouper(n, iterable, fillvalue=None):
    "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    return izip_longest(fillvalue=fillvalue, *args)
dict takes a list of (key,value) pairs and makes a dict from them.
You could also write result = dict(zip(*[iter(L)]*2)) and confuse most readers :-)
| Pythonic way to turn a list of strings into a dictionary with the odd-indexed strings as keys and even-indexed ones as values? | I have a list of strings parsed from somewhere, in the following format:
[key1, value1, key2, value2, key3, value3, ...]
I'd like to create a dictionary based on this list, like so:
{key1:value1, key2:value2, key3:value3, ...}
An ordinary for loop with index offsets would probably do the trick, but I wonder if there's a Pythonic way of doing this. List comprehensions seem interesting, but I can't seem to find out how to apply them to this particular problem.
Any ideas?
| [
"You can try:\ndict(zip(l[::2], l[1::2]))\n\nExplanation: we split the list into two lists, one of the even and one of the odd elements, by taking them by steps of two starting from either the first or the second element (that's the l[::2] and l[1::2]). Then we use the zip builtin to combine the two lists into one list of ... | [
16,
5,
3,
2,
0
] | [] | [] | [
"list_comprehension",
"python"
] | stackoverflow_0003303213_list_comprehension_python.txt |
Q:
Best way to schedule a task to run in next iteration of a Twisted reactor loop
I want to schedule a task to run in the next iteration of the reactor loop. What's the best way to do it?
reactor.callLater(0, ...)?
A:
You are correct:
reactor.callLater(0, ...)
A:
From the docs, the correct way seems to be as you said, reactor.callLater(0, ...).
| Best way to schedule a task to run in next iteration of a Twisted reactor loop | I want to schedule a task to run in the next iteration of the reactor loop. What's the best way to do it?
reactor.callLater(0, ...)?
| [
"You are correct:\nreactor.callLater(0, ...)\n\n",
"From the docs, the correct way seems to be as you said, reactor.callLater(0, ...).\n"
] | [
1,
0
] | [] | [] | [
"python",
"twisted"
] | stackoverflow_0002321453_python_twisted.txt |
Q:
How do I set a default page in Pylons?
I've created a new Pylons application and added a controller ("main.py") with a template ("index.mako"). Now the URL http://myserver/main/index works. How do I make this the default page, ie. the one returned when I browse to http://myserver/ ?
I've already added a default route in routing.py:
def make_map():
    """Create, configure and return the routes Mapper"""
    map = Mapper(directory=config['pylons.paths']['controllers'],
                 always_scan=config['debug'])
    map.minimization = False

    # The ErrorController route (handles 404/500 error pages); it should
    # likely stay at the top, ensuring it can always be resolved
    map.connect('/error/{action}', controller='error')
    map.connect('/error/{action}/{id}', controller='error')

    # CUSTOM ROUTES HERE
    map.connect('', controller='main', action='index')

    map.connect('/{controller}/{action}')
    map.connect('/{controller}/{action}/{id}')

    return map
I've also deleted the contents of the public directory (except for favicon.ico), following the answer to "Default route doesn't work". Now I just get error 404.
What else do I need to do to get such a basic thing to work?
A:
Try this: map.connect('/', controller='main', action='index')
A:
You need to remove the public/index.html file to make the / routing rule work. Otherwise it is served directly.
| How do I set a default page in Pylons? | I've created a new Pylons application and added a controller ("main.py") with a template ("index.mako"). Now the URL http://myserver/main/index works. How do I make this the default page, ie. the one returned when I browse to http://myserver/ ?
I've already added a default route in routing.py:
def make_map():
    """Create, configure and return the routes Mapper"""
    map = Mapper(directory=config['pylons.paths']['controllers'],
                 always_scan=config['debug'])
    map.minimization = False

    # The ErrorController route (handles 404/500 error pages); it should
    # likely stay at the top, ensuring it can always be resolved
    map.connect('/error/{action}', controller='error')
    map.connect('/error/{action}/{id}', controller='error')

    # CUSTOM ROUTES HERE
    map.connect('', controller='main', action='index')

    map.connect('/{controller}/{action}')
    map.connect('/{controller}/{action}/{id}')

    return map
I've also deleted the contents of the public directory (except for favicon.ico), following the answer to Default route doesn't work Now I just get error 404.
What else do I need to do to get such a basic thing to work?
| [
"Try this: map.connect('/', controller='main', action='index')\n",
"You need to remove public/index.html file to make / routing rule work. Otherwise it is served directly.\n"
] | [
3,
2
] | [] | [] | [
"pylons",
"python",
"routes"
] | stackoverflow_0002406630_pylons_python_routes.txt |
Q:
Help with Python code
I need some help understanding what's happening here. This code is from a models/log.py module in web2py, and is meant to allow for global logging.
def _init_log():
logger=logging.getLogger(request.application)
...
return logger
logging=cache.ram('mylog',lambda:_init_log(),time_expire=99999999)
Can someone explain how this might work, and what the last line is doing?
Thanks--
A:
This is not a standard web2py file. Somebody wrote it, but I can see what it does:
In web2py a single installation can run multiple apps. Some users want different apps running under the same web2py installation to have separate logs, therefore they need different logger objects. In web2py there are no global settings and all user code is executed on every request, so in order to avoid re-creating the logger on every request, the logger object is created only once and stored in the cache with a large expiration time (99999999). When an HTTP request arrives, if it needs to log, it finds the logger in the cache. Look at the docs for cache.ram.
I have used this trick but never for logging.
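The caching trick described above can be sketched in plain Python. This is a dict-backed stand-in for web2py's cache.ram, not its real API: the `cache_ram` helper and the omitted `time_expire` handling are simplifications for illustration only.

```python
import logging

# Dict-backed stand-in for web2py's cache.ram (the real one also
# honors time_expire; that detail is omitted here).
_cache = {}

def cache_ram(key, factory):
    """Run factory() at most once per key; afterwards return the cached value."""
    if key not in _cache:
        _cache[key] = factory()
    return _cache[key]

calls = []

def _init_log():
    calls.append(1)                      # count how often setup really runs
    return logging.getLogger("myapp")

# Model two successive HTTP requests, each re-executing the model file:
log_a = cache_ram('mylog', _init_log)
log_b = cache_ram('mylog', _init_log)

print(log_a is log_b, len(calls))        # same object, factory ran once
```

The second lookup returns the cached logger without re-running `_init_log`, which is exactly why the expensive setup survives web2py's per-request re-execution of model files.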
A:
What it does, I think, is that the logging function is "memoized". That means that if it is called with the same arguments multiple times in a row, it will return the old results from its cache.
It might be based on the plone.memoize module, but I couldn't check because the link doesn't work for me.
Answer scores: 2, 1 | Tags: logging, python, web2py | Source: stackoverflow_0003303026_logging_python_web2py.txt
Q:
Writing a CherryPy Decorator for Authorization
I have a cherrypy application and on some of the views I want to start only allowing certain users to view them, and sending anyone else to an authorization required page.
Is there a way I can do this with a custom decorator? I think that would be the most elegant option.
Here's a basic example of what I want to do:
class MyApp:
@authorization_required
def view_page1(self,appID):
... do some stuff ...
return html
def authorization_required(func):
#what do I put here?
Also can the authorization_required function when called as a decorator accept parameters like allow_group1, allow_group2? Or do I need a separate decorator for each group?
A:
You really don't want to be writing custom decorators for CherryPy. Instead, you want to write a new Tool:
def myauth(allowed_groups=None, debug=False):
# Do your auth here...
authlib.auth(...)
cherrypy.tools.myauth = cherrypy.Tool("on_start_resource", myauth)
See http://docs.cherrypy.org/en/latest/extend.html#tools for more discussion. This has several benefits over writing a custom decorator:
You get the decorator for free from the Tool: @cherrypy.tools.myauth(allowed_groups=['me']), and it already knows how to not clobber cherrypy.exposed on the same function.
You can apply Tools either per-handler (with the decorator), per-controller-tree (via _cp_config) or per-URI-tree (in config files or dicts). You can even mix them and provide a base feature via decorators and then override their behavior in config files.
If a config file turns your feature off, you don't pay the performance penalty of calling the decorator function just to see if it's off.
You'll remember to add a 'debug' arg like all the builtin Tools have. ;)
Your feature can run earlier (or later, if that's what you need) than a custom decorator can, by selecting a different "point".
Your feature can run at multiple hook points, if needed.
A:
Ok, in that case your decorator would look something like this:
# without any parameters
def authentication_required(f):
@functools.wraps(f)
def _authentication_required(*args, **kwargs):
# Do you login stuff here
return f(*args, **kwargs)
return _authentication_required
# With parameters
def authentication_required(*allowed_groups):
def _authentication_required(f):
@functools.wraps(f)
def __authentication_required(*args, **kwargs):
# Do you login stuff here
return f(*args, **kwargs)
return __authentication_required
return _authentication_required
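For completeness, here is a hedged sketch of how the parameterized form might be wired up and used. The group check (`user_in_groups`) is a stub standing in for a real session or directory lookup, which the original answer leaves unspecified.

```python
import functools

def user_in_groups(allowed_groups):
    # Stub: pretend the current user belongs only to "admins".
    return "admins" in allowed_groups

def authentication_required(*allowed_groups):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            if not user_in_groups(allowed_groups):
                return "403: authorization required"
            return f(*args, **kwargs)
        return wrapper
    return decorator

class MyApp:
    @authentication_required("admins")
    def view_page1(self, appID):
        return "page 1 for %s" % appID

    @authentication_required("auditors")
    def view_page2(self, appID):
        return "page 2 for %s" % appID

app = MyApp()
print(app.view_page1("demo"))   # page 1 for demo
print(app.view_page2("demo"))   # 403: authorization required
```

In a real CherryPy handler you would raise `cherrypy.HTTPError(403)` or redirect to a login page instead of returning a string.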
Answer scores: 15, 4 | Tags: authorization, cherrypy, decorator, permissions, python | Source: stackoverflow_0003302844_authorization_cherrypy_decorator_permissions_python.txt
Q:
Terminating android ASE shell from within the script
I'm using the Android Scripting Environment (ASE) with Python, and I'd like to terminate the shell executing the script when the script terminates.
Is there a good way to do this?
I have tried executing on the last line:
os.system( 'kill %d' % os.getppid() )
but to no avail.
A:
You should use android.exit().
A:
My guess is that the above answer ought to be android.Android().exit()
Answer scores: 1, 0 | Tags: android, ase, python | Source: stackoverflow_0003125325_android_ase_python.txt
Q:
Git "failed to push some refs to..." with custom Git bridge
I have been working on setting up a git server by using Paramiko to act as an SSH bridge for Git. I am able to clone my repository without issue, and even push changes up, however I get an annoying error message.
Pushing to git@localhost:/pckprojects/heyworld
Counting objects: 5, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 262 bytes, done.
Total 3 (delta 1), reused 0 (delta 0)
To git@localhost:/pckprojects/heyworld
348dfdc..1c0468e master -> master
updating local tracking ref 'refs/remotes/origin/master'
error: failed to push some refs to 'git@localhost:/pckprojects/heyworld'
My git's config looks like this:
[core]
repositoryformatversion = 0
filemode = true
bare = true
sharedRepository = all
[receive]
denyNonFastForwards = false
denyCurrentBranch = false
denyDeletes = false
The odd thing is "master" actually does get updated, and I have no other branches in the repository. In addition, if I clone / push the repository from disk rather than via SSH, I don't see any errors.
Anyone have any thoughts on why I'm seeing this error?
Thanks...
EDIT:
Since it seems likely my issues are related to my SSH server, the main loop is below:
proc = subprocess.Popen(command, stdin=subprocess.PIPE,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
while True:
try:
r_ready, w_ready, x_ready = select.select(
[channel, proc.stdout, proc.stderr], [proc.stdin], [])
except Exception, e:
print e
print 'channel: ' + str(channel)
print 'proc: ' + str(proc)
if channel in r_ready and channel.recv_ready():
data = channel.recv(128)
if len(data) > 0:
print 'IN>channel ' + repr(data)
proc.stdin.write(data)
else:
pass
if proc.stdout in r_ready:
data = proc.stdout.read(1)
channel.sendall(data)
if proc.stderr in r_ready:
data = proc.stdout.read(1)
if len(data) > 0:
channel.sendall(data)
else:
print "Encountered empty stderr, breaking"
break
print 'will close'
channel.shutdown(2)
channel.close()
More Information
I thought it might be helpful to see the actual communication. This is at it appears on the server side, since the git client doesn't allow you to see nearly this much.
git-receive-pack /home/www/data/project/heyworld/
OUT >>
00721ee2436e45c80236878132dc87d9e9fee6a81de5 refs/heads/master\x00 report-status delete-refs side-band-64k ofs-delta\n0000
IN >>
00841ee2436e45c80236878132dc87d9e9fee6a81de5 6054b3358787bafd1d96c0fdfbf016d620ccdf09 refs/heads/master\x00 report-status side-band-64k0000
IN >>
PACK\x00\x00\x00\x02\x00\x00\x00\x03\x96\x0ex\x9c\xa5\x8cM\x0e\xc2 \x14\x06\xf7\x9c\x82\x0b\xd8<(?\x8f\xc4\x18\xf7n\xbc\x02\xc2\x87%\x16\xdb4\xb8\xf0\xf66\xbd\x82\xcb\x99d\xa6o\x80\x846\xd9!)\x1b\x0b\xb1\r1$dO\x05\xa6\xb0\xa3@\x06%D<\xb2\x16k\xdc\xf0\xeeRa/F\x07c\x13\x93\x1e\x1d{V\xa3\xce\x89}\x0e\x08\x05p\x91U\x86\x15\xf1\xd3\xa7e\x93\xf7\xa9\xceu\x95\xb7\xda\x1a\xbe\xf2\xbc\x1e8\xbc\x0e\xbc>[\xac\xf3\x90\x96v\x91J\xfb`X\xb3V\xf2D\x96H\xec\xb6\xd5\xde\xf1\xc7B4,\xe2\x07\xff\x8aF\xba\xaf\x01x\x9c340031Q\xc8H\xaddP\xd8P\xfcmzGg\x8aY\xc4\x8e\xad\xb1<\xca\x1b\xa3\x93\xee\xbd\x05\x00\xa8\xb4\x0c\x9by\xd3\xfe\xa0C\x86fU\x18\xbe\xa5\x86\xac5*\xf7\x11\x89\x8b9$x\x9c\x0b\x8b\x9a\x10\xc6\x92\x9b\x9a\xcf\x05\x00\x0f\xb2\x02\xe6=\x12?\xde\x1f\x9a=v\x0c3c\xf66\xc6\xcc1y\xe4\xb8\xa0
OUT >>
0030\x01000eunpack ok\n009krf/ed/atr0000
CLOSE CONNECTION
A:
As you probably know, this could be a bunch of issues!
My initial guess is that some permissions are not correct on the server and thus could not update some non-critical information
It could also be caused by some other issues as well...
Couple quick questions/suggestions:
Can you run the command manually successfully?
Try adding the --verbose flag to the push command (ie. git push --verbose origin/master)
Adding the verbose flag might put you on the fast track to figuring out the issue.
A:
A very helpful user on the git mailing list took what he thought was a stab in the dark and got it right.
I wasn't returning the exit code, which Git expects on pushes. That solved the problem.
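In miniature, the fix looks like this: capture the child's exit status and forward it before closing the SSH channel. `channel.send_exit_status` is Paramiko's API for reporting the status; it is commented out below because this sketch has no live SSH session, and the `sys.exit(3)` child is a stand-in for `git-receive-pack`.

```python
import subprocess
import sys

# Run the wrapped command (a stand-in for git-receive-pack) and keep
# its exit status instead of discarding it.
proc = subprocess.Popen([sys.executable, "-c", "import sys; sys.exit(3)"])
status = proc.wait()
print("exit status to report:", status)

# In the real Paramiko bridge, forward it before closing:
# channel.send_exit_status(status)
# channel.close()
```

Git's client interprets a channel that closes without an exit status as a failed push, which is why the refs updated yet the client still printed "failed to push some refs".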
Answer scores: 0, 0 | Tags: git, paramiko, python | Source: stackoverflow_0003262161_git_paramiko_python.txt
Q:
django application configuration
I'm dying to get started with Django but I'm really struggling with the initial setup. I have Python/MySql/Apache2.2/mod_python installed. Now I'm trying to create a proper directory structure and then update Django and Apache settings.py/httpd docs respectively. Specifically the location tag in the latter. Django and Python are based on simplicity but this is one huge oversight from the Django folks to not provide more guidance in this area. I had a basic page running in the Django dev server but could not get the stylesheet to load. So I decided to install mod_python and try to use Apache in my dev environment and I'm even more frustrated. I can't seem to find a good example anywhere on the web or in books regarding how to create a realistic directory structure and then, based on that structure, how to configure necessary settings. Everything in tutorials is as usual not realistic or very helpful. Too simple. If someone here could share how they have their Django directory and settings configured that would be FANTASTIC!
A:
Don't use Apache for development, that'll make you tear your hair out restarting Apache every fifteen seconds (EDIT: or you could just use PythonDebug On).
This technique is how to get your media (stylesheets, etc) loading via the development server. If you used that exact snippet, you'd need to set MEDIA_URL to '/site_media/' and MEDIA_ROOT to '/path/to/media' (obviously this latter is likely to need changing to wherever your media files actually are).
A:
Thanks guys. After doing some more searching I found exactly what i was looking for here. It's an example project directory structure and settings.py. If you view the comments there you can see a lot of others were confused about this as well and found the example helpful. It would be nice if Django created a recommended dir structure so you know where to store css, js, django app files, template files, etc.
A:
We just built and released (under Apache2) Djenesis at OSCON2010:
Djenesis on Google Code
The two goals of Djenesis are to
ease bootstrapping new projects based on your
provide a default project template based on dozens of Django-based, web-specific applications
More details about the features/benefits/tutorial as well as the code are on Google Code.
A:
There's a lot to your question, so I'll try to boil it down to this:
The tutorial is aimed at getting you to use the framework and to be up and running with as little configuration as possible. No server to configure, etc. If you are trying to load CSS with the dev server, you will need to pull the CSS from somewhere "beyond" the dev server. For example, on my Mac, I launch the dev server, but load the CSS from the built-in apache server.
There is more info available about using Apache and mod-python here: mod_python and apache setup info
I'm not sure what you mean by "creating the directory structure", but most of the core application files are typically created by running the django-admin.py script, by running startproject and startapp. This is demonstrated in the tutorial.
You can also ask questions on the IRC #django channel! If you are looking for a book on the subject, you can also check out the Django Book.
Answer scores: 1, 1, 1, 0 | Tags: django, python | Source: stackoverflow_0000464010_django_python.txt
Q:
How to stop a python subprocess which is running unit tests right away? Terminate and kill not working
I have a Tkinter GUI running two threads, the main thread for the GUI and a worker thread. The worker thread creates a subprocess using the following code:
myProcess = subprocess.Popen(['python', '-u', 'runTests.py'],
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
The file runTests.py does some setup and then runs a unit test file using the following command:
execfile('myUnitTests.py')
The file myUnitTests.py has several unit tests some that take over five minutes to run.
From the GUI I click a button to stop running the tests. This in turn makes the worker thread send a signal to stop the subprocess:
myProcess.terminate()
The terminate command does not stop the process right away; it waits until the current unit test finishes running and only then terminates the process.
I have tried to use os.kill but I get the same results as with terminate().
Any idea of how can I make my program more responsive so that it kill the subprocess right away?
A:
The Python documentation [ http://docs.python.org/library/signal.html ] says:
Although Python signal handlers are called asynchronously as far as the Python user is concerned, they can only occur between the “atomic” instructions of the Python interpreter. This means that signals arriving during long calculations implemented purely in C (such as regular expression matches on large bodies of text) may be delayed for an arbitrary amount of time.
So if your five-minute unit test is doing "a long calculation implemented purely in C", and your unit test harness installs a handler for SIGTERM, that's your problem. If so, try myProcess.kill instead of myProcess.terminate (or, if you haven't got 2.6, myProcess.send_signal(9)). SIGKILL is uncatchable from user space and should have immediate effect.
Warning: any clean-up actions that are supposed to run on the way out of your unit test framework will not be executed.
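On a POSIX system, the difference can be demonstrated with a child that installs a SIGTERM-ignoring handler, standing in for a test harness that swallows or defers the signal (on Windows, terminate and kill are the same call, so this sketch is POSIX-only):

```python
import signal
import subprocess
import sys
import time

# Child that ignores SIGTERM, like a process busy in a long C-level call.
child = subprocess.Popen([
    sys.executable, "-c",
    "import signal, sys, time;"
    "signal.signal(signal.SIGTERM, signal.SIG_IGN);"
    "sys.stdout.write('ready\\n'); sys.stdout.flush();"
    "time.sleep(60)",
], stdout=subprocess.PIPE)

child.stdout.readline()              # wait until the handler is installed
child.terminate()                    # SIGTERM is ignored; child keeps running
time.sleep(0.5)
alive_after_terminate = child.poll() is None
print("alive after terminate:", alive_after_terminate)   # True

child.kill()                         # SIGKILL cannot be caught or ignored
child.wait()
print("returncode after kill:", child.returncode)        # -9 (killed by SIGKILL)
```

The negative return code is Python's convention for "killed by signal N", confirming the child never got a chance to run cleanup.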
Answer scores: 2 | Tags: execfile, python, subprocess, terminate | Source: stackoverflow_0003302910_execfile_python_subprocess_terminate.txt
Q:
How do I run a Python script that requires pyparsing?
I got a Python file that uses something called pyparsing, but when I run it, it shows an error that pyparsing is required. Can anyone please tell me what to do?
Note that I am a novice at this thing called Python; I just need to run that script. :) Thanks.
A:
If pyparsing is required, and you haven't got it, you need to install it. See https://pypi.org/project/pyparsing/ and/or https://github.com/pyparsing/pyparsing for instructions.
Answer scores: 3 | Tags: pyparsing, python | Source: stackoverflow_0003304855_pyparsing_python.txt
Q:
XML parsing gives me empty values
Having trouble getting this to work. What's strange is that I have 10 bookmarks in Delicious and it prints out 10 blank strings so it must be close to working.
import urllib
from xml.dom.minidom import parse
FEED = 'http://feeds.delicious.com/v2/rss/migrantgeek'
dom = parse(urllib.urlopen(FEED))
for item in dom.getElementsByTagName('item'):
print item.getAttribute('title')
A:
Title isn't an attribute, it's another tag within item.
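A sketch of the fix, using an inline feed so it runs without network access (the real code would keep `parse(urllib.urlopen(FEED))` and the Delicious URL):

```python
from xml.dom.minidom import parseString

# Inline stand-in for the RSS feed.
RSS = ("<rss><channel>"
       "<item><title>first bookmark</title></item>"
       "<item><title>second bookmark</title></item>"
       "</channel></rss>")

dom = parseString(RSS)
titles = []
for item in dom.getElementsByTagName('item'):
    # <title> is a child element of <item>, not an attribute,
    # so getAttribute('title') returns an empty string.
    title_node = item.getElementsByTagName('title')[0]
    titles.append(title_node.firstChild.data)

print(titles)   # ['first bookmark', 'second bookmark']
```

That is why the original loop printed one blank line per bookmark: `getAttribute` on a missing attribute returns `''` rather than raising.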
Answer scores: 3 | Tags: minidom, python, xml | Source: stackoverflow_0003305323_minidom_python_xml.txt
Q:
How to link as .so instead of .dylib on OSX 10.6 using qmake
I am trying to use SWIG to wrap some C++ code for the use with Python. As described here it seems to be necessary to link my C++ code against an .so file, not a .dylib file. The thread suggests to use libtool in combination with the -module flag to link, but I am using qmake and need more precise instructions on how I do that.
My .pro file (for qmake) looks like this:
TEMPLATE = lib
TARGET =
CONFIG += qt debug console plugin no_plugin_name_prefix
QT += opengl
DEPENDPATH += . libgm src
INCLUDEPATH += . libgm src /usr/include/python2.6/
LIBS += -L/usr/lib -lpython
MOC_DIR = .moc
OBJECTS_DIR = .tmp
DESTDIR = bin
TARGET = _mylibname
DEFINES += COMPILE_DL=1
HEADERS += <my_headers> \
src/swig_wrap.h
SOURCES += <my_sources>
src/swig_wrap.cxx
I am invoking first swig, then qmake, then make using Eclipse builders:
cd src/
swig -c++ -python swig.i
cd ..
qmake -spec macx-g++
make all
Compile and Link works fine, but leaves me with bin/_mylibname.dylib, and when I run my python application, the statement import _mylibname fails.
Does anyone know how I can get qmake to build a *.so instead of a *.dylib, possibly using libtool (or any other way for all I care)? My platform:
Mac OS X Snow Leopard (10.6)
Python 2.6
SWIG 1.3.31
qmake 2.01a with Qt 4.6.2
g++ i686-apple-darwin10-g++-4.2.1
I am not very knowledgable with linking issues, so please be gentle :-)
Ole
A:
I think you might have two issues.
To get the output file to be a .so file, I set the compile option -o mylib.so and that forced gcc to name the file correctly.
The other issue I think you might have is that on the Mac, linking against the python lib is not the same as on a Linux machine. What I found that was on the mac you don't use -lpython to link to the python lib, but rather use -framework python and then the linker will find the correct lib.
Answer scores: 0 | Tags: dylib, python, qmake, shared_libraries, swig | Source: stackoverflow_0002601965_dylib_python_qmake_shared_libraries_swig.txt
Q:
Cannot make cProfile work in IPython
I'm missing something very basic.
class C:
def __init__(self):
self.N = 100
pass
def f(self, param):
print 'C.f -- param'
for k in xrange(param):
for i in xrange(self.N):
for j in xrange(self.N):
a = float(i)/(1+float(j)) + float(i/self.N) ** float(j/self.N)
import cProfile
c = C()
cProfile.run('c.f(3)')
When I run the above code in IPython, I get:
NameError: name 'c' is not defined
What am I missing?
UPDATE the exact paste of my session is here: http://pastebin.com/f3e1b9946
UPDATE I didn't mention that the problem occurs in IPython, which (as it turns out) is the source of the problem
A:
While inside IPython, you can use the %prun magic function:
In [9]: %prun c.f(3)
C.f -- param
3 function calls in 0.066 CPU seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.066 0.066 0.066 0.066 <string>:6(f)
1 0.000 0.000 0.066 0.066 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
A:
Not the original poster's problem, but you can also get this same error if you are invoking cProfile.run() in something other than the __main__ namespace (from within a function or an import). In that case you need to use the following instead of the run() method:
cProfile.runctx("your code", globals(), locals())
Kudos to this post for helping me figure this out.
A:
Although IPython is very handy, there are a lot of rare cases where it breaks working code or masks errors. So it's useful to try the code in the standard interpreter when you get such mystical errors.
Answer scores: 26, 16, 3 | Tags: ipython, profiler, profiling, python | Source: stackoverflow_0001819448_ipython_profiler_profiling_python.txt
Q:
How do I get Python's Mechanize to POST an ajax request?
The site I'm trying to spider is using the javascript:
request.open("POST", url, true);
To pull in extra information over ajax that I need to spider. I've tried various permutations of:
r = mechanize.urlopen("https://site.tld/dir/" + url, urllib.urlencode({'none' : 'none'}))
to get Mechanize to get the page but it always results in me getting the login HTML again, indicating that something is wrong. Firefox doesn't seem to add any HTTP data to the POST according to Firebug, and I'm adding an empty field to try and force the urlopen to use "POST" instead of "GET" hoping the site ignores the field. I thought that Mechanize's urlopen DOES include cookies. But being HTTPS it's hard to wireshark the transaction to debug.
Is there a better way?
Also there doesn't seem to be decent API documentation for Mechanize, just examples. This is annoying.
A:
This was what I came up with:
req = mechanize.Request("https://www.site.com/path/" + url, " ")
req.add_header("User-Agent", "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.7) Gecko/20100713 Firefox/3.6.7")
req.add_header("Referer", "https://www.site.com/path")
cj.add_cookie_header(req)
res = mechanize.urlopen(req)
What's interesting is the " " in the call to mechanize.Request forces it into "POST" mode. Obviously the site didn't choke on a single space :)
It needed the cookies as well. I debugged the headers using:
hh = mechanize.HTTPHandler()
hsh = mechanize.HTTPSHandler()
hh.set_http_debuglevel(1)
hsh.set_http_debuglevel(1)
opener = mechanize.build_opener(hh, hsh)
logger = logging.getLogger()
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.setLevel(logging.NOTSET)
mechanize.install_opener(opener)
Against what Firebug was showing.
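The single-space body is what flips the request from GET to POST; mechanize mirrors urllib2's Request API here. The same behaviour can be seen with the Python 3 standard library (a sketch only — no request is actually sent, and the URL is a placeholder):

```python
from urllib.request import Request

get_req = Request("https://www.example.com/path")
post_req = Request("https://www.example.com/path", data=b" ")

# A Request with no body defaults to GET; supplying any body at all --
# even a single space -- switches the method to POST.
print(get_req.get_method())   # GET
print(post_req.get_method())  # POST
```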
| How do I get Python's Mechanize to POST an ajax request? | The site I'm trying to spider is using the javascript:
request.open("POST", url, true);
To pull in extra information over ajax that I need to spider. I've tried various permutations of:
r = mechanize.urlopen("https://site.tld/dir/" + url, urllib.urlencode({'none' : 'none'}))
to get Mechanize to get the page but it always results in me getting the login HTML again, indicating that something is wrong. Firefox doesn't seem to add any HTTP data to the POST according to Firebug, and I'm adding an empty field to try and force the urlopen to use "POST" instead of "GET" hoping the site ignores the field. I thought that Mechanize's urlopen DOES include cookies. But being HTTPS it's hard to wireshark the transaction to debug.
Is there a better way?
Also there doesn't seem to be decent API documentation for Mechanize, just examples. This is annoying.
| [
"This was what I came up with:\nreq = mechanize.Request(\"https://www.site.com/path/\" + url, \" \")\nreq.add_header(\"User-Agent\", \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.7) Gecko/20100713 Firefox/3.6.7\")\nreq.add_header(\"Referer\", \"https://www.site.com/path\")\ncj.add_cookie_header(req)\nr... | [
8
] | [] | [] | [
"mechanize",
"python"
] | stackoverflow_0003225569_mechanize_python.txt |
Q:
Static memory in python: do loops create new instances of variables in memory?
I've been running Python scripts that make several calls to some functions, say F1(x) and F2(x), that look a bit like this:
x = LoadData()
for j in range(N):
y = F1(x[j])
z[j] = F2(y)
del y
SaveData(z)
Performance is a lot faster if I keep the "del y" line. But I don't understand why this is true. If I don't use "del y", then I quickly run out of RAM and have to resort to virtual memory, and everything slows to a crawl. But if I use "del y", then I am repeatedly flushing and re-allocating the memory for y. What I would like to do is have y sit as static memory, and reuse the memory on every F1(x) call. But from what I can tell, that isn't what's happening.
Also, not sure if it's relevant, but my data consists of numpy arrays.
A:
Without the del y you might need twice as much memory. This is because for each pass through the loop, y is bound to the previous value of F1 while the next one is calculated.
once F1 returns y is rebound to that new value and the old F1 result can be released.
This would mean that the object returned by F1 occupies quite a lot of memory
Unrolling the loop for the first couple of iterations would look like this
y = F1(x[0]) # F1(x[0]) is calculated, then y is bound to it
z[j] = F2(y)
y = F1(x[1]) # y is still bound to F1(x[0]) while F1(x[1]) is computed
# The memory for F1(X[0]) is finally freed when y is rebound
z[j] = F2(y)
using del y is a good solution if this is what is happening in your case.
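The overlap described above can be made visible with a small sketch. It is CPython-specific (it relies on reference counting freeing an object the moment its count hits zero), and Big is a hypothetical stand-in for the large array F1 returns:

```python
class Big(object):
    """Stand-in for a large result object; counts how many exist at once."""
    alive = 0
    peak = 0
    def __init__(self):
        Big.alive += 1
        Big.peak = max(Big.peak, Big.alive)
    def __del__(self):
        Big.alive -= 1

def peak_instances(use_del):
    Big.alive = Big.peak = 0
    for _ in range(3):
        y = Big()   # the previous result is only released after this binds
        if use_del:
            del y
    if not use_del:
        del y
    return Big.peak

print(peak_instances(use_del=False))  # 2 -- old and new briefly coexist
print(peak_instances(use_del=True))   # 1 -- at most one result alive
```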
A:
what you actually want is something that's weird to do in python -- you want to allocate a region of memory for y and pass the pointer to that region to F1() so it can use that region to build up the next value of y. This avoids having F1() do its own allocation for the new value of y, the reference to which is then written into your own variable y (which is actually not the value of whatever F1() calculated but a reference to it)
There's already an SO question about passing by reference in python: How do I pass a variable by reference?
A:
For very large values of N use xrange instead of range for memory save. Also you can nest functions but I don't know if this will help you. : \
x = LoadData()
for j in xrange(N):
z[j] = F2(F1(x[j]))
SaveData(z)
Maybe F1 and F2 are making unnecessary copies of objects, the best way would be in-place, something like:
x = LoadData()
for item in x:
item.F1()
item.F2()
SaveData(x)
Sorry if my answer is not helpful
| Static memory in python: do loops create new instances of variables in memory? | I've been running Python scripts that make several calls to some functions, say F1(x) and F2(x), that look a bit like this:
x = LoadData()
for j in range(N):
y = F1(x[j])
z[j] = F2(y)
del y
SaveData(z)
Performance is a lot faster if I keep the "del y" line. But I don't understand why this is true. If I don't use "del y", then I quickly run out of RAM and have to resort to virtual memory, and everything slows to a crawl. But if I use "del y", then I am repeatedly flushing and re-allocating the memory for y. What I would like to do is have y sit as static memory, and reuse the memory on every F1(x) call. But from what I can tell, that isn't what's happening.
Also, not sure if it's relevant, but my data consists of numpy arrays.
| [
"Without the del y you might need twice as much memory. This is because for each pass through the loop, y is bound to the previous value of F1 while the next one is calculated.\nonce F1 returns y is rebound to that new value and the old F1 result can be released.\nThis would mean that the object returned by F1 occu... | [
17,
2,
0
] | [] | [] | [
"memory",
"python"
] | stackoverflow_0003305870_memory_python.txt |
Q:
Google App engine(python) Updating a db.StringListProperty contention/concurrency issues
Ive been looking at the principles of fan out of messages as described in the google IO "building scalable complex apps"
In it it suggests that using a list property for say a list of receivers is a scalable solution.
In this scenario how does one update the list property so that contention issues don't step in, If the app is handling many users
using the IO example:
class message(db.model)
sender=db.stringproperty()
body=db.textproperty()
class messageindex(db.model)
receivers=db.stringlistproperty()
To adapt their example i would also need
class followers(db.model)
user=db.userproperty()
followers=db.stringlistproperty()
(code is just in for an example and is not typed correctly - sorry)
The concept is if someone follows you, you add their key to your followers list in the followers model, If you make a message, you store your followers list in the message list - and using a simple query all users get the message - pretty simple stuff.
The issue is updating someone's list of followers. Assuming that some accounts could have millions of followers - if I simply update the entity there are going to be contention issues - one would also need more than one entry as I think there is a limit of like 5000 entries per list. And of course requests may be sent to "add" or "remove" a person. What would be the best way to do this? I was thinking about using the task_queue service. I was thinking about a work model that stores each follow request and triggers a task to run in say 60 seconds. The task gets all the work to be done for a person's followers list - and builds the new list. Not sure how this would work - but it would stop contention issues as only one thread could execute in one min.
Does anyone have any code examples, good advice, or help on how I can do this in a scalable manner - I don't think memcache can be used in the method, as any loss would mean a follow request could be lost.
A:
I have now found the solution for this using a fork-join-queue. There is a post on google IO 2010 - regarding how this is done:
link text
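The core idea of the fork-join-queue approach — let follow/unfollow requests pile up as small work records, then have a single task collapse the whole batch into one rewrite of the followers list — can be sketched without any App Engine plumbing (apply_follow_ops is a made-up name, and the real version would read and write datastore entities):

```python
def apply_follow_ops(followers, ops):
    """Collapse queued ('add' | 'remove', user) work items into a single
    rewrite of the followers list, so only one writer touches the entity."""
    current = set(followers)
    for action, user in ops:
        if action == "add":
            current.add(user)
        elif action == "remove":
            current.discard(user)
    return sorted(current)

queued = [("add", "alice"), ("add", "bob"), ("remove", "alice")]
print(apply_follow_ops(["carol"], queued))  # ['bob', 'carol']
```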
| Google App engine(python) Updating a db.StringListProperty contention/concurrency issues | Ive been looking at the principles of fan out of messages as described in the google IO "building scalable complex apps"
In it it suggests that using a list property for say a list of receivers is a scalable solution.
In this scenario how does one update the list property so that contention issues don't step in, If the app is handling many users
using the IO example:
class message(db.model)
sender=db.stringproperty()
body=db.textproperty()
class messageindex(db.model)
receivers=db.stringlistproperty()
To adapt their example i would also need
class followers(db.model)
user=db.userproperty()
followers=db.stringlistproperty()
(code is just in for an example and is not typed correctly - sorry)
The concept is if someone follows you, you add their key to your followers list in the followers model, If you make a message, you store your followers list in the message list - and using a simple query all users get the message - pretty simple stuff.
The issue is updating someone's list of followers. Assuming that some accounts could have millions of followers - if I simply update the entity there are going to be contention issues - one would also need more than one entry as I think there is a limit of like 5000 entries per list. And of course requests may be sent to "add" or "remove" a person. What would be the best way to do this? I was thinking about using the task_queue service. I was thinking about a work model that stores each follow request and triggers a task to run in say 60 seconds. The task gets all the work to be done for a person's followers list - and builds the new list. Not sure how this would work - but it would stop contention issues as only one thread could execute in one min.
Does anyone have any code examples, good advice, or help on how I can do this in a scalable manner - I don't think memcache can be used in the method, as any loss would mean a follow request could be lost.
| [
"I have now found the solution for this using a fork-join-queue. There is a post on google IO 2010 - regarding how this is done:\nlink text\n"
] | [
1
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0003184411_google_app_engine_python.txt |
Q:
How can I map non-English Windows timezone names to Olsen names in Python?
If I call win32timezone.TimeZoneInfo.local().timeZoneName, it gives me the time zone name in the current locale (for example, on a Japanese machine, it returns u"東京 (標準時)").
I would like to map this name to an Olsen database timezone name for use with pytz. CLDR windowZones.xml helps me to map English names, but can't handle the Japanese name.
How can I convert the name back to English (it should be Tokyo Standard Time in this case)?
A:
dict(win32timezone.TimeZoneInfo._get_indexed_time_zone_keys()) returns exactly the mapping I need from the current locale's name to the English name. The following code solves it:
import win32timezone
win32tz_name = win32timezone.TimeZoneInfo.local().timeZoneName
win32timezone_to_en = dict(win32timezone.TimeZoneInfo._get_indexed_time_zone_keys())
win32timezone_name_en = win32timezone_to_en.get(win32tz_name, win32tz_name)
olsen_name = win32timezones.get(win32timezone_name_en, None)
if not olsen_name:
raise ValueError(u"Could not map win32 timezone name %s (English %s) to Olsen timezone name" % (win32tz_name, win32timezone_name_en))
return pytz.timezone(olsen_name)
It would be nice if this was accessible in the win32timezone.TimeZoneInfo object, though, instead of having to call a private method.
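The lookup chain itself is just two dictionary hops with a fallback. A self-contained sketch with the mapping tables stubbed down to one entry each (in reality they come from _get_indexed_time_zone_keys() and the CLDR windowsZones data, and to_olsen is a made-up name):

```python
# Stubbed mapping tables -- the real ones are much larger.
win32timezone_to_en = {u"東京 (標準時)": u"Tokyo Standard Time"}
win32_to_olsen = {u"Tokyo Standard Time": u"Asia/Tokyo"}

def to_olsen(win32tz_name):
    # Fall back to the name itself, in case it is already the English one.
    name_en = win32timezone_to_en.get(win32tz_name, win32tz_name)
    olsen = win32_to_olsen.get(name_en)
    if olsen is None:
        raise ValueError(u"Could not map win32 timezone name %r" % win32tz_name)
    return olsen

print(to_olsen(u"東京 (標準時)"))          # Asia/Tokyo
print(to_olsen(u"Tokyo Standard Time"))  # Asia/Tokyo
```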
| How can I map non-English Windows timezone names to Olsen names in Python? | If I call win32timezone.TimeZoneInfo.local().timeZoneName, it gives me the time zone name in the current locale (for example, on a Japanese machine, it returns u"東京 (標準時)").
I would like to map this name to an Olsen database timezone name for use with pytz. CLDR windowZones.xml helps me to map English names, but can't handle the Japanese name.
How can I convert the name back to English (it should be Tokyo Standard Time in this case)?
| [
"dict(win32timezone.TimeZoneInfo._get_indexed_time_zone_keys()) returns exactly the mapping I need from the current locale's name to the English name. The following code solves it:\n import win32timezone\n win32tz_name = win32timezone.TimeZoneInfo.local().timeZoneName\n win32timezone_to_en = dict(win32timezone.T... | [
3
] | [] | [] | [
"locale",
"python",
"pywin32",
"timezone",
"winapi"
] | stackoverflow_0003306787_locale_python_pywin32_timezone_winapi.txt |
Q:
amara and django
I am trying to do webservice calls with django views using Amara library.
However anytime I do import amara (by simply importing it!) and call a django view with it imported, I get such errors:
Environment:
Request Method: GET
Request URL: http://127.0.0.1:4444/test
Django Version: 1.2.1
Python Version: 2.6.5
Installed Applications:
['django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.admin',
'azula.epgdb',
'django_extensions']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware')
Traceback:
File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py" in get_response
80. response = middleware_method(request)
File "/usr/local/lib/python2.6/dist-packages/django/middleware/common.py" in process_request
58. _is_valid_path("%s/" % request.path_info, urlconf)):
File "/usr/local/lib/python2.6/dist-packages/django/middleware/common.py" in _is_valid_path
143. urlresolvers.resolve(path, urlconf)
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in resolve
301. return get_resolver(urlconf).resolve(path)
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in resolve
216. sub_match = pattern.resolve(new_path)
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in resolve
123. return self.callback, args, kwargs
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in _get_callback
129. self._callback = get_callable(self._callback_str)
File "/usr/local/lib/python2.6/dist-packages/django/utils/functional.py" in wrapper
124. result = func(*args)
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in get_callable
56. lookup_view = getattr(import_module(mod_name), func_name)
File "/usr/local/lib/python2.6/dist-packages/django/utils/importlib.py" in import_module
35. __import__(name)
File "/home/uluc/azula/epgdb/views.py" in <module>
4. from azula.epgdb.utils import EventSet
File "/home/uluc/azula/epgdb/utils.py" in <module>
6. import amara
File "/usr/lib/pymodules/python2.6/amara/__init__.py" in <module>
11. import binderytools
File "/usr/lib/pymodules/python2.6/amara/binderytools.py" in <module>
13. from Ft.Xml import InputSource
File "/usr/lib/python2.6/dist-packages/Ft/Xml/InputSource.py" in <module>
355. DefaultFactory = InputSourceFactory(catalog=GetDefaultCatalog())
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Catalog.py" in GetDefaultCatalog
579. catalog = Catalog(uri, quiet)
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Catalog.py" in __init__
95. self._parseXmlCat(data)
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Catalog.py" in _parseXmlCat
372. from Ft.Xml.Sax import CreateParser
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Sax.py" in <module>
242. class SaxPrinter(ContentHandler):
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Sax.py" in SaxPrinter
247. def __init__(self, printer=XmlPrinter(sys.stdout, 'utf-8')):
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Lib/XmlPrinter.py" in __init__
39. self.stream = sw = cStreamWriter.StreamWriter(stream, encoding)
Exception Type: TypeError at /test
Exception Value: argument must have 'write' attribute
How can this be solved? I tried this under both Debian Lenny and Ubuntu 10.04, with the Django SVN version and Amara 1.
I am suspicious of some character encoding problem.
A:
Putting "WSGIRestrictStdout Off" in my Apache config fixed the issue, as described here.
| amara and django | I am trying to do webservice calls with django views using Amara library.
However anytime I do import amara (by simply importing it!) and call a django view with it imported, I get such errors:
Environment:
Request Method: GET
Request URL: http://127.0.0.1:4444/test
Django Version: 1.2.1
Python Version: 2.6.5
Installed Applications:
['django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.admin',
'azula.epgdb',
'django_extensions']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware')
Traceback:
File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py" in get_response
80. response = middleware_method(request)
File "/usr/local/lib/python2.6/dist-packages/django/middleware/common.py" in process_request
58. _is_valid_path("%s/" % request.path_info, urlconf)):
File "/usr/local/lib/python2.6/dist-packages/django/middleware/common.py" in _is_valid_path
143. urlresolvers.resolve(path, urlconf)
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in resolve
301. return get_resolver(urlconf).resolve(path)
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in resolve
216. sub_match = pattern.resolve(new_path)
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in resolve
123. return self.callback, args, kwargs
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in _get_callback
129. self._callback = get_callable(self._callback_str)
File "/usr/local/lib/python2.6/dist-packages/django/utils/functional.py" in wrapper
124. result = func(*args)
File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py" in get_callable
56. lookup_view = getattr(import_module(mod_name), func_name)
File "/usr/local/lib/python2.6/dist-packages/django/utils/importlib.py" in import_module
35. __import__(name)
File "/home/uluc/azula/epgdb/views.py" in <module>
4. from azula.epgdb.utils import EventSet
File "/home/uluc/azula/epgdb/utils.py" in <module>
6. import amara
File "/usr/lib/pymodules/python2.6/amara/__init__.py" in <module>
11. import binderytools
File "/usr/lib/pymodules/python2.6/amara/binderytools.py" in <module>
13. from Ft.Xml import InputSource
File "/usr/lib/python2.6/dist-packages/Ft/Xml/InputSource.py" in <module>
355. DefaultFactory = InputSourceFactory(catalog=GetDefaultCatalog())
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Catalog.py" in GetDefaultCatalog
579. catalog = Catalog(uri, quiet)
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Catalog.py" in __init__
95. self._parseXmlCat(data)
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Catalog.py" in _parseXmlCat
372. from Ft.Xml.Sax import CreateParser
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Sax.py" in <module>
242. class SaxPrinter(ContentHandler):
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Sax.py" in SaxPrinter
247. def __init__(self, printer=XmlPrinter(sys.stdout, 'utf-8')):
File "/usr/lib/python2.6/dist-packages/Ft/Xml/Lib/XmlPrinter.py" in __init__
39. self.stream = sw = cStreamWriter.StreamWriter(stream, encoding)
Exception Type: TypeError at /test
Exception Value: argument must have 'write' attribute
How can this be solved? I tried this under both Debian Lenny and Ubuntu 10.04, with the Django SVN version and Amara 1.
I am suspicious of some character encoding problem.
| [
"Putting \"WSGIRestrictStdout Off\" in my Apache config fixed the issue, as described here.\n"
] | [
2
] | [] | [] | [
"amara",
"django",
"python"
] | stackoverflow_0003306760_amara_django_python.txt |
Q:
If x is list, why does x += "ha" work, while x = x + "ha" throws an exception?
From what little I know, + op for lists only requires the 2nd operand to be iterable, which "ha" clearly is.
In code:
>>> x = []
>>> x += "ha"
>>> x
['h', 'a']
>>> x = x + "ha"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can only concatenate list (not "str") to list
A:
Using += with a list is like calling extend, not +.
You can call extend with an iterable.
You can only use + with another list.
I can only guess why this decision was made, but I imagine it is for performance reasons. Calling + results in a new object being created and all items being copied, whereas extend can use free space in the existing list object saving a copy in some cases.
Another side-effect of this decision is that if you write x += y other references to the list will see the change but if you use x = x + y then they will not. This is demonstrated below:
>>> x = ['a','b']
>>> y = ['c', 'd']
>>> z = x
>>> x += y
>>> z
['a', 'b', 'c', 'd']
>>> x = ['a','b']
>>> y = ['c', 'd']
>>> z = x
>>> x = x + y
>>> z
['a', 'b']
References
Python source code for list.
Source code for +=:
static PyObject *
list_inplace_concat(PyListObject *self, PyObject *other)
{
PyObject *result;
result = listextend(self, other);
if (result == NULL)
return result;
Py_DECREF(result);
Py_INCREF(self);
return (PyObject *)self;
}
Source code for +:
static PyObject *
list_concat(PyListObject *a, PyObject *bb)
{
Py_ssize_t size;
Py_ssize_t i;
PyObject **src, **dest;
PyListObject *np;
if (!PyList_Check(bb)) {
PyErr_Format(PyExc_TypeError,
"can only concatenate list (not \"%.200s\") to list",
bb->ob_type->tp_name);
return NULL;
}
// etc ...
A:
You're thinking about it backwards. You're asking why x = x + 'ha' throws an exception, given that x += 'ha' works. Really, the question is why x += 'ha' works at all.
Everyone agrees (I hope) that 'abc' + 'ha' and [1, 2, 3] + ['h', 'a'] should work. And in these cases, overloading += to do in-place modification seems reasonable.
The language designers decided that [1, 2, 3] + 'ha' shouldn't, because you're mixing different types. And that seems reasonable as well.
So the question is why they decided to allow mixing different types in the case of x += 'ha'. In this case, I imagine there are a couple reasons:
It's a convenient shorthand
It's obvious what happens (you append each of the items in the iterable to x)
In general, Python tries to let you do what you want, but where there's ambiguity, it tends to force you to be explicit.
A:
When defining operators, there are two different "add" operators: One is called __add__, the other __iadd__. The latter one is for in-place additions with +=, the other one is the regular + operator. http://docs.python.org/reference/datamodel.html has more infos on that.
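A minimal sketch of the two hooks, using a hypothetical Bag class that mimics list's asymmetry — __add__ insists on a matching type, while __iadd__ accepts any iterable:

```python
class Bag(object):
    def __init__(self, items):
        self.items = list(items)
    def __add__(self, other):
        # strict, like list.__add__: only another Bag will do
        if not isinstance(other, Bag):
            return NotImplemented   # lets Python raise TypeError
        return Bag(self.items + other.items)
    def __iadd__(self, other):
        # permissive, like list.__iadd__ / list.extend: any iterable
        self.items.extend(other)
        return self

b = Bag([1, 2])
b += "ha"            # works: extends in place with each character
print(b.items)       # [1, 2, 'h', 'a']
try:
    b = b + "ha"     # __add__ returns NotImplemented, so Python raises
except TypeError:
    print("can only add another Bag")
```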
| If x is list, why does x += "ha" work, while x = x + "ha" throws an exception? | From what little I know, + op for lists only requires the 2nd operand to be iterable, which "ha" clearly is.
In code:
>>> x = []
>>> x += "ha"
>>> x
['h', 'a']
>>> x = x + "ha"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can only concatenate list (not "str") to list
| [
"Using += with a list is like calling extend, not +.\n\nYou can call extend with an iterable.\nYou can only use + with another list.\n\nI can only guess why this decision was made, but I imagine it is for performance reasons. Calling + results in a new object being created and all items being copied, whereas extend... | [
31,
7,
5
] | [] | [] | [
"list",
"operators",
"python"
] | stackoverflow_0003216706_list_operators_python.txt |
Q:
Django manage.py doesn't work with IPython
I am on MacOSX Snow Leopard and I'm using python 2.6.5 installed with macports. I'm inside a virtualenv.
I can't run python manage.py shell after installing IPython but I can run IPython standalone.
I figured out that the following line is what causes the issue:
(status, result) = commands.getstatusoutput("otool -L %s | grep libedit" % _rl.__file__ )
This happens because, for some strange reason, the method getstatusoutput is not available when I launch python manage.py shell but it's available when I launch ipython. I can import the commands module in both cases.
I tried looking at the sys.path during the execution of both, but there are no differences.
A:
Are they the same commands? Try print commands.__file__ in each. You may find that your project has a module called "commands" which shadows the stdlib module.
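Module shadowing is easy to reproduce and detect. A sketch using json as an arbitrary stdlib stand-in (the Python 2 commands module no longer exists in Python 3), showing that __file__ reveals which copy actually got imported:

```python
import os
import sys
import tempfile

# Create a directory containing a module that shadows a stdlib name.
shadow_dir = tempfile.mkdtemp()
with open(os.path.join(shadow_dir, "json.py"), "w") as f:
    f.write("shadowed = True\n")

sys.path.insert(0, shadow_dir)
sys.modules.pop("json", None)    # forget any previously imported json
import json
shadow_won = json.__file__.startswith(shadow_dir)
print(shadow_won)                # True -- the local file shadowed the stdlib

# Undo the damage so later imports see the real stdlib module again.
sys.path.remove(shadow_dir)
sys.modules.pop("json", None)
```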
| Django manage.py doesn't work with IPython | I am on MacOSX Snow Leopard and I'm using python 2.6.5 installed with macports. I'm inside a virtualenv.
I can't run python manage.py shell after installing IPython but I can run IPython standalone.
I figured out that the following line is what causes the issue:
(status, result) = commands.getstatusoutput("otool -L %s | grep libedit" % _rl.__file__ )
This happens because for a strange reason, the method getstatusoutput is not available when i launch python manage.py shell but it's available when I launch ipython. I can import the commands module in both cases.
I tried looking at the sys.path during the execution of both, but there are no differences.
| [
"Are they the same commands? Try print commands.__file__ in each. You may find that your project has a module called \"commands\" which shadows the stdlib module.\n"
] | [
0
] | [] | [] | [
"django",
"ipython",
"python"
] | stackoverflow_0003307431_django_ipython_python.txt |
Q:
how to send a post request and get the response in ruby
how to send a post request and get the response in ruby
request is,
name=$name_val
URL is http://example.com/a/2
how do i do this in python or ruby?
A:
To do this in python:
import urllib
data = urllib.urlencode({
"fieldName1" : "Val1",
"fieldName2" : "Val2",
"fieldName3" : "Val3"
})
f = urllib.urlopen("http://example.com/a/2", data)
html = f.read() # this is the response
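For completeness, the same call on Python 3, where urllib.urlopen no longer exists. This sketch only builds the request rather than sending it, so no network is touched:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Python 3 equivalent of the snippet above.
data = urlencode({
    "fieldName1": "Val1",
    "fieldName2": "Val2",
}).encode("ascii")
req = Request("http://example.com/a/2", data=data)
print(req.get_method())   # POST -- supplying a body switches GET to POST
# html = urlopen(req).read()   # this would send it and read the response
```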
| how to send a post request and get the response in ruby | how to send a post request and get the response in ruby
request is,
name=$name_val
URL is http://example.com/a/2
how do i do this in python or ruby?
| [
"To do this in python:\nimport urllib\ndata = urllib.urlencode({\n \"fieldName1\" : \"Val1\", \n \"fieldName2\" : \"Val2\", \n \"fieldName3\" : \"Val3\"\n})\nf = urllib.urlopen(\"http://example.com/a/2\", data)\nhtml = f.read() # this is the response\n\n"
] | [
1
] | [] | [] | [
"http",
"python",
"ruby"
] | stackoverflow_0003307601_http_python_ruby.txt |
Q:
On the google app engine, how do I get rid of the 'Only ancestor queries are allowed inside transactions' error?
I am having trouble with one specific query. It needs to run in a transaction, and it does, but whenever the app engine executes my query I get the following error:
Only ancestor queries are allowed
inside transactions
You'll see that my query DOES have an ancestor. So what is the app engine really complaining about?
q = db.Query(EventBase)
q.ancestor = db.Key.from_path(aggrRootKind, aggrRootKeyName)
q.filter('undone =','False')
q.order('-version')
qResult = q.fetch(1, 0)
A:
This line:
q.ancestor = db.Key.from_path(aggrRootKind, aggrRootKeyName)
should read:
q.ancestor(db.Key.from_path(aggrRootKind, aggrRootKeyName))
ancestor() is a method, and in the first snippet, you're replacing it, rather than calling it.
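The failure mode is easy to reproduce with a toy class (a sketch, not the real App Engine Query API): assignment silently replaces the bound method, so no ancestor is ever registered and the query is no longer an ancestor query.

```python
class Query(object):
    """Toy stand-in for db.Query -- ancestor is a method, not an attribute."""
    def __init__(self):
        self._ancestor = None
    def ancestor(self, key):
        self._ancestor = key
        return self

broken = Query()
broken.ancestor = "root-key"        # oops: shadows the bound method
print(callable(broken.ancestor))    # False -- the method is gone, and
                                    # no ancestor was actually recorded

fixed = Query()
fixed.ancestor("root-key")          # correct: calling the method sets it
print(fixed._ancestor)              # root-key
```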
| On the google app engine, how do I get rid of the 'Only ancestor queries are allowed inside transactions' error? | I am having trouble with one specific query. It needs to run in a transaction, and it does, but whenever the app engine executes my query I get the following error:
Only ancestor queries are allowed
inside transactions
You'll see that my query DOES have an ancestor. So what is the app engine really complaining about?
q = db.Query(EventBase)
q.ancestor = db.Key.from_path(aggrRootKind, aggrRootKeyName)
q.filter('undone =','False')
q.order('-version')
qResult = q.fetch(1, 0)
| [
"This line:\nq.ancestor = db.Key.from_path(aggrRootKind, aggrRootKeyName)\n\nshould read:\nq.ancestor(db.Key.from_path(aggrRootKind, aggrRootKeyName))\n\nancestor() is a method, and in the first snippet, you're replacing it, rather than calling it.\n"
] | [
5
] | [] | [] | [
"google_app_engine",
"pydev",
"python"
] | stackoverflow_0003305821_google_app_engine_pydev_python.txt |
Q:
Python converting - [] to London
Excuse my total newbie question but how do I convert:
[<Location: London>] or [<Location: Edinburgh>, <Location: London>] etc
into:
'London' or 'Edinburgh, london'
Some background info to put it in context:
Models.py:
class Location(models.Model):
place = models.CharField(max_length=100)
def __unicode__(self):
return self.place
class LocationForm(ModelForm):
class Meta:
model = Location
forms.py
class BookingForm(forms.Form):
place = forms.ModelMultipleChoiceField(queryset=Location.objects.all(), label='Venue/Location:', required=False)
views.py
def booking(request):
if request.method == 'POST':
form = BookingForm(request.POST)
if form.is_valid():
place = form.cleaned_data['place']
recipients.append(sender)
message = '\nVenue or Location: ' + str(place)
send_mail('Thank you for booking', message, sender, recipients)
)
A:
You're printing the QueryList instead of the individual elements.
u', '.join(x.place for x in Q)
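Outside Django, the same pattern can be seen with a plain class (a sketch whose __repr__ mirrors what the shell prints for the model):

```python
class Location(object):
    """Plain-Python stand-in for the Django model above."""
    def __init__(self, place):
        self.place = place
    def __repr__(self):
        return "<Location: %s>" % self.place

results = [Location("Edinburgh"), Location("London")]
print(results)                               # [<Location: Edinburgh>, <Location: London>]
print(", ".join(x.place for x in results))   # Edinburgh, London
```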
A:
Override the __repr__ method if you want to change the way a Django model is printed in the shell.
A:
If it's a result from a query you're printing there, try
[x.name for x in result]
if name is the attribute containing the location's name.
A:
You could do that with a regexp. Given your first example :
import re
s = "[<Location: London>]"
m = re.search("Location: (.*)>", s)
print m.group(1)
London
| Python converting - [] to London | Excuse my total newbie question but how do I convert:
[<Location: London>] or [<Location: Edinburgh>, <Location: London>] etc
into:
'London' or 'Edinburgh, london'
Some background info to put it in context:
Models.py:
class Location(models.Model):
place = models.CharField(max_length=100)
def __unicode__(self):
return self.place
class LocationForm(ModelForm):
class Meta:
model = Location
forms.py
class BookingForm(forms.Form):
place = forms.ModelMultipleChoiceField(queryset=Location.objects.all(), label='Venue/Location:', required=False)
views.py
def booking(request):
    if request.method == 'POST':
        form = BookingForm(request.POST)
        if form.is_valid():
            place = form.cleaned_data['place']
            recipients.append(sender)
            message = '\nVenue or Location: ' + str(place)
            send_mail('Thank you for booking', message, sender, recipients)
)
| [
"You're printing the QueryList instead of the individual elements.\nu', '.join(x.place for x in Q)\n\n",
"Override the __repr__ method if you want to change the way a Django model is printed in the shell.\n",
"If it's a result from a query you're printing there, try \n[x.name for x in result]\n\nif name is the ... | [
7,
1,
1,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003307646_django_python.txt |
Q:
Improve fetch time and this function's performance
I am searching the Final model (defined below) with a query which filters on its name property. This query is taking about 2200ms to execute on the development server. How can I speed it up? Here is an AppStats screenshot.
I was filtering on the created field too, but this was taking in excess of 10000ms so I've removed that part of the query for now.
class Final(db.Model):
    name = db.StringProperty()  # 7 characters long
    author = db.StringProperty()
    rating = db.FloatProperty()
    created = db.DateTimeProperty()

# this is the important part of the request (AppStats shows that these two
# queries take about 2200ms each).
query = Final.all()
query2 = Final.all()
query.filter('name = ', link1)
query2.filter('name = ', link2)
aa = query.fetch(10000)
bb = query2.fetch(10000)
A:
While David's suggestions are good ones, optimizing for speed on the development server is probably a bad idea. The development server's performance does not reflect that of the production server, and optimizations based on development server runtime may not fare well in production.
In general, you can assume that the performance of the development server will decrease as records are added, whilst this is definitely not the case in production, where your query runtime depends only on the size of the result set. If you can reduce the size of your sample dataset in development, that may be your best option for speeding development up.
A:
A few ways you might be able to speed up this query:
Use the SQLite backend for the development server (it might be faster).
Can you store integer IDs instead of string IDs for name? This might make the entities smaller and thus take less time to transfer and deserialize them. It is also easier to check integer equality than string equality so the filter operation might be faster for the datastore to perform.
If name is large, you could potentially save some time by moving the author name and rating into a separate child model. Then you could use an ancestor query to fetch the relevant child models - thus you will save transfer and deserialization time by only fetching the fields you need.
| Improve fetch time and this function's performance | I am searching the Final model (defined below) with a query which filters on its name property. This query is taking about 2200ms to execute on the development server. How can I speed it up? Here is an AppStats screenshot.
I was filtering on the created field too, but this was taking in excess of 10000ms so I've removed that part of the query for now.
class Final(db.Model):
    name = db.StringProperty()  # 7 characters long
    author = db.StringProperty()
    rating = db.FloatProperty()
    created = db.DateTimeProperty()

# this is the important part of the request (AppStats shows that these two
# queries take about 2200ms each).
query = Final.all()
query2 = Final.all()
query.filter('name = ', link1)
query2.filter('name = ', link2)
aa = query.fetch(10000)
bb = query2.fetch(10000)
| [
"While David's suggestions are good ones, optimizing for speed on the development server is probably a bad idea. The development server's performance does not reflect that of the production server, and optimizations based on development server runtime may not fare well in production.\nIn general, you can assume tha... | [
1,
0
] | [] | [] | [
"google_app_engine",
"performance",
"python"
] | stackoverflow_0003301607_google_app_engine_performance_python.txt |
Q:
Question on python xlrd
How can I find the total number of columns used in an Excel sheet, using the approach from the following link?
http://scienceoss.com/read-excel-files-from-python/
Thanks..
A:
The Sheet class has an ncols member which indicates the number of columns
A:
Here are the first 6 lines in the "Quick Start" section of xlrd's README.html:
import xlrd
book = xlrd.open_workbook("myfile.xls")
print "The number of worksheets is", book.nsheets
print "Worksheet name(s):", book.sheet_names()
sh = book.sheet_by_index(0)
print sh.name, sh.nrows, sh.ncols
You can get a tutorial via this link. Likewise the latest SVN commit of the docs. If you installed xlrd from a source distribution or by running a Windows installer, you should already have the docs.
| Question on python xlrd | How can I find the total number of columns used in an Excel sheet, using the approach from the following link?
http://scienceoss.com/read-excel-files-from-python/
Thanks..
| [
"The Sheet class has an ncols member which indicates the number of columns\n",
"Here are the first 6 lines in the \"Quick Start\" section of xlrd's README.html:\nimport xlrd\nbook = xlrd.open_workbook(\"myfile.xls\")\nprint \"The number of worksheets is\", book.nsheets\nprint \"Worksheet name(s):\", book.sheet_nam... | [
8,
8
] | [] | [] | [
"python",
"xlrd"
] | stackoverflow_0003307912_python_xlrd.txt |
Q:
Python access to object as a function (not __call__)
I want to do the following: I have a container class Container; it has an
attribute attr, which refers to another class OtherClass.
class OtherClass:
    def __init__(self, value):
        self._value = value
    def default(self):
        return self._value
    def another(self):
        return self._value ** 2
    def as_str(self):
        return 'String: %s' % self._value

class Container:
    def __init__(self, attr):
        self.attr = OtherClass(attr)
I want to:
x = Container(2)
x.attr # when accessing the attribute - return value from default method
2
x.attr.another() # but also an attribute can be treated as an object
4
x.attr.as_str()
'String: 2'
How can I do this?
A:
Not sure if this is what you need. Seems like an odd design to me
>>> class OtherClass(int):
...     def __init__(self, value):
...         self._value = value
...     def another(self):
...         return self._value ** 2
...
>>> class Container:
...     def __init__(self, attr):
...         self.attr = OtherClass(attr)
...
>>> x=Container(2)
>>> x.attr
2
>>> x.attr.another()
4
just-because-you-use-classes-doesn't-mean-it's-OO-ly gnibbler
A:
You can't, unless OtherClass overrides __int__() and you then put it through int() to get the integer value.
A:
What is attr meant to be? You can't have it both ways; either it's the return value of some function or it's an instance of OtherClass.
How about making OtherClass inherit from Integer?
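Building on that suggestion, here is a minimal runnable sketch of the int-subclass approach (a sketch of one possible design, not the only way to do it): the attribute behaves like its numeric value while still exposing the extra methods from the question.

```python
class OtherClass(int):
    """An int subclass: compares and prints as its value, but adds methods."""
    def another(self):
        return self ** 2

    def as_str(self):
        return 'String: %s' % int(self)

class Container(object):
    def __init__(self, attr):
        self.attr = OtherClass(attr)

x = Container(2)
print(x.attr)            # 2
print(x.attr.another())  # 4
print(x.attr.as_str())   # String: 2
```

Note that `x.attr` is still an object (an int subclass instance), so it can't "return the default method's value on access" in any deeper sense than behaving like that value.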
| Python access to object as a function (not __call__) | I want to do the following: I have a container class Container; it has an
attribute attr, which refers to another class OtherClass.
class OtherClass:
    def __init__(self, value):
        self._value = value
    def default(self):
        return self._value
    def another(self):
        return self._value ** 2
    def as_str(self):
        return 'String: %s' % self._value

class Container:
    def __init__(self, attr):
        self.attr = OtherClass(attr)
I want to:
x = Container(2)
x.attr # when accessing the attribute - return value from default method
2
x.attr.another() # but also an attribute can be treated as an object
4
x.attr.as_str()
'String: 2'
How can I do this?
| [
"Not sure if this is what you need. Seems like an odd design to me\n>>> class OtherClass(int):\n... def __init__(self, value): \n... self._value = value \n... def another(self): \n... return self._value ** 2 \n... \n>>> class Container: \n... def __init__(self, attr): \n... self.... | [
2,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0003308579_python.txt |
Q:
I need help with messaging and queuing middleware systems for extjs
I developed a system that consists of software and hardware interaction. Basically it's a transaction system where the transaction details are encrypted on a PCI device, then returned to my web-based system, where they are stored in a DB and then displayed using javascript/extjs in the browser. How I do this now is the following:
Transaction encoding process
1. The user selects a transaction from a grid and presses the "encode" button; extjs/js then sends the string to PHP, where it is formatted and inserted into requests[incoming_request]. At this stage I start an extjs taskmanager to do interval checks on the requests[response] column for a result, and I display a "please wait..." message.
2. I have created a python daemon service that monitors the requests table for any transactions to encode. The python daemon then picks up any requests[incoming_request], encodes the request and stores the result in requests[response].
3. The extjs taskmanager then picks up the requests[response] for the transaction, displays it to the user, removes the "please wait..." message and terminates the taskmanager.
Now my question is: Is there a better way of doing this encryption process by using 3rd party Messaging and Queuing middleware systems? If so please help.
Thank You!
A:
I would change it this way:
make PHP block and wait until Python daemon finishes processing the transaction
increase the timeout in the Ext.data.Connection() so it would wait until PHP responds
remove the Ext.MessageBox and handle possible errors in the callback handler in Ext.data.Connection()
I.e. instead of waiting for the transaction to complete in JavaScript (which requires several calls to the webserver) you are now waiting in PHP.
This is assuming you are using Ext.data.Connection() to call the PHP handler - if any other Ext object is used the principle is the same but the timeout setting / completion handling would differ.
| I need help with messaging and queuing middleware systems for extjs | I developed a system that consists of software and hardware interaction. Basically its a transaction system where the transaction details are encrypted on a PCI device then returned back to my web based system where it is stored in a DB then displayed using javascript/extjs in the browser. How I do this now is the following:
Transaction encoding process
1. The user selects a transaction from a grid and presses the "encode" button; extjs/js then sends the string to PHP, where it is formatted and inserted into requests[incoming_request]. At this stage I start an extjs taskmanager to do interval checks on the requests[response] column for a result, and I display a "please wait..." message.
2. I have created a python daemon service that monitors the requests table for any transactions to encode. The python daemon then picks up any requests[incoming_request], encodes the request and stores the result in requests[response].
3. The extjs taskmanager then picks up the requests[response] for the transaction, displays it to the user, removes the "please wait..." message and terminates the taskmanager.
Now my question is: Is there a better way of doing this encryption process by using 3rd party Messaging and Queuing middleware systems? If so please help.
Thank You!
| [
"I would change it this way:\n\nmake PHP block and wait until Python daemon finishes processing the transaction\nincrease the timeout in the Ext.data.Connection() so it would wait until PHP responds\nremove the Ext.MessageBox and handle possible errors in the callback handler in Ext.data.Connection()\n\nI.e. instea... | [
0
] | [] | [] | [
"ajax",
"extjs",
"javascript",
"php",
"python"
] | stackoverflow_0003297110_ajax_extjs_javascript_php_python.txt |
Q:
django: Nonelogout in admin urls
After upgrading to Django 1.2 I have strange urls in my administration panel. They look like this:
http://example.com/admin/Nonelogout/
or
http://example.com/admin/Nonepassword_change/
What might have gone wrong during the migration and what I need to fix?
I have found in django source, that it is caused by root_path, but I have no idea, where I can set it properly or whether should I even do it.
The admin part of my urls.py looks like this:
(r'^admin/doc/', include('django.contrib.admindocs.urls')),
# (r'^admin/(.*)', admin.site.root),
(r'^admin/', include(admin.site.urls)),
A:
If you haven't found an answer for this, here is what I did... (and it is a hack, but it is the only thing that made it work).
In urls.py:
admin.site.root_path = ''
But I would be happy to see someone come out with a better solution.
| django: Nonelogout in admin urls | After upgrading to Django 1.2 I have strange urls in my administration panel. They look like this:
http://example.com/admin/Nonelogout/
or
http://example.com/admin/Nonepassword_change/
What might have gone wrong during the migration and what I need to fix?
I have found in django source, that it is caused by root_path, but I have no idea, where I can set it properly or whether should I even do it.
The admin part of my urls.py looks like this:
(r'^admin/doc/', include('django.contrib.admindocs.urls')),
# (r'^admin/(.*)', admin.site.root),
(r'^admin/', include(admin.site.urls)),
| [
"If you haven't found an answer for this, here is what I did... (and it is a hack, but it is the only thing that made it work).\nIn urls.py:\nadmin.site.root_path = ''\n\nBut I would be happy to see someone come out with a better solution.\n"
] | [
1
] | [] | [] | [
"admin",
"django",
"python",
"url"
] | stackoverflow_0003102817_admin_django_python_url.txt |
Q:
calling class from an external module causes NameError, in IDLE it works fine
I have the following code in a module called code_database.py:
class Entry():
    def enter_data(self):
        self.title = input('enter a title: ')
        print('enter the code, press ctrl-d to end: ')
        self.code = sys.stdin.readlines()
        self.tags = input('enter tags: ')

    def save_data(self):
        with open('entry.pickle2', 'ab') as f:
            pickle.dump(self, f)
In IDLE the class-defined methods work fine:
>>> import code_database
>>> entry = code_database.Entry()
>>> entry.enter_data()
enter a title: a
enter the code, press ctrl-d to end:
benter tags: c
>>> entry.title
'a'
>>> entry.code
['b']
>>> entry.tags
'c'
>>>
However, if I call the module from an external program and try to call the methods, they raise a NameError:
import code_database
entry = code_database.Entry()
entry.enter_data()
entry.save_data()
causes this in the terminal:
$ python testclass.py
enter a title: mo
Traceback (most recent call last):
  File "testclass.py", line 6, in <module>
    entry.enter_data()
  File "/home/mo/python/projects/code_database/code_database.py", line 8, in enter_data
    self.title = input('enter a title: ')
  File "<string>", line 1, in <module>
NameError: name 'mo' is not defined
A:
You're using python-2.x when running your testclass.py file. Your code, however, seems to be written for python-3.x version. In python-2.x you need to use raw_input functions for the same purpose you would use input in python-3.x. You could run
$ python --version
To find out exactly what version you're using by default.
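The underlying issue: on Python 2, input() is equivalent to eval(raw_input()), so typing "mo" makes the interpreter try to evaluate the name mo, hence the NameError. A small compatibility shim (a sketch; the name get_input is made up for illustration) sidesteps this:

```python
import sys

# On Python 2, input() eval()s what the user types; raw_input() returns
# the raw string, which is what Python 3's input() does.
if sys.version_info[0] < 3:
    get_input = raw_input  # noqa: F821 -- only defined on Python 2
else:
    get_input = input

# Then use get_input('enter a title: ') inside enter_data().
```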
| calling class from an external module causes NameError, in IDLE it works fine | I have the following code in a module called code_database.py:
class Entry():
    def enter_data(self):
        self.title = input('enter a title: ')
        print('enter the code, press ctrl-d to end: ')
        self.code = sys.stdin.readlines()
        self.tags = input('enter tags: ')

    def save_data(self):
        with open('entry.pickle2', 'ab') as f:
            pickle.dump(self, f)
In IDLE the class-defined methods work fine:
>>> import code_database
>>> entry = code_database.Entry()
>>> entry.enter_data()
enter a title: a
enter the code, press ctrl-d to end:
benter tags: c
>>> entry.title
'a'
>>> entry.code
['b']
>>> entry.tags
'c'
>>>
However, if I call the module from an external program and try to call the methods, they raise a NameError:
import code_database
entry = code_database.Entry()
entry.enter_data()
entry.save_data()
causes this in the terminal:
$ python testclass.py
enter a title: mo
Traceback (most recent call last):
  File "testclass.py", line 6, in <module>
    entry.enter_data()
  File "/home/mo/python/projects/code_database/code_database.py", line 8, in enter_data
    self.title = input('enter a title: ')
  File "<string>", line 1, in <module>
NameError: name 'mo' is not defined
| [
"You're using python-2.x when running your testclass.py file. Your code, however, seems to be written for python-3.x version. In python-2.x you need to use raw_input functions for the same purpose you would use input in python-3.x. You could run\n$ python --version\n\nTo find out what exactly version you're using b... | [
3
] | [] | [] | [
"call",
"class",
"nameerror",
"python"
] | stackoverflow_0003309843_call_class_nameerror_python.txt |
Q:
Facebook Style Wall / Activity log - General design help
I'm building a Facebook-style activity stream/wall, using Python/App Engine. I have built the activity classes based on the current activity standard being used by Facebook, Yahoo and the like. I have a channel/API system built that will create the various object messages that live on the wall/activity stream.
Where I could use some help is with some design ideas on how the wall should work, as follows:
I am using a fan-out system. When something happens I send a message, making one copy but relating it to all that have subscribed to the channel it is written on. This is all working fine.
My original idea was then to simply use a query to show a wall: get all the messages for a given channel or user. Which is fine.
But now I'm wondering if that is the best way to do it. Since the wall is a historical log, maybe it should only show what has happened recently, say the last 90 days at most, and I would use Ajax to fetch the new messages. Is it better to use the message API I have built to send messages and then use a simple model/class to store the messages that form the wall for each user, almost storing the raw HTML for each post? If each post was stored with its post date and object ref (comment, photo, event), it would be very easy to update/insert new entries in the right places and remove older ones. It would also be easy on the Ajax side to simply listen for a new message, insert it and continue.
I know there have been a lot of posts re the "wall" and "activity stream"; does anyone have any thoughts on whether my ideas are correct or off track?
Thanks
A:
This is pretty much exactly what Brett Slatkin was talking about in his 2009 I/O talk. I'd highly recommend watching it for inspiration, and to see how a member of the App Engine team solves this problem.
A:
Also you can check Opensocial API for design and maybe http://github.com/sahid/gosnippets.
| Facebook Style Wall / Activity log - General design help | I'm building a face-book style activity stream/wall. Using python/app engine. I have build the activity classes based on the current activity standard being used by face-book, yahoo and the likes. i have a Chanel/api system built that will create the various object messages that live on the wall/activity stream.
Where i can use some help is with some design ideas on how the wall should work. as follows:
I am using a fan out system. When something happens i send a message - making one copy but relating it to all that have subscribed to the channel it is written on. This is all working fine.
My original idea was to then simple use a query to show a wall - a simple get all the messages for a given channel or user. Which is fine.
But now I'm wondering if that is the best way to do it. I'm wondering if as the wall is a historical log that really should show "what has happened recently say last 90 days at the most. And that i will use Ajax to fetch the new messages. Is it better to use the message api i have built to send messages and then use a simple model/class/ to store the messages that form the wall for each user. Almost storing the raw HTML for each post. If each post was stored with its post date, object ref (comment,photo,event) it would be very easy to update/insert new entries in the right places and remove older ones. It would also be easy ajax side to simply listen for a new message. Insert it and continue.
I know their have been a lot of posts re "the wall" & "activity" stream does anyone have any thoughts i if my ideas are correct or off track?
Thanks
| [
"This is pretty much exactly what Brett Slatkin was talking about in his 2009 I/O talk. I'd highly recommend watching it for inspiration, and to see how a member of the App Engine team solves this problem.\n",
"Also you can check Opensocial API for design and maybe http://github.com/sahid/gosnippets.\n"
] | [
1,
0
] | [] | [] | [
"facebook",
"feed",
"google_app_engine",
"python"
] | stackoverflow_0003306545_facebook_feed_google_app_engine_python.txt |
Q:
Change texture on 3D object and export 2D image
I would like to generate 2D images of 3D books with custom covers on demand.
Ideally, I'd like to import a 3D model of a book (created by an artist), change the cover texture to the custom one, and export a bitmap image (jpeg, png, etc...). I'm fairly ignorant about 3D graphics, so I'm not sure if that's possible or feasible, but it describes what I want to do. Another method would be fine if it accomplishes something similar. Like maybe I could start with a rendered 2D image and distort the custom cover somehow then put it in the right place over the original image?
It would be best if I could do this using Python, but if that's not possible, I'm open to other solutions.
Any suggestions on how to accomplish this?
A:
Sure it's possible.
Blender would probably be overkill, but you can script blender with python, so that's one solution.
The latter solution is (I'm pretty sure) what most of those e-book cover generators do, which is why they always look a little off.
The PIL is an excellent tool for manipulating images and pixel data, so if you wanted to distort your own, that would be a great tool to look at, and if it goes too slow it's trivial to convert the image to a numpy array so you can get some speedup.
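As an illustration of the "distort the cover" idea, here is a hypothetical PIL/Pillow sketch (the corner coordinates and sizes are made up): a QUAD transform maps four given source corners (NW, SW, SE, NE) onto the corners of the output rectangle, which is the basic operation behind warping a flat cover onto a pre-rendered book shape.

```python
from PIL import Image

# Resolve the QUAD constant on both old PIL and new Pillow (Image.Transform enum).
QUAD = getattr(Image, 'Transform', Image).QUAD

cover = Image.new('RGB', (200, 300), 'white')  # stand-in for the real cover art

# Map a skewed quadrilateral of the source onto a 220x320 output.
warped = cover.transform((220, 320), QUAD,
                         data=(0, 20, 10, 290, 200, 300, 190, 0))
print(warped.size)  # (220, 320)
```

In a real pipeline you would then paste `warped` over the rendered book image at the cover's position, ideally with an alpha mask for soft edges.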
| Change texture on 3D object and export 2D image | I would like to generate 2D images of 3D books with custom covers on demand.
Ideally, I'd like to import a 3D model of a book (created by an artist), change the cover texture to the custom one, and export a bitmap image (jpeg, png, etc...). I'm fairly ignorant about 3D graphics, so I'm not sure if that's possible or feasible, but it describes what I want to do. Another method would be fine if it accomplishes something similar. Like maybe I could start with a rendered 2D image and distort the custom cover somehow then put it in the right place over the original image?
It would be best if I could do this using Python, but if that's not possible, I'm open to other solutions.
Any suggestions on how to accomplish this?
| [
"Sure it's possible.\nBlender would probably be overkill, but you can script blender with python, so that's one solution.\nThe latter solution is (I'm pretty sure) what most of those e-book cover generators do, which is why they always look a little off.\nThe PIL is an excellent tool for manipulating images and pix... | [
1
] | [] | [] | [
"3d",
"python"
] | stackoverflow_0003310017_3d_python.txt |
Q:
python execute remote program
I'm re-writing a legacy Windows application using Python and running on Linux. Initially, the new application needs to call the legacy application so that we have consistent results between customers still using the legacy application and customers using the new application.
So I have a Linux box, sitting right next to a Windows box and I want a process on the Linux box to execute a command on the Windows box and capture the result (synchronously).
My initial thought was to write a web service on the Windows box, but that would mean running a web server on the Windows machine in addition to the legacy application.
So then I thought that using Twisted.Conch might allow me to just execute a command over the network without the additional overhead of running a web server, but I assume there is also overhead with running an ssh server on the Windows machine.
What are some alternative ways that I can initiate a synchronous process on a different machine, using Python, besides a web service or ssh, or is a web service or ssh the best approach? Also, if a web service or ssh are the best routes to pursue, is Twisted something that I should consider using?
A:
I ended up going with SSH + Twisted. On the windows machine I setup freeSSHd as a Windows service. After hacking away trying to get paramiko to work and running into tons of problems getting my public/private keys to work, I decided to try Twisted, and it only took a few minutes to get it working. So, I wrote/stole this based on the Twisted documentation to accomplish what I needed as far as the SSH client side from Linux.
from twisted.conch.ssh import transport
from twisted.internet import defer
from twisted.conch.ssh import keys, userauth
from twisted.conch.ssh import connection
from twisted.conch.ssh import channel, common
from twisted.internet import protocol, reactor

class ClientTransport(transport.SSHClientTransport):
    def verifyHostKey(self, pubKey, fingerprint):
        return defer.succeed(1)

    def connectionSecure(self):
        self.requestService(ClientUserAuth('USERHERE', ClientConnection()))

class ClientUserAuth(userauth.SSHUserAuthClient):
    def getPassword(self, prompt=None):
        return

    def getPublicKey(self):
        return keys.Key.fromString(data=publicKey)

    def getPrivateKey(self):
        return defer.succeed(keys.Key.fromString(data=privateKey))

class ClientConnection(connection.SSHConnection):
    def serviceStarted(self):
        self.openChannel(CatChannel(conn=self))

class CatChannel(channel.SSHChannel):
    name = 'session'

    def channelOpen(self, data):
        data = 'abcdefghijklmnopqrstuvwxyz' * 300
        self.return_data = ''
        self.conn.sendRequest(self, 'exec',
                              common.NS('C:\helloworld %-10000s' % data),
                              wantReply=True)

    def dataReceived(self, data):
        self.return_data += data

    def closed(self):
        print "got %d bytes of data back from Windows" % len(self.return_data)
        print self.return_data
        self.loseConnection()
        reactor.stop()

if __name__ == "__main__":
    factory = protocol.ClientFactory()
    factory.protocol = ClientTransport
    reactor.connectTCP('123.123.123.123', 22, factory)
    reactor.run()
This has been working great!
A:
Another option is paramiko. It's a Python library that implements SSH. I've used it to remotely execute commands and transfer files to windows boxes running an SSH server. The problem is it doesn't properly capture stdout on windows due to the peculiarities of the windows command shell. You may have the same problem with a solution based on twisted.
What kind of results are you trying to capture?
A:
Try QAM with RabbitMQ.
A:
RPC is the right answer IMO.
I think:
using SimpleXMLRPCServer for the windows machine
using xmlrpclib for the linux machine
from the standard library would give you the most freedom. You implement what you need and you don't have to worry about windows APIs, overblown technologies as DCOM, etc., you are in python land, even on the windows machine.
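A minimal sketch of that RPC approach (module names are xmlrpc.server / xmlrpc.client on Python 3, SimpleXMLRPCServer / xmlrpclib on Python 2; the encode function is a made-up stand-in for invoking the legacy Windows program, and both halves run in one process here only so the example is self-contained):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer  # SimpleXMLRPCServer on Python 2
from xmlrpc.client import ServerProxy         # xmlrpclib on Python 2

def encode(data):
    # Stand-in for running the legacy Windows application and
    # capturing its output.
    return data.upper()

# On the Windows box: expose the function over HTTP.
server = SimpleXMLRPCServer(('127.0.0.1', 0), logRequests=False)
server.register_function(encode)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# On the Linux box: call it synchronously and get the result back.
result = ServerProxy('http://127.0.0.1:%d' % port).encode('transaction')
print(result)  # TRANSACTION
server.shutdown()
```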
Sidenote:
Twisted is of course always an excellent option, so don't worry about that; I think Apples CalDav server runs on Twisted too.
A:
I frequently use a little program called winexe, based on Samba.
Here's what the command syntax looks like, and here are some installation options.
| python execute remote program | I'm re-writing a legacy Windows application using Python and running on Linux. Initially, the new application needs to call the legacy application so that we have consistent results between customers still using the legacy application and customers using the new application.
So I have a Linux box, sitting right next to a Windows box and I want a process on the Linux box to execute a command on the Windows box and capture the result (synchronously).
My initial thought was to write a web service on the Windows box, but that would mean running a web server on the Windows machine in addition to the legacy application.
So then I thought that using Twisted.Conch might allow me to just execute a command over the network without the additional overhead of running a web server, but I assume there is also overhead with running an ssh server on the Windows machine.
What are some alternative ways that I can initiate a synchronous process on a different machine, using Python, besides a web service or ssh, or is a web service or ssh the best approach? Also, if a web service or ssh are the best routes to pursue, is Twisted something that I should consider using?
| [
"I ended up going with SSH + Twisted. On the windows machine I setup freeSSHd as a Windows service. After hacking away trying to get paramiko to work and running into tons of problems getting my public/private keys to work, I decided to try Twisted, and it only took a few minutes to get it working. So, I wrote/st... | [
5,
4,
1,
1,
0
] | [] | [] | [
"linux",
"python",
"twisted",
"windows"
] | stackoverflow_0003237558_linux_python_twisted_windows.txt |
Q:
Using Python with WAMP
I use WAMP for my PHP and MySQL development. I want to start learning Python for use in web development. Is there a way for me to use Python within WAMP?
A:
Short answer: yes, use MOD_WSGI (not MOD_PYTHON).
Long answer: yes, what do you want to use it for? Server-side scripting? Code generation?
| Using Python with WAMP | I use WAMP for my PHP and MySQL development. I want to start learning Python for use in web development. Is there a way for me to use Python within WAMP?
| [
"Short answer: yes, use MOD_WSGI (not MOD_PYTHON).\nLong answer: yes, what do you want to use it for? Server-side scripting? Code generation?\n"
] | [
4
] | [] | [] | [
"apache",
"python",
"wamp"
] | stackoverflow_0003310309_apache_python_wamp.txt |
Q:
What is best way for interactive debug in python?
I want to utilize the introspection capability of Python for debugging/development, but cannot find an appropriate tool for this.
I need to drop into a shell (IPython, for example) at a specific position or on a specific event (like an exception), with the shell's locals and globals set to those of the frame.
My own quick hack to illustrate it:
import inspect
from IPython.Shell import IPShellEmbed
def run_debug():
    stack = inspect.stack()
    frame = stack[1][0]
    loc = frame.f_locals
    glob = frame.f_globals
    shell = IPShellEmbed()
    shell(local_ns=loc, global_ns=glob)
With a corresponding run_debug() call from a 'breakpoint' or try/except. But, obviously, this needs a lot of polishing, especially to work with threaded apps properly.
winpdb has breakpoints with a console, but I found no way to quickly run a proper Python shell from it, and eval()/exec() are not very handy for long debugging sessions.
A:
Similar to what you're already doing, there's ipdb. Effectively, it's pdb with ipython's shell (i.e. tab completion, all the various magic functions, etc).
It's actually doing exactly what the little code snippet you posted in your question does, but wraps it into a simple "ipdb.set_trace()" call.
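A sketch of the usage pattern (the ipdb and pdb calls are commented out so the snippet runs non-interactively; uncomment them to actually get a shell in the failing frame):

```python
import pdb  # needed if you uncomment post_mortem below
import sys
import traceback

def buggy():
    x = 1
    # import ipdb; ipdb.set_trace()  # would open an IPython-flavoured pdb here
    return x / 0

try:
    buggy()
except ZeroDivisionError:
    tb = sys.exc_info()[2]
    traceback.print_tb(tb)  # non-interactive: just show where it blew up
    # pdb.post_mortem(tb)   # uncomment for an interactive session in that frame
```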
A:
For personal/education purposes you can use WingIDE - they have some pretty solid debugging capabilities.
Of course if you're just worried about changing values you can always just use raw_input() - but that may not be advanced enough for your needs.
A:
If you run your code from ipython and hit an exception, you can call %debug afterwards to drop into a pdb at the exception. This should give you what you want. Or if you run ipython -pdb, you will automatically be dropped into pdb when an uncaught exception occurs.
| What is best way for interactive debug in python? | I want to utilize introspection capability of python for debugging/development, but cannot find appropriate tool for this.
I need to enter into shell (IPython for example) at specific position or at specific event (like exception), with locals and globals of shell being set to the frame's ones.
My own quick hack to illustrate it:
import inspect
from IPython.Shell import IPShellEmbed
def run_debug():
    stack = inspect.stack()
    frame = stack[1][0]
    loc = frame.f_locals
    glob = frame.f_globals
    shell = IPShellEmbed()
    shell(local_ns=loc, global_ns=glob)
With a corresponding run_debug() call at a 'breakpoint' or in a try/except. But, obviously, this needs a lot of polishing, especially to work properly with threaded apps.
winpdb has breakpoints with console, but I found no way to quickly run proper python shell from it, and eval()/exec() are not very handy for long debug.
| [
"Similar to what you're already doing, there's ipdb. Effectively, it's pdb with ipython's shell (i.e. tab completion, all the various magic functions, etc).\nIt's actually doing exactly what the little code snipped you posted in your question does, but wraps it into a simple \"ipdb.set_trace()\" call.\n",
"For p... | [
3,
2,
0
] | [] | [] | [
"debugging",
"pdb",
"python"
] | stackoverflow_0003309878_debugging_pdb_python.txt |
Q:
How do you use pip, virtualenv and Fabric to handle deployment?
What are your settings, your tricks, and above all, your workflow?
These tools are great but there are still no best practices attached to their usage, so I don't know what is the most efficient way to use them.
Do you use pip bundles or always
download?
Do you set up Apache/Cherokee/MySQL by hand or do
you have a script for that?
Do you put everything in virtualenv and use --no-site-packages?
Do you use one virtualenv for several projects?
What do you use Fabric for (which part of
your deployment do you script)?
Do you put your Fabric scripts on the client or the server?
How do you handle database and media file migration?
Do you ever need a build tool such as SCons?
What are the steps of your deployment? How often do you perform each of them?
etc.
A:
"Best practices" are very context-dependent, so I won't claim my practices are best, just that they work for me. I work on mostly small sites, so no multiple-server deployments, CDNs etc. I do need to support Webfaction shared hosting deployment, as some clients need the cheapest hosting they can find. I do often have to deploy sites multiple times in different environments, so repeatable scripted deploys are critical.
I don't use pip bundles, I install from a requirements.txt. I do run my own chishop server with sdists of everything I need, so there aren't multiple single points of failure in the build process. I also use PIP_DOWNLOAD_CACHE on my development machines to speed up bootstrapping project environments, since most of my projects' requirements overlap quite a bit.
I have Fabric scripts that can automatically set up and configure nginx + Apache/mod_wsgi on an Ubuntu VPS, or configure the equivalent on Webfaction shared hosting, and then deploy the project.
I do not use --no-site-packages with virtualenv, because I prefer having slow-moving compiled packages (Python Imaging Library, psycopg2) installed at the system level; too slow and troublesome to do inside every virtualenv. I have not had trouble with polluted system site-packages, because I generally don't pollute it. And in any case, you can install a different version of something in the virtualenv and it will take precedence.
Each project has its own virtualenv. I have some bash scripts (not virtualenvwrapper, though a lot of people use that and love it) that automate deploying the virtualenv for a given project to a known location and installing that project's requirements into it.
The entire deployment process, from a bare Ubuntu server VPS or Webfaction shared hosting account to a running website, is scripted using Fabric.
Fabric scripts are part of the project source tree, and I run them from a local development checkout.
I have no need for SCons (that I am aware of).
Deployment
At the moment a fresh deployment is split into these steps:
fab staging bootstrap (server setup and initial code deploy)
fab staging enable (enable the Apache/nginx config for this site)
fab staging reload_server (reload Apache/nginx config).
Those can of course be combined into a single command line fab staging bootstrap enable reload_server.
Once these steps are done, updating the deployment with new code is just fab staging deploy.
If I need to roll back an update, fab staging rollback. Nothing particularly magical in the rollback; it just rolls back the code to the last-deployed version and migrates the database to the previous state (this does require recording some metadata about the migration state of the DB post-deploy, I just do that in a text file).
Examples
I haven't used the Fabric scripts described in this answer for a few years, so they aren't maintained at all and I disclaim responsibility for their quality :-) But you can see them at https://bitbucket.org/carljm/django-project-template - in fabfile.py in the repo root, and in the deploy/ subdirectory.
A:
I use fabric to build and deploy my code and assume a system already set up for that. I think that a tool like puppet is more appropriate to automate the installation of things like apache and mysql, though I have yet to really include it in my workflow.
Also, I usually have a different virtualenv per project. They are created from a 'base' install of python where - as Carl pointed out - you can leave some global python libraries.
So in terms of workflow that would be:
puppet to install required services (web server, database, ssh server, ...)
puppet to set up required users and base folders
fabric to create virtualenv for the application
fabric to pip install from requirements.txt
fabric to deploy your app
fabric to deploy configuration files (web server, ...)
| How do you use pip, virtualenv and Fabric to handle deployment? | What are your settings, your tricks, and above all, your workflow?
These tools are great but there are still no best practices attached to their usage, so I don't know what is the most efficient way to use them.
Do you use pip bundles or always
download?
Do you set up Apache/Cherokee/MySQL by hand or do
you have a script for that?
Do you put everything in virtualenv and use --no-site-packages?
Do you use one virtualenv for several projects?
What do you use Fabric for (which part of
your deployment do you script)?
Do you put your Fabric scripts on the client or the server?
How do you handle database and media file migration?
Do you ever need a build tool such as SCons?
What are the steps of your deployment? How often do you perform each of them?
etc.
| [
"\"Best practices\" are very context-dependent, so I won't claim my practices are best, just that they work for me. I work on mostly small sites, so no multiple-server deployments, CDNs etc. I do need to support Webfaction shared hosting deployment, as some clients need the cheapest hosting they can find. I do ofte... | [
79,
9
] | [] | [] | [
"deployment",
"fabric",
"pip",
"python",
"virtualenv"
] | stackoverflow_0002441704_deployment_fabric_pip_python_virtualenv.txt |
Q:
pysqlite - how to save images
I need to save an image file into sqlite database in python. I could not find a solution. How can I do it?
Thanks in advance.
A:
write - cursor.execute('insert into File (id, name, bin) values (?,?,?)', (id, name, sqlite3.Binary(file.read())))
read - file = cursor.execute('select bin from File where id=?', (id,)).fetchone()
if you need to return bin data in web app - return cStringIO.StringIO(file['bin'])
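A self-contained sketch of the same round trip using the stdlib sqlite3 module (table and column names taken from the snippet above; the bytes are a stand-in for real image data):

```python
import sqlite3

data = b'\x89PNG\r\n\x1a\nfake-image-bytes'  # stand-in for real image data
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE File (id INTEGER, name TEXT, bin BLOB)")
# sqlite3.Binary wraps the raw bytes for storage in a BLOB column
conn.execute("INSERT INTO File (id, name, bin) VALUES (?, ?, ?)",
             (1, 'pic.png', sqlite3.Binary(data)))
blob = conn.execute("SELECT bin FROM File WHERE id = ?", (1,)).fetchone()[0]
print(bytes(blob) == data)  # True
```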
A:
Do you have to store the image in the database? I would write the image to the filesystem and store its path in the DB. (You may not be able to do this, depending on your particular case.)
If you absolutely must, look here.
A:
I am not sure if pysqlite is the same as sqlite3, which is currently default in the standard python library. But if you use sqlite3 you can store the image in a buffer object and store that in a blob field in sqlite.
Be aware of the following though:
storing images in a database is frowned upon by some, storing files and path in the database is the other possibility.
make sure you return the proper mime type
A:
It's never a good idea to record raw types in databases. Couldn't you just save the file on the filesystem, and record the path to it in database?
| pysqlite - how to save images | I need to save an image file into sqlite database in python. I could not find a solution. How can I do it?
Thanks in advance.
| [
"write - cursor.execute('insert into File \n(id, name, bin) values (?,?,?)', (id, name, sqlite3.Binary(file.read())))\nread - file = cursor.execute('select bin from File where id=?', (id,)).fetchone()\nif you need to return bin data in web app - return cStringIO.StringIO(file['bin'])\n",
"Do you have to store the... | [
11,
3,
2,
0
] | [] | [] | [
"blob",
"image",
"pysqlite",
"python",
"sqlite"
] | stackoverflow_0003309957_blob_image_pysqlite_python_sqlite.txt |
Q:
Where and how do I set an environmental variable using mod-wsgi and django?
I'm trying to use this env variable to specify the path for my templates and it will probably be easier to do this using git or svn.
A:
In your app.wsgi file do:
import os
os.environ['MY_ENV_VARIABLE'] = 'value'
# e.g. os.environ['MPLCONFIGDIR'] = '/path/to/config/dir'
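On the Django side, settings.py (or any module) can then read the variable back; a minimal sketch using the placeholder name from above (the path is made up for illustration):

```python
import os

os.environ['MY_ENV_VARIABLE'] = '/srv/templates'  # normally set in app.wsgi

# settings.py side:
template_root = os.environ['MY_ENV_VARIABLE']
print(template_root)  # /srv/templates
```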
A:
The reference material is here. But the operative line for your project path (in case that's what you mean) is:
sys.path.append('/path/to/project')
However, your template path is set within your settings.py file:
TEMPLATE_DIRS = ( '/path/to/templatedir1', '/path/to/templatedir2' )
| Where and how do I set an environmental variable using mod-wsgi and django? | I'm trying to use this env variable to specify the path for my templates and it will probably be easier to do this using git or svn.
| [
"In you app.wsgi file do:\nimport os\nos.environ['MY_ENV_VARIABLE'] = 'value'\n# e.g. os.environ['MPLCONFIGDIR'] = '/path/to/config/dir'\n\n",
"The reference material is here. But the operative line for your project path (in case that's what you mean) is:\nsys.path.append('/path/to/project')\n\nHowever, your temp... | [
2,
0
] | [] | [] | [
"django",
"mod_wsgi",
"python"
] | stackoverflow_0003310419_django_mod_wsgi_python.txt |
Q:
How to add a comma to the end of a list efficiently?
I have a list of horizontal names that is too long to open in excel. It's 90,000 names long. I need to add a comma after each name to put into my program. I tried find/replace but it freezes up my computer and crashes. Is there a clever way I can get a comma at the end of each name? My options to work with are python and excel thanks.
A:
If you actually had a Python list, say names, then ','.join(names) would make it into a string with a comma between each name and the following one (if you need one at the end as well, just use + ',' to append one more comma to the result).
Even though you say you have "a list" I suspect you actually have a string instead, for example in a file, where the names are separated by...? You don't tell us, and therefore force us to guess. For example, if they're separated by line-ends (one name per line), your life is easiest:
with open('yourfile.txt') as f:
    result = ','.join(f)
(again, supplement this with a + ',' after the join if you need that, of course). That's because separation by line-ends is the normal default behavior for a text file, of course.
If the separator is something different, you'll have to read the file's contents as a string (with f.read()) and split it up appropriately then join it up again with commas.
For example, if the separator is a tab character:
with open('yourfile.txt') as f:
    result = ','.join(f.read().split('\t'))
As you see, it's not so much worse;-).
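For instance, the same split-and-join can be tried on an in-memory string (made-up names, for illustration):

```python
names = "alice\tbob\tcarol"             # stand-in for f.read() on a tab-separated file
result = ','.join(names.split('\t'))
print(result)  # alice,bob,carol
```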
| How to add a comma to the end of a list efficiently? | I have a list of horizontal names that is too long to open in excel. It's 90,000 names long. I need to add a comma after each name to put into my program. I tried find/replace but it freezes up my computer and crashes. Is there a clever way I can get a comma at the end of each name? My options to work with are python and excel thanks.
| [
"If you actually had a Python list, say names, then ','.join(names) would make into a string with a comma between each name and the following one (if you need one at the end as well, just use + ',' to append one more comma to the result).\nEven though you say you have \"a list\" I suspect you actually have a string... | [
5
] | [] | [] | [
"excel",
"python"
] | stackoverflow_0003311124_excel_python.txt |
Q:
How to make Django work with MySQL Connector/Python?
Has anyone made Django work with myconnpy?
I've checked out http://github.com/rtyler/connector-django-mysql but
the author said it's very outdated and not supported.
If you've managed to make Django work with myconnpy, please share your
experience.
Thanks.
A:
I needed something similar, so I forked the project you linked to and updated it to work (for small values of) with Django 1.2's newer database backend API.
It should be noted that my use case is very simple (read access to a single table on a single database) and I have not tested it with anything more than that.
You can find it at http://github.com/jerith/connector-django-mysql
| How to make Django work with MySQL Connector/Python? | Has anyone made Django work with myconnpy?
I've checked out http://github.com/rtyler/connector-django-mysql but
the author said it's very outdated and not supported.
If you've managed to make Django work with myconnpy, please share your
experience.
Thanks.
| [
"I needed something similar, so I forked the project you linked to and updated it to work (for small values of) with Django 1.2's newer database backend API.\nIt should be noted that my use case is very simple (read access to a single table on a single database) and I have not tested it with anything more than that... | [
1
] | [] | [] | [
"django",
"django_database",
"mysql",
"mysql_connector",
"python"
] | stackoverflow_0002814450_django_django_database_mysql_mysql_connector_python.txt |
Q:
Installing a python + django open source project on my server and making it work?
I'm trying to install an open source python + django project: http://github.com/coulix/Massive-Coupon---Open-source-groupon-clone
on a server to play with. I'm using mediatemple.net grid-server hosting. I uploaded the files in my html folder but I can't seem to run the program.
I'd love to talk this out with someone and figure it out.
Jordan
A:
Did you check out/follow the directions on deploying Django?
| Installing a python + django open source project on my server and making it work? | I'm trying to install an open source python + django project: http://github.com/coulix/Massive-Coupon---Open-source-groupon-clone
on a server to play with. I'm using mediatemple.net grid-server hosting. I uploaded the files in my html folder but I can't seem to run the program.
I'd love to talk this out with someone and figure it out.
Jordan
| [
"Did you check out/follow the directions on deploying Django?\n"
] | [
3
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003311197_django_python.txt |
Q:
Python generator that returns the same thing forever
I'm looking for a standard function that does this:
def Forever(v):
    while True:
        yield v
It seems so trivial I can't believe there isn't a standard version.
For that matter anyone know of a good link to a list of all the standard generator functions?
A:
itertools.repeat(x[, count]) repeats x a finite number of times if told how many times, otherwise repeats forever.
For a general list of all of the itertools generator functions, see here:
http://docs.python.org/library/itertools.html
A:
Your are looking for itertools.repeat(object[, times]):
Make an iterator that returns object over and over again. Runs indefinitely unless the times argument is specified.
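A quick illustration of both forms:

```python
import itertools

finite = list(itertools.repeat('x', 3))      # stops after 3 items
print(finite)  # ['x', 'x', 'x']

forever = itertools.repeat('x')              # never exhausts
first_two = [next(forever) for _ in range(2)]
print(first_two)  # ['x', 'x']
```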
| Python generator that returns the same thing forever | I'm looking for a standard function that does this:
def Forever(v):
    while True:
        yield v
It seems so trivial I can't believe there isn't a standard version.
For that matter anyone know of a good link to a list of all the standard generator functions?
| [
"itertools.repeat(x[, count]) repeats x a finite number of times if told how many times, otherwise repeats forever.\nFor a general list of all of the itertools generator functions, see here:\nhttp://docs.python.org/library/itertools.html\n",
"Your are looking for itertools.repeat(object[, times]):\n\nMake an iter... | [
14,
9
] | [] | [] | [
"generator",
"python"
] | stackoverflow_0003311266_generator_python.txt |
Q:
How to protect text when doing INSERT using MySQLdb
This query is easy, but when the text contains some quotes it doesn't work.
cursor.execute ("INSERT INTO text (text_key, language_id, text) VALUES ('%s', '%s', '%s')" % (key, language_id, text))
What is the best way to protect my text variable ?
A:
What you are doing will lead to a SQL injection vulnerability. Pass the parametrized query as the first argument, and the sequence of values as the second argument.
A:
Always pass the parameters separately from the query:
cursor.execute(
    "INSERT INTO text (text_key, language_id, text) VALUES (%s, %s, %s)",
    (key, language_id, text))
That way the quoting will be handled correctly.
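The same principle can be demonstrated with the stdlib sqlite3 module (which uses ? placeholders instead of MySQLdb's %s); the driver performs the quoting itself, so embedded quotes in the text are harmless:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE text (text_key TEXT, language_id INTEGER, text TEXT)")
# Parameters are passed separately, never interpolated into the SQL string:
conn.execute("INSERT INTO text (text_key, language_id, text) VALUES (?, ?, ?)",
             ('k1', 1, 'it\'s "quoted"'))
row = conn.execute("SELECT text FROM text WHERE text_key = ?", ('k1',)).fetchone()
print(row[0])  # it's "quoted"
```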
| How to protect text when doing INSERT using MySQLdb | This query is easy, but when the text contains some quotes it doesn't work.
cursor.execute ("INSERT INTO text (text_key, language_id, text) VALUES ('%s', '%s', '%s')" % (key, language_id, text))
What is the best way to protect my text variable ?
| [
"What you are doing will lead to a SQL injection vulnerability. Pass the parametrized query as the first argument, and the sequence of values as the second argument.\n",
"Always pass the parameters separately from the query:\ncursor.execute (\n \"INSERT INTO text (text_key, language_id, text) VALUES (%s, %s, %... | [
2,
2
] | [] | [] | [
"mysql",
"python"
] | stackoverflow_0003311417_mysql_python.txt |
Q:
programming pseudocode
I'm trying to make the game KenKen in Python.
I need some help with the pseudocode. What data types are required to store and process the game information as it progresses and completes?
A:
By the sound of your question, babikar, you assumedly have very little knowledge of game programming in any language? If so, I advise that you start by looking for tutorials and books to read about both game programming in Python, and in general - the theory is usually language independent. You cannot expect people here to just give you pseudo-code - that requires time and effort for something that they would get nothing from. Creating pseudo-code can be just as hard, if not harder than, writing the actual code - you are basically asking us to create your game for you.
I suggest this should be your first search.
EDIT:
To expand slightly and meet your question about actual game psuedo-code more - what exactly do you want to make, in terms of a 'KenKen' game. Is it a KenKen solver - a program that you give a KenKen puzzle to and it gives you the answer - or a KenKen generator - one that creates a KenKen puzzle for you to solve (and probably has other features too)?
Here is a post on a KenKen solver in Python - but if you are going to make one of these, I wouldn't read his code or it will just ruin the making for you.
| programming pseudocode | I'm trying to make the game KenKen in Python.
I need some help with the pseudocode. What data types are required to store and process the game information as it progresses and completes?
| [
"By the sound of your question, babikar, you assumedly have very little knowledge of game programming in any language? If so, I advise that you start by looking for tutorials and books to read about both game programming in Python, and in general - the theory is usually language independent. You cannot expect peopl... | [
3
] | [] | [] | [
"python"
] | stackoverflow_0003311457_python.txt |
Q:
Make a line into a string
I got a bot from this website, and I've been having a lot of fun with it. However I wanted to add an away command.
The way it's gonna work is that when someone writes
away [reason]
then it saves the reason, and when someone else types his name, the bot says "He is not available, he left a note '[reason]'" or something like that.
The input I get when someone writes to the bot is
:NickName!email@gtanet-GTMJLJRI83.nextgentel.com
PRIVMSG #mychat :away Im going
fishing.
Using this
line[0][line[0].index(":")+1:line[0].index("!")]
I got the username of the person (in this case, it's NickName), and I need to extract all lines after away, using line[4:] (Im going fishing), and that works like a charm.
But I need to make it into a string, because it can't return a string and a list.
tl;dr I get a list as a value, and I want it to return it with a string, what do I do?
A:
You can do what you want like this:
line = [
    ':NickName!email@gtanet-GTMJLJRI83.nextgentel.com',
    'PRIVMSG #mychat :away Im going',
    'fishing.'
]
away = '\n'.join(line[1:])
needle = ':away '
away = away[away.index(needle) + len(needle):]
print away
Result:
Im going
fishing.
If you want the output on one line use ' '.join(line[1:]) instead of '\n'.join(line[1:]).
A:
If you want to turn a list into a string just use 'x'.join(mylist), where x is the character you want between each element (such as a space, comma, etc.), then you can just concatenate that with your other string.
Though I guess there are two different ways, and I'm not sure which is faster (or if you'll be doing it enough to make a difference)
...
mylist.append(mystr)
return ' '.join(mylist)
and
...
return ' '.join(mylist) + ' ' + mystr
though I think the former way looks better.
edit:
If your list contains non-strings you can feed a generator expression to .join():
return ' '.join(str(_) for _ in mylist)
A:
If it is a list of strings you can just do:
''.join(stringlist)
which concatenates all the strings, making it one string.
See str.join()
| Make a line into a string | I got a bot from this website, and I've been having a lot of fun with it. However I wanted to add an away command.
The way it's gonna work is that when someone writes
away [reason]
then it saves the reason, and when someone else types his name, the bot says "He is not available, he left a note '[reason]'" or something like that.
The input I get when someone writes to the bot is
:NickName!email@gtanet-GTMJLJRI83.nextgentel.com
PRIVMSG #mychat :away Im going
fishing.
Using this
line[0][line[0].index(":")+1:line[0].index("!")]
I got the username of the person (in this case, it's NickName), and I need to extract all lines after away, using line[4:] (Im going fishing), and that works like a charm.
But I need to make it into a string, because it can't return a string and a list.
tl;dr I get a list as a value, and I want it to return it with a string, what do I do?
| [
"You can do what you want like this:\nline = [\n ':NickName!email@gtanet-GTMJLJRI83.nextgentel.com',\n 'PRIVMSG #mychat :away Im going',\n 'fishing.'\n]\n\naway = '\\n'.join(line[1:])\nneedle = ':away '\naway = away[away.index(needle) + len(needle):]\nprint away\n\nResult:\n\nIm going\nfishing.\n\nIf you w... | [
1,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0003311520_python.txt |
Q:
Python function based on Scrapy to crawl entirely a web site
I recently discovered Scrapy which i find very efficient. However, I really don't see how to embed it in a larger project written in python. I would like to create a spider in the normal way but be able to launch it on a given url with a function
start_crawl(url)
which would launch the crawling process on a given domain and stop only when all the pages have been seen.
A:
Scrapy is much more complicated. It runs several processes and uses multi-threading. So in fact there is no way to use it as a normal Python function. Of course you can import the function that starts the crawler and invoke it, but what then? You will have a normal Scrapy process that has taken control of your program.
Probably the best approach here is to run Scrapy as a subprocess of your program and communicate with it using a database or file. You get good separation between your program and the crawler, and solid control over the main process.
| Python function based on Scrapy to crawl entirely a web site | I recently discovered Scrapy which i find very efficient. However, I really don't see how to embed it in a larger project written in python. I would like to create a spider in the normal way but be able to launch it on a given url with a function
start_crawl(url)
which would launch the crawling process on a given domain and stop only when all the pages have been seen.
| [
"Scrapy is much more complicated. It runs several processes and use multi-threating. So in fact there are no way to use it as normal python function. Of course you can import function that starts crawler and invoke it, but what then? You will have normal scrappy process, that has taken control of your program.\nPro... | [
3
] | [] | [] | [
"python",
"scrapy",
"web_crawler"
] | stackoverflow_0003302220_python_scrapy_web_crawler.txt |
Q:
Creating a python priority Queue
I would like to build a priority queue in python in which the queue contains different dictionaries with their priority numbers. So when a "get function" is called, the dictionary with the highest priority(lowest number) will be pulled out of the queue and when "add function" is called, the new dictionary will be added to the queue and sorted based on its priority number.
Please do help out...
Thanks in advance!
A:
Use the heapq module in the standard library.
You don't specify how you wanted to associate priorities with dictionaries, but here's a simple implementation:
import heapq

class MyPriQueue(object):
    def __init__(self):
        self.heap = []

    def add(self, d, pri):
        heapq.heappush(self.heap, (pri, d))

    def get(self):
        pri, d = heapq.heappop(self.heap)
        return d
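A usage sketch (repeating the class so the snippet is self-contained; note that two entries with equal priority would make heapq fall back to comparing the dicts themselves, so real code may want a counter as a tiebreaker):

```python
import heapq

class MyPriQueue(object):
    def __init__(self):
        self.heap = []

    def add(self, d, pri):
        # heapq keeps the smallest (pri, d) tuple at the front
        heapq.heappush(self.heap, (pri, d))

    def get(self):
        pri, d = heapq.heappop(self.heap)
        return d

q = MyPriQueue()
q.add({'task': 'low'}, 5)
q.add({'task': 'urgent'}, 1)
print(q.get())  # {'task': 'urgent'}  (lowest number = highest priority)
```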
A:
This is what I usually present as a side note in some of my patterns talks:
class PriorityQueue(object):
    def __init__(self, key=lambda x: x):
        self.l = []
        self.key = key

    def __len__(self):
        return len(self.l)

    def push(self, obj):
        heapq.heappush(self.l, (self.key(obj), obj))

    def pop(self):
        return heapq.heappop(self.l)[-1]
The OP's requirements are apparently to use operator.itemgetter('priority') as the key argument when instantiating PriorityQueue (needs an import operator at top of module, of course;-).
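Putting that together with the operator.itemgetter('priority') key (class repeated so the snippet is self-contained; distinct priorities avoid heapq comparing the dicts on ties):

```python
import heapq
import operator

class PriorityQueue(object):
    def __init__(self, key=lambda x: x):
        self.l = []
        self.key = key

    def __len__(self):
        return len(self.l)

    def push(self, obj):
        # store (key, obj) so heapq orders by the extracted priority
        heapq.heappush(self.l, (self.key(obj), obj))

    def pop(self):
        return heapq.heappop(self.l)[-1]

pq = PriorityQueue(key=operator.itemgetter('priority'))
pq.push({'priority': 3, 'job': 'later'})
pq.push({'priority': 1, 'job': 'now'})
print(pq.pop()['job'])  # now
```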
A:
You can do this by adding a dict object to the class and searching it inside.
| Creating a python priority Queue | I would like to build a priority queue in python in which the queue contains different dictionaries with their priority numbers. So when a "get function" is called, the dictionary with the highest priority(lowest number) will be pulled out of the queue and when "add function" is called, the new dictionary will be added to the queue and sorted based on its priority number.
Please do help out...
Thanks in advance!
| [
"Use the heapq module in the standard library.\nYou don't specify how you wanted to associate priorities with dictionaries, but here's a simple implementation:\nimport heapq\n\nclass MyPriQueue(object):\n def __init__(self):\n self.heap = []\n\n def add(self, d, pri):\n heapq.heappush(self.heap,... | [
6,
2,
0
] | [] | [] | [
"priority_queue",
"python",
"task_queue"
] | stackoverflow_0003311480_priority_queue_python_task_queue.txt |
Q:
How to default to CLoader in PyYaml.load
Is there any way to have the default loader for PyYaml be CLoader. So instead of having to do
yaml.load(f, Loader=yaml.CLoader)
It would just default to CLoader, so I could do:
yaml.load(f)
A:
I think what you are looking for is functools.partial:
import functools
import yaml

myload = functools.partial(yaml.load, Loader=yaml.CLoader)
myload(f)
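For reference, partial simply pre-binds arguments to any callable; a stand-in function (not PyYAML) keeps this sketch self-contained:

```python
import functools

def load(text, loader='slow'):            # stand-in for yaml.load, for illustration
    return (loader, text)

# fast_load behaves like load with loader already fixed to 'fast'
fast_load = functools.partial(load, loader='fast')
print(fast_load('doc'))  # ('fast', 'doc')
```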
| How to default to CLoader in PyYaml.load | Is there any way to have the default loader for PyYaml be CLoader. So instead of having to do
yaml.load(f, Loader=yaml.CLoader)
It would just default to CLoader, so I could do:
yaml.load(f)
| [
"I think what you are looking for is functools.partial:\nimport functools\n\nmyload=functools.partial(yaml.load,Loader=yaml.CLoader)\nmyload(f)\n\n"
] | [
1
] | [] | [] | [
"python",
"pyyaml",
"yaml"
] | stackoverflow_0003311682_python_pyyaml_yaml.txt |
Q:
Calling method from a class in a different class in python
Let's say I have this code:
class class1(object):
    def __init__(self):
        pass  # don't worry about this

    def parse(self, array):
        pass  # do something with array

class class2(object):
    def __init__(self):
        pass  # don't worry about this

    def parse(self, array):
        pass  # do something else with array
I want to be able to call class1's parse from class2 and vice-versa. I know with c++ this can be done quite easily by doing
class1::parse(array)
How would I do the equivalent in python?
A:
It sounds like you want a static method:
class class1(object):
    @staticmethod
    def parse(array):
        ...
Note that in such cases you leave off the usually-required self parameter, because parse is not a function called on a particular instance of class1.
On the other hand, if you want a method which is still tied to its owner class, you can write a class method, where the first argument is actually the class object:
class class1(object):
    @classmethod
    def parse(cls, array):
        ...
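A small sketch contrasting the two (Parser is a hypothetical class name; the method bodies are made up for illustration):

```python
class Parser(object):  # hypothetical name, for illustration
    @staticmethod
    def parse(array):
        # no self/cls: behaves like a plain function namespaced in the class
        return [x * 2 for x in array]

    @classmethod
    def describe(cls, array):
        # cls is the class object itself
        return '%s got %d items' % (cls.__name__, len(array))

print(Parser.parse([1, 2]))     # [2, 4]
print(Parser.describe([1, 2]))  # Parser got 2 items
```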
| Calling method from a class in a different class in python | Let's say I have this code:
class class1(object):
    def __init__(self):
        pass  # don't worry about this

    def parse(self, array):
        pass  # do something with array

class class2(object):
    def __init__(self):
        pass  # don't worry about this

    def parse(self, array):
        pass  # do something else with array
I want to be able to call class1's parse from class2 and vice-versa. I know with c++ this can be done quite easily by doing
class1::parse(array)
How would I do the equivalent in python?
| [
"It sounds like you want a static method:\nclass class1(object):\n @staticmethod\n def parse(array):\n ...\n\nNote that in such cases you leave off the usually-required self parameter, because parse is not a function called on a particular instance of class1.\nOn the other hand, if you want a method wh... | [
5
] | [] | [] | [
"python"
] | stackoverflow_0003311987_python.txt |
Q:
django URL reverse: When URL reversig a username it fails when username has a '.' literal in it
I didn't expect this to occur [since I didn't know when django changed to allow _ and . in usernames], but when I attempt
{% url feed_user entry.username %}
I will get a 500 error when the username contains a '.'
In this case rob.e as a username will fail.
Any ideas how to deal with this?
A:
The problem will be in whatever regex you are using in your urls.py to match feed_user. Presumably you are using something like r'(?P<username>\w+)/$', which only matches on alphanumeric characters and doesn't match on punctuation.
Instead, use this: r'(?P<username>[\w.]+)/$'
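A quick check that the amended pattern accepts a dotted username:

```python
import re

pattern = re.compile(r'(?P<username>[\w.]+)/$')
m = pattern.match('rob.e/')
print(m.group('username'))  # rob.e
```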
| django URL reverse: When URL reversing a username it fails when username has a '.' literal in it | I didn't expect this to occur [since I didn't know when django changed to allow _ and . in usernames], but when I attempt
{% url feed_user entry.username %}
I will get a 500 error when the username contains a '.'
In this case rob.e as a username will fail.
Any ideas how to deal with this?
| [
"The problem will be in whatever regex you are using in your urls.py to match feed_user. Presumably you are using something like r'(?P<username>\\w+)/$', which only matches on alphanumeric characters and doesn't match on punctuation.\nInstead, use this: r'(?P<username>[\\w.]+)/$'\n"
] | [
4
] | [] | [] | [
"django",
"python",
"reverse",
"url"
] | stackoverflow_0003311973_django_python_reverse_url.txt |
Q:
algorithms issue in uniform cost solution
I am doing the uniform cost search algorithm. I am getting my solution slightly larger than actual. The number of expanded nodes is coming out larger than actual.
I used this algorithm:
Get the initial node and put it into the priority queue. The priority queue itself arranges the nodes according to their cost; the lowest-cost node comes first.
Use a while loop that runs until the queue is empty.
Remove a node from the queue and check whether it is a goal state. If not, check whether it is in the visited list. The visited list is a set containing all the nodes that have already been expanded. If it is not in the visited list, get its successors along with their cost and action.
Calculate the cost up to this node.
If the cost of the successor is larger than the cost of the parent node, add it to the queue and check whether this successor is in the parent dictionary. If not, make the current node its parent so that we can backtrack the path.
Is my algorithm right, or do I need to check something else?
A:
It seems that you're implementing a Dijkstra with priority queue. But since the costs are uniform, BFS would be enough.
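For reference, a minimal uniform-cost search sketch using Python's heapq (the function name and the example graph here are made up for illustration):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """graph: dict mapping node -> list of (neighbor, cost) pairs."""
    frontier = [(0, start, [start])]   # priority queue ordered by path cost
    visited = set()                    # nodes already expanded
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue                   # skip nodes already expanded via a cheaper path
        visited.add(node)
        for neighbor, step_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + step_cost, neighbor, path + [neighbor]))
    return None

graph = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1)], 'C': []}
print(uniform_cost_search(graph, 'A', 'C'))  # (2, ['A', 'B', 'C'])
```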
| algorithms issue in uniform cost solution | I am implementing the uniform cost search algorithm. My solution is coming out slightly larger than the actual one: the number of expanded nodes is larger than it should be.
I used this algorithm:
Get the initial node and put it into the priority queue. The priority queue itself arranges the nodes in it according to cost; the lower-cost node comes first.
Use a while loop that runs until the queue is empty.
Remove a node from the queue and check whether it is a goal state. If not, check whether it is in the visited list. The visited list is a set that holds all the nodes that have already been expanded. If the node is not in the visited list, get its successors along with their cost and action.
Calculate the cost up to this node.
If the cost of a successor is larger than the cost of the parent node, add it into the queue and check whether this successor is in the parent dictionary. If not, record its parent so that we can backtrack the path.
Is my algorithm right, or do I need to check something else?
| [
"It seems that you're implementing a Dijkstra with priority queue. But since the costs are uniform, BFS would be enough. \n"
] | [
2
] | [] | [] | [
"algorithm",
"python",
"search",
"uniform"
] | stackoverflow_0003305713_algorithm_python_search_uniform.txt |
Q:
Python and Zope: Module will import in python but not in zope
I have installed the Image module http://www.pythonware.com/products/pil/. I then try and import it in the python interpreter and successfully so:
>>> import Image
>>>
But when I try to import the module in Zope via DTML page:
DTML page looks like:
<dtml-var import_image>
Which calls this script:
def import_image(self):
import Image
im = Image.open("/home/rv/Desktop/blah.jpg")
return im
I then get this error:
"ImportError: No module named Image" How can there be no module when I can import it in the python interpreter?
EDIT
The python script is in Zope's Extensions folder
A:
Try:
import PIL.Image
rather than:
import Image
Zope has an Image module and you could be encountering a namespace clash.
A:
You can’t just import any module in a Zope python script. Zope has some security restrictions. In your case you need to create an external method in %zope-instance%/Extensions
Or maybe your zope instance cannot find this library because it's running in another python environment. You should check that all parameters are right in %zope-instance%/bin/zopectl
| Python and Zope: Module will import in python but not in zope | I have installed the Image module http://www.pythonware.com/products/pil/. I then try and import it in the python interpreter and successfully so:
>>> import Image
>>>
But when I try to import the module in Zope via DTML page:
DTML page looks like:
<dtml-var import_image>
Which calls this script:
def import_image(self):
import Image
im = Image.open("/home/rv/Desktop/blah.jpg")
return im
I then get this error:
"ImportError: No module named Image" How can there be no module when I can import it in the python interpreter?
EDIT
The python script is in Zope's Extensions folder
| [
"Try:\nimport PIL.Image\n\nrather than:\nimport Image\n\nZope has an Image module and you could be encountering a namespace clash.\n",
"You can’t just import any module in zope python script. Zope has some security restrictions. In your case you need create external method in %zope-instance%/Extensions\n\nOR mayb... | [
2,
1
] | [] | [] | [
"python",
"zope"
] | stackoverflow_0003303333_python_zope.txt |
Q:
Why does an apostrophe in a python docstring break emacs syntax highlighting?
Running GNU Emacs 22.2.1 on Ubuntu 9.04.
When editing python code in emacs, if a docstring contains an apostrophe, emacs highlights all following code as a comment, until another apostrophe is used. Really annoying!
In other words, if I have a docstring like this:
''' This docstring has an apostrophe ' '''
Then all following code is highlighted as a comment. Comments are highlighted as code.
I can escape the docstring to avoid this, like this:
''' This docstring has an escaped apostrophe \' '''
Then highlighting is fine, but then it looks funny and unnecessary to other devs on my team, and I get made fun of for using emacs since "it can't handle apostrophes". ;)
So, anyone know how to make emacs behave better in this regard?
Thanks,
Josh
A:
This appears to work correctly in GNU Emacs 23.2.1. If it's not practical to upgrade, you might be able to copy python.el out of the Emacs 23 source code, or perhaps just the relevant pieces of it (python-quote-syntax, python-font-lock-syntactic-keywords, and the code that uses the latter, I think - I'm not much of an Elisp hacker).
Unfortunately savannah.gnu.org's bzr browser isn't working just now so I can't point you directly at the code, you'll have to download it. See http://www.gnu.org/software/emacs/
A:
It may be an emacs bug, but it could also be on purpose. If you insert doctests in your docstrings, as I often do to explain an API, you might even want full python syntax highlighting inside docstrings.
But it's probably a bug... (the emacs syntax highlighter probably only handles single and double quotes and ignores triple-single and triple-double ones). If so, you should use triple double quotes instead of triple single quotes as in your example (as far as I know most users use triple double quotes for docstrings), and you won't have the problem.
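For example, a triple-double-quoted docstring needs no escaping at all (the function here is just a throwaway illustration):

```python
def greet():
    """This docstring has an apostrophe ' in it and needs no escaping."""
    return "hello"

print(greet.__doc__)
```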
| Why does an apostrophe in a python docstring break emacs syntax highlighting? | Running GNU Emacs 22.2.1 on Ubuntu 9.04.
When editing python code in emacs, if a docstring contains an apostrophe, emacs highlights all following code as a comment, until another apostrophe is used. Really annoying!
In other words, if I have a docstring like this:
''' This docstring has an apostrophe ' '''
Then all following code is highlighted as a comment. Comments are highlighted as code.
I can escape the docstring to avoid this, like this:
''' This docstring has an escaped apostrophe \' '''
Then highlighting is fine, but then it looks funny and unnecessary to other devs on my team, and I get made fun of for using emacs since "it can't handle apostrophes". ;)
So, anyone know how to make emacs behave better in this regard?
Thanks,
Josh
| [
"This appears to work correctly in GNU Emacs 23.2.1. If it's not practical to upgrade, you might be able to copy python.el out of the Emacs 23 source code, or perhaps just the relevant pieces of it (python-quote-syntax, python-font-lock-syntactic-keywords, and the code that uses the latter, I think - I'm not much ... | [
7,
2
] | [] | [] | [
"emacs",
"python"
] | stackoverflow_0003312436_emacs_python.txt |
Q:
Python: Amazon AWS interface?
Googling reveals several Python interfaces to Amazon Web Services (AWS). Which are the most popular, feature-complete, etc?
A:
I suggest boto - It's an active project, and boto's new home is now on GitHub, so you can fork it and add/patch it as desired (not that you need to - it seems very stable).
The author recently got a job that lets him hack on this part time for work, see And Now For Something Completely Different...
Update: Meanwhile the author, Mitch Garnaat, has fortunately joined the AWS team as well, see Big News Regarding Python, boto, and AWS, promoting this de facto AWS SDK for Python to a semi official one:
Building on this model, Mitch Garnaat has also joined the team. Mitch
has been a member of the AWS community for over 6 years and has made
over 2,000 posts to the AWS Developer Forums. He is also the author of
boto, the most popular third-party library for accessing AWS, and of
the Python and AWS Cookbook.
A:
I'm using pyaws. It's not bug-free[*], but seems to work sufficiently well for me. I haven't tried looking for alternatives.
[*] The only bug I've seen was that iterating over the results for a particular query gave me an unexpected IndexError; for some reason the iterator thought it had 11 results instead of the 8 it actually had. I don't really know if the bug is in pyaws or if Amazon managed to return an invalid result for that query.
| Python: Amazon AWS interface? | Googling reveals several Python interfaces to Amazon Web Services (AWS). Which are the most popular, feature-complete, etc?
| [
"I suggest boto - It's an active project, and boto's new home is now on GitHub, so you can fork it and add/patch it as desired (not that you need to - it seems very stable).\nThe author recently got a job that lets him hack on this part time for work, see And Now For Something Completely Different...\nUpdate: Meanw... | [
20,
0
] | [] | [] | [
"amazon_web_services",
"python"
] | stackoverflow_0003216791_amazon_web_services_python.txt |
Q:
Creating a spider using Scrapy, Spider generation error
I just downloaded Scrapy (web crawler) on Windows 32 and have just created a new project folder using the "scrapy-ctl.py startproject dmoz" command in DOS. I then proceeded to create the first spider using the command:
scrapy-ctl.py genspider myspider myspdier-domain.com
but it did not work and returns the error:
Error running: scrapy-ctl.py genspider, Cannot find project settings module in python path: scrapy_settings.
I know I have the path set right (to python26/scripts), but I am having difficulty figuring out what the problem is. I am new to both scrapy and python so there is a good possibility that I have failed to do something important.
Also, I have been using eclipse with the Pydev plugin to edit the code if that might cause some problems.
A:
There is a difference between PATH and PYTHONPATH. Is your PYTHONPATH set correctly? This path is where python looks for packages / modules to import.
A:
use the scrapy-ctl.py in the project's dir. that script will know about that project's settings. the main scrapy-ctl.py doesn't have a clue about that specific project's settings.
A:
Set PYTHONPATH environment variable to python26/scripts.
| Creating a spider using Scrapy, Spider generation error | I just downloaded Scrapy (web crawler) on Windows 32 and have just created a new project folder using the "scrapy-ctl.py startproject dmoz" command in DOS. I then proceeded to create the first spider using the command:
scrapy-ctl.py genspider myspider myspdier-domain.com
but it did not work and returns the error:
Error running: scrapy-ctl.py genspider, Cannot find project settings module in python path: scrapy_settings.
I know I have the path set right (to python26/scripts), but I am having difficulty figuring out what the problem is. I am new to both scrapy and python so there is a good possibility that I have failed to do something important.
Also, I have been using eclipse with the Pydev plugin to edit the code if that might cause some problems.
| [
"There is a difference between PATH and PYTHONPATH. Is your PYTHONPATH set correctly? This path is where python looks for packages / modules to import.\n",
"use the scrapy-ctl.py in the project's dir. that script will know about that project's settings. the main scrapy-ctl.py doesn't have a clue about that specifi... | [
2,
1,
0
] | [] | [] | [
"python",
"scrapy",
"web_crawler"
] | stackoverflow_0002842629_python_scrapy_web_crawler.txt |
Q:
Django, PIP, and Virtualenv
Got this django project that I assume would run on virtualenv. I installed virtualenv through pip install and created the env but when I try to feed the pip requirements file, I got this:
Directory 'tagging' is not installable. File 'setup.py' not found.
Storing complete log in /Users/XXXX/.pip/pip.log
Here's the entry on the log file:
------------------------------------------------------------
/Users/XXXX/Sites/SampleProject/bin/pip run on Wed Jul 21 06:35:02 2010
Directory 'tagging' is not installable. File 'setup.py' not found.
Exception information:
Traceback (most recent call last):
File "/Users/XXXX/Sites/SampleProject/lib/python2.6/site-packages/pip-0.7.2-py2.6.egg/pip/basecommand.py", line 120, in main
self.run(options, args)
File "/Users/XXXX/Sites/SampleProject/lib/python2.6/site-packages/pip-0.7.2-py2.6.egg/pip/commands/install.py", line 158, in run
for req in parse_requirements(filename, finder=finder, options=options):
File "/Users/XXXX/Sites/SampleProject/lib/python2.6/site-packages/pip-0.7.2-py2.6.egg/pip/req.py", line 1395, in parse_requirements
req = InstallRequirement.from_line(line, comes_from)
File "/Users/XXXX/Sites/SampleProject/lib/python2.6/site-packages/pip-0.7.2-py2.6.egg/pip/req.py", line 87, in from_line
% name)
InstallationError: Directory 'tagging' is not installable. File 'setup.py' not found.
Also, here's the requirements file I'm trying to feed:
# to use:
# mkvirtualenv %PROJECT% (or workon %PROJECT%)
# export PIP_RESPECT_VIRTUALENV=true
# pip install -r requirements.txt
# you'll also need:
# mongodb1.1.4
# imagemagick > 6.3.8
# -e svn+http://code.djangoproject.com/svn/django/trunk#egg=django
ipython
ipdb
PIL
django-extensions
django-debug-toolbar
pytz
tagging
Could it be a problem with PIP? I have installed it through easy_install and used it already to install some modules such as fabric and etc. with no problems.
Hope someone could lend a hand :) BTW, here's my local setup: OSX 10.6.4, Python 2.6.1, Django 1.3 alpha. Thanks!
A:
Sounds like you have a tagging/ directory in the directory from which you are running pip, and pip thinks this directory (rather than the django-tagging project on PyPI) is what you want it to install. But there's no setup.py in that directory, so pip doesn't know how to install it.
If the name of the project you wanted to install from PyPI were actually "tagging", you'd need to move or rename the tagging/ directory, or else run pip from a different directory. But it's not; it's actually django-tagging: http://pypi.python.org/pypi/django-tagging So if you just change the entry in your requirements file from "tagging" to "django-tagging," it should work.
All of this is a bug in pip, really: it should assume something is a PyPI project name rather than a local directory, unless the name you give has an actual slash in it or appended to it.
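With that rename, the requirements list above would end with django-tagging instead of tagging (everything else unchanged):

```text
ipdb
PIL
django-extensions
django-debug-toolbar
pytz
django-tagging
```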
A:
Is it possible you've copied the "tagging" directory from this location in the django-tagging source? In that case, you actually need the root from this location which has "tagging" as a sub-directory and a setup.py file. Just checkout from trunk or unzip to a "django-tagging" directory and make sure that your requirements file points to the "django-tagging" directory.
| Django, PIP, and Virtualenv | Got this django project that I assume would run on virtualenv. I installed virtualenv through pip install and created the env but when I try to feed the pip requirements file, I got this:
Directory 'tagging' is not installable. File 'setup.py' not found.
Storing complete log in /Users/XXXX/.pip/pip.log
Here's the entry on the log file:
------------------------------------------------------------
/Users/XXXX/Sites/SampleProject/bin/pip run on Wed Jul 21 06:35:02 2010
Directory 'tagging' is not installable. File 'setup.py' not found.
Exception information:
Traceback (most recent call last):
File "/Users/XXXX/Sites/SampleProject/lib/python2.6/site-packages/pip-0.7.2-py2.6.egg/pip/basecommand.py", line 120, in main
self.run(options, args)
File "/Users/XXXX/Sites/SampleProject/lib/python2.6/site-packages/pip-0.7.2-py2.6.egg/pip/commands/install.py", line 158, in run
for req in parse_requirements(filename, finder=finder, options=options):
File "/Users/XXXX/Sites/SampleProject/lib/python2.6/site-packages/pip-0.7.2-py2.6.egg/pip/req.py", line 1395, in parse_requirements
req = InstallRequirement.from_line(line, comes_from)
File "/Users/XXXX/Sites/SampleProject/lib/python2.6/site-packages/pip-0.7.2-py2.6.egg/pip/req.py", line 87, in from_line
% name)
InstallationError: Directory 'tagging' is not installable. File 'setup.py' not found.
Also, here's the requirements file I'm trying to feed:
# to use:
# mkvirtualenv %PROJECT% (or workon %PROJECT%)
# export PIP_RESPECT_VIRTUALENV=true
# pip install -r requirements.txt
# you'll also need:
# mongodb1.1.4
# imagemagick > 6.3.8
# -e svn+http://code.djangoproject.com/svn/django/trunk#egg=django
ipython
ipdb
PIL
django-extensions
django-debug-toolbar
pytz
tagging
Could it be a problem with PIP? I have installed it through easy_install and used it already to install some modules such as fabric and etc. with no problems.
Hope someone could lend a hand :) BTW, here's my local setup: OSX 10.6.4, Python 2.6.1, Django 1.3 alpha. Thanks!
| [
"Sounds like you have a tagging/ directory in the directory from which you are running pip, and pip thinks this directory (rather than the django-tagging project on PyPI) is what you want it to install. But there's no setup.py in that directory, so pip doesn't know how to install it.\nIf the name of the project you... | [
3,
1
] | [] | [] | [
"django",
"pip",
"python",
"setup_project",
"virtualenv"
] | stackoverflow_0003295322_django_pip_python_setup_project_virtualenv.txt |
Q:
Finding blank regions in image
This question is somewhat language-agnostic, but my tool of choice happens to be a numpy array.
What I am doing is taking the difference of two images via PIL:
img = ImageChops.difference(img1, img2)
And I want to find the rectangular regions that contain changes from one picture to another. Of course there's the built-in .getbbox() method, but if there are two regions with changes it will return one box stretching from one region to the other, and if there are only single-pixel changes in each corner it will return the whole image.
For instance consider the following where o is a non-zero pixel:
______________________
|o ooo |
| oooo ooo |
| o |
| o o |
| |
| oo o |
| o o ooo |
| oo ooooo |
| ooo |
| o |
|____________________|
I'd like to get 4-tuples containing the bounding boxes for each non-zero region. For the edge case of the
oooo
o
o o
structure, I'm not terribly worried how that's handled - either getting both sections separately or together, because the bounds of the inverted-L shape will completely overlap the bounds of the single pixel.
I've never done anything this advanced with image processing so I wanted to get some input before I really write anything (and if there are pre-existing methods in the modules I'm already using, I welcome them!).
My pseudocode-ish version goes something like this:
for line in image:
started = False
for pixel in line:
if pixel and not started:
started = True
save start coords
elif started and not pixel:
started = False
save end coords (x - 1 of course)
This should give me a list of coordinates, but then I have to determine if the regions are contiguous. I could do that with a graph-type search? (We did plenty of DFS and BFS in Algorithms last semester) Of course I guess I could do that instead/in conjunction with my previous loops?
I won't be doing this on "large" images - they're pulled from a webcam and the best one I currently have does 640x480. At most I'd be doing 720p or 1080p, but that's far enough into the future that it's not a real concern.
So my question(s): Am I headed on the right path, or am I way off? And more important, are there any built-in functions that prevent me from re-inventing the wheel? And finally, are there any good resources I should look at (tutorials, papers, etc.) that will help out here?
Thanks!
A:
I believe scipy's ndimage module has everything you need...
Here's a quick example
import numpy as np
import scipy as sp
import scipy.ndimage.morphology
# The array you gave above
data = np.array(
[
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
])
# Fill holes to make sure we get nice clusters
filled = sp.ndimage.morphology.binary_fill_holes(data)
# Now separate each group of contiguous ones into a distinct value
# This will be an array of values from 1 - num_objects, with zeros
# outside of any contiguous object
objects, num_objects = sp.ndimage.label(filled)
# Now return a list of slices around each object
# (This is effectively the tuple that you wanted)
object_slices = sp.ndimage.find_objects(objects)
# Just to illustrate using the object_slices
for obj_slice in object_slices:
print data[obj_slice]
This outputs:
[[1]]
[[1 1 1]
[1 1 1]]
[[1 1 1 1]
[1 0 0 0]
[1 0 0 1]]
[[1]]
[[0 1 1 0]
[1 0 0 1]
[0 1 1 0]]
[[0 0 1 0 0]
[0 1 1 1 0]
[1 1 1 1 1]
[0 1 1 1 0]
[0 0 1 0 0]]
Note that the "object_slices" are basically what you originally asked for, if you need the actual indices.
Edit: Just wanted to point out that despite it appearing to properly handle the edge case of
[[1 1 1 1]
[1 0 0 0]
[1 0 0 1]]
it actually doesn't (Thus the extra lone [[1]]). You can see this if you print out the "objects" array and take a look at objects 3 & 4.
[[1 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 0 0 0 0]
[0 0 0 0 0 0 3 3 3 3 0 0 0 2 2 2 0 0 0 0]
[0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 3 0 0 4 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 5 5 0 0 0 0 0 0 0 6 0 0 0 0 0]
[0 0 0 0 5 5 5 5 0 0 0 0 0 6 6 6 0 0 0 0]
[0 0 0 0 0 5 5 0 0 0 0 0 6 6 6 6 6 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 6 6 6 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 6 0 0 0 0 0]]
Hope that helps!
A:
You could look for connected components in the image and then determine the bounding boxes of these components.
A:
A clustering package (i.e. this) should be able to do most of the work (find connected pixels). Finding the bounding box for a cluster is then trivial.
| Finding blank regions in image | This question is somewhat language-agnostic, but my tool of choice happens to be a numpy array.
What I am doing is taking the difference of two images via PIL:
img = ImageChops.difference(img1, img2)
And I want to find the rectangular regions that contain changes from one picture to another. Of course there's the built-in .getbbox() method, but if there are two regions with changes it will return one box stretching from one region to the other, and if there are only single-pixel changes in each corner it will return the whole image.
For instance consider the following where o is a non-zero pixel:
______________________
|o ooo |
| oooo ooo |
| o |
| o o |
| |
| oo o |
| o o ooo |
| oo ooooo |
| ooo |
| o |
|____________________|
I'd like to get 4-tuples containing the bounding boxes for each non-zero region. For the edge case of the
oooo
o
o o
structure, I'm not terribly worried how that's handled - either getting both sections separately or together, because the bounds of the inverted-L shape will completely overlap the bounds of the single pixel.
I've never done anything this advanced with image processing so I wanted to get some input before I really write anything (and if there are pre-existing methods in the modules I'm already using, I welcome them!).
My pseudocode-ish version goes something like this:
for line in image:
started = False
for pixel in line:
if pixel and not started:
started = True
save start coords
elif started and not pixel:
started = False
save end coords (x - 1 of course)
This should give me a list of coordinates, but then I have to determine if the regions are contiguous. I could do that with a graph-type search? (We did plenty of DFS and BFS in Algorithms last semester) Of course I guess I could do that instead/in conjunction with my previous loops?
I won't be doing this on "large" images - they're pulled from a webcam and the best one I currently have does 640x480. At most I'd be doing 720p or 1080p, but that's far enough into the future that it's not a real concern.
So my question(s): Am I headed on the right path, or am I way off? And more important, are there any built-in functions that prevent me from re-inventing the wheel? And finally, are there any good resources I should look at (tutorials, papers, etc.) that will help out here?
Thanks!
| [
"I believe scipy's ndimage module has everything you need... \nHere's a quick example\nimport numpy as np\nimport scipy as sp\nimport scipy.ndimage.morphology\n\n# The array you gave above\ndata = np.array( \n [\n [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0], \n [0, 0, 0, ... | [
20,
1,
1
] | [] | [] | [
"image_processing",
"language_agnostic",
"numpy",
"python",
"python_imaging_library"
] | stackoverflow_0003310681_image_processing_language_agnostic_numpy_python_python_imaging_library.txt |
Q:
How to Convert Extended ASCII to HTML Entity Names in Python?
I'm currently doing this to replace extended-ascii characters with their HTML-entity-number equivalents:
s.encode('ascii', 'xmlcharrefreplace')
What I would like to do is convert to the HTML-entity-name equivalent (i.e. &copy; instead of &#169;). This small program below shows what I'm trying to do that is failing. Is there a way to do this, aside from doing a find/replace?
#coding=latin-1
def convertEntities(s):
return s.encode('ascii', 'xmlcharrefreplace')
ok = 'ascii: !@#$%^&*()<>'
not_ok = u'extended-ascii: ©®°±¼'
ok_expected = ok
not_ok_expected = u'extended-ascii: &copy;&reg;&deg;&plusmn;&frac14;'
ok_2 = convertEntities(ok)
not_ok_2 = convertEntities(not_ok)
if ok_2 == ok_expected:
print 'ascii worked'
else:
print 'ascii failed: "%s"' % ok_2
if not_ok_2 == not_ok_expected:
print 'extended-ascii worked'
else:
print 'extended-ascii failed: "%s"' % not_ok_2
A:
edit
Others have mentioned the htmlentitydefs module, which I never knew about. It would work with my code this way (note: entitydefs maps name -> character, so the character is what gets replaced):
from htmlentitydefs import entitydefs as symbols

for tag, val in symbols.iteritems():
    mystr = mystr.replace(val, "&{0};".format(tag))
And that should work.
A:
Is htmlentitydefs what you want?
import htmlentitydefs
htmlentitydefs.codepoint2name.get(ord(c),c)
A:
I'm not sure exactly how, but I think the htmlentitydefs module will be of use. An example can be found here.
A:
Update This is the solution I'm going with, with a small fix to check that entitydefs contains a mapping for the character we have.
def convertEntities(s):
return ''.join([getEntity(c) for c in s])
def getEntity(c):
ord_c = ord(c)
if ord_c > 127 and ord_c in htmlentitydefs.codepoint2name:
return "&%s;" % htmlentitydefs.codepoint2name[ord_c]
return c
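(In Python 3, the same lookup table lives in html.entities rather than htmlentitydefs; a hedged sketch of the equivalent:)

```python
from html.entities import codepoint2name

def convert_entities(s):
    # Replace any non-ASCII character that has a named entity; leave the rest alone
    return ''.join(
        '&%s;' % codepoint2name[ord(c)]
        if ord(c) > 127 and ord(c) in codepoint2name
        else c
        for c in s
    )

print(convert_entities('extended-ascii: \xa9\xae'))  # extended-ascii: &copy;&reg;
```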
A:
Are you sure that you don't want the conversion to be reversible? Your ok_expected string indicates you don't want existing & characters escaped, so the conversion will be one way. The code below assumes that & should be escaped, but just remove the cgi.escape if you really don't want that.
Anyway, I'd combine your original approach with a regular expression substitution: do the encoding as before and then just fix up the numeric entities. That way you don't end up mapping every single character through your getEntity function.
#coding=latin-1
import cgi
import re
import htmlentitydefs
def replace_entity(match):
c = int(match.group(1))
name = htmlentitydefs.codepoint2name.get(c, None)
if name:
return "&%s;" % name
return match.group(0)
def convertEntities(s):
s = cgi.escape(s) # Remove if you want ok_expected to pass!
s = s.encode('ascii', 'xmlcharrefreplace')
s = re.sub("&#([0-9]+);", replace_entity, s)
return s
ok = 'ascii: !@#$%^&*()<>'
not_ok = u'extended-ascii: ©®°±¼'
ok_expected = ok
not_ok_expected = u'extended-ascii: &copy;&reg;&deg;&plusmn;&frac14;'
ok_2 = convertEntities(ok)
not_ok_2 = convertEntities(not_ok)
if ok_2 == ok_expected:
print 'ascii worked'
else:
print 'ascii failed: "%s"' % ok_2
if not_ok_2 == not_ok_expected:
print 'extended-ascii worked'
else:
print 'extended-ascii failed: "%s"' % not_ok_2
| How to Convert Extended ASCII to HTML Entity Names in Python? | I'm currently doing this to replace extended-ascii characters with their HTML-entity-number equivalents:
s.encode('ascii', 'xmlcharrefreplace')
What I would like to do is convert to the HTML-entity-name equivalent (i.e. &copy; instead of &#169;). This small program below shows what I'm trying to do that is failing. Is there a way to do this, aside from doing a find/replace?
#coding=latin-1
def convertEntities(s):
return s.encode('ascii', 'xmlcharrefreplace')
ok = 'ascii: !@#$%^&*()<>'
not_ok = u'extended-ascii: ©®°±¼'
ok_expected = ok
not_ok_expected = u'extended-ascii: &copy;&reg;&deg;&plusmn;&frac14;'
ok_2 = convertEntities(ok)
not_ok_2 = convertEntities(not_ok)
if ok_2 == ok_expected:
print 'ascii worked'
else:
print 'ascii failed: "%s"' % ok_2
if not_ok_2 == not_ok_expected:
print 'extended-ascii worked'
else:
print 'extended-ascii failed: "%s"' % not_ok_2
| [
"edit\nOthers have mentioned the htmlentitydefs that I never knew about. It would work with my code this way:\nfrom htmlentitydefs import entitydefs as symbols\n\nfor tag, val in symbols.iteritems():\n mystr = mystr.replace(\"&{0};\".format(tag), val)\n\nAnd that should work.\n",
"Is htmlentitydefs what you wan... | [
2,
2,
1,
1,
0
] | [] | [] | [
"ascii",
"encoding",
"html",
"python"
] | stackoverflow_0003312810_ascii_encoding_html_python.txt |
Q:
In Python, are single character strings guaranteed to be identical?
I read somewhere (an SO post, I think, and probably somewhere else, too), that Python automatically interns single character strings, so not only does 'a' == 'a', but 'a' is 'a'.
However, I can't remember reading if this is guaranteed behavior in Python, or is it just implementation specific?
Bonus points for official sources.
A:
It's implementation specific. It's difficult to tell, because (as the reference says):
... for immutable types, operations that compute new values may actually return a reference to any existing object with the same type and value, while for mutable objects this is not allowed.
The interpreter's pretty good about ensuring they're identical, but it doesn't always work:
x = u'a'
y = u'abc'[:1]
print x == y, x is y
Run on CPython 2.6, this gives True False.
A:
It is all implementation defined.
The documentation for intern says: "Normally, the names used in Python programs are automatically interned, and the dictionaries used to hold module, class or instance attributes have interned keys."
That means that anything that could be a name and which is known at compile time is likely (but not guaranteed) to be the same as any other occurrences of the same name.
Other strings aren't stated to be interned. Constant strings appearing in the same compilation unit are folded together (but that is also just an implementation detail) so you get:
>>> a = '!'
>>> a is '!'
False
>>> a = 'a'
>>> a is 'a'
True
>>>
The string that contains an identifier is interned so even in different compilations you get the same string. The string that is not an identifier is only shared when in the same compilation unit:
>>> '!' is '!'
True
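When you actually need equal strings to be the same object, the portable route is to intern them explicitly (sys.intern in Python 3; a plain intern built-in in Python 2). A small sketch, with the string built at runtime so the compiler cannot constant-fold it:

```python
import sys

def make():
    # Build the string at runtime so it is not merged as a compile-time constant
    return ''.join(['spam', '!'])

a = sys.intern(make())
b = sys.intern(make())
print(a == b, a is b)  # True True -- interning guarantees a single shared object
```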
| In Python, are single character strings guaranteed to be identical? | I read somewhere (an SO post, I think, and probably somewhere else, too), that Python automatically interns single character strings, so not only does 'a' == 'a', but 'a' is 'a'.
However, I can't remember reading if this is guaranteed behavior in Python, or is it just implementation specific?
Bonus points for official sources.
| [
"It's implementation specific. It's difficult to tell, because (as the reference says):\n\n... for immutable types, operations that compute new values may actually return a reference to any existing object with the same type and value, while for mutable objects this is not allowed.\n\nThe interpreter's pretty good... | [
14,
6
] | [] | [] | [
"identity",
"python",
"string"
] | stackoverflow_0003313135_identity_python_string.txt |
Q:
PIL: Composite / merge two images as "Dodge"
How do I use PIL to implement the equivalent of merging a layer in "dodge" mode with another layer (as done in Gimp/Photoshop)?
I have my original image as well as the image I'd like to use as the layer to merge with, but I don't know how to do the dodge merge/composite:
from PIL import Image, ImageFilter, ImageOps
img = Image.open(fname)
img_blur = img.filter(ImageFilter.BLUR)
img_blur_invert = ImageOps.invert(img_blur)
# Now "dodge" merge img_blur_invert on top of img
A:
There might be a pure-PIL way to do this; I don't know. However, if not, here is a way you could do it with numpy:
import numpy as np
from PIL import Image, ImageFilter
def dodge(front,back):
# The formula comes from http://www.adobe.com/devnet/pdf/pdfs/blend_modes.pdf
result=back*256.0/(256.0-front)
result[result>255]=255
result[front==255]=255
return result.astype('uint8')
img = Image.open(fname,'r').convert('RGB')
arr = np.asarray(img)
img_blur = img.filter(ImageFilter.BLUR)
blur = np.asarray(img_blur)
result=dodge(front=blur, back=arr)
result = Image.fromarray(result, 'RGB')
result.show()
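One wart in the function above is that back*256.0/(256.0-front) divides by zero wherever front == 255 and only patches the result afterwards. A variant of the same formula (my own sketch, not from the answer) masks that case out before dividing, so no RuntimeWarning is raised:

```python
import numpy as np

def dodge_safe(front, back):
    # Same dodge blend formula, but the front == 255 case is masked out
    # before dividing instead of being patched up afterwards.
    front = front.astype('float64')
    back = back.astype('float64')
    denom = np.where(front < 255, 256.0 - front, 1.0)  # 1.0 is a dummy divisor
    result = np.where(front < 255, back * 256.0 / denom, 255.0)
    return np.clip(result, 0, 255).astype('uint8')

front = np.array([[0, 128, 255]], dtype='uint8')
back = np.array([[100, 100, 100]], dtype='uint8')
print(dodge_safe(front, back))  # [[100 200 255]]
```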
| PIL: Composite / merge two images as "Dodge" | How do I use PIL to implement the equivalent of merging a layer in "dodge" mode with another layer (as done in Gimp/Photoshop)?
I have my original image as well as the image I'd like to use as the layer to merge with, but I don't how to do the dodge merge/composite:
from PIL import Image, ImageFilter, ImageOps
img = Image.open(fname)
img_blur = img.filter(ImageFilter.BLUR)
img_blur_invert = ImageOps.invert(img_blur)
# Now "dodge" merge img_blur_invert on top of img
| [
"There might be a pure-PIL way to do this; I don't know. However, if not, here is a way you could do it with numpy:\nimport numpy as np\nimport Image\nimport ImageFilter\n\ndef dodge(front,back):\n # The formula comes from http://www.adobe.com/devnet/pdf/pdfs/blend_modes.pdf\n result=back*256.0/(256.0-front) ... | [
7
] | [] | [] | [
"image_processing",
"python",
"python_imaging_library"
] | stackoverflow_0003312606_image_processing_python_python_imaging_library.txt |
Q:
Why does my setup.py script give this error?
So I have a C++ class that I made python wrappers for, and I made a setup.py file to compile it in order to use it in python. When I try to run python setup.py install I get the following error:
lipo: can't create output file: build/temp.macosx-10.5-fat3-2.7/../tools/transport-stream/TransportStreamPacket_py.o (No such file or directory)
error: command 'gcc-4.0' failed with exit status 1
I don't think the problem is with the file being compiled, I think I must be setting up the setup.py wrong. Here is what my setup.py file looks like:
from distutils.core import setup, Extension
module1 = Extension('CL_TransportStreamPacket_py',
sources = ['../tools/transport-stream/TransportStreamPacket_py.cpp'],
include_dirs = ['.',
'../common',
'../tools/transport-stream'],
library_dirs = ['common',
'../tools/transport-stream'],
libraries = ['Common',
'TransportStream']
)
setup (name = 'CL_TransportStreamPacket_py',
version = '1.0',
description = 'This is the transport stream packet parser',
ext_modules = [module1])
A:
Your problem is the leading '..' in the source definitions. Distutils uses the names of the source files to generate names of temporary and output files, but doesn't normalize them. Reorganize your source tree (or move the setup.py file) so you don't need to reference '../tools/...'
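To see why the leading '..' trips distutils up, note that it derives the object-file path by joining the temp build directory with the source path without normalizing it — roughly like this (a sketch of the path arithmetic, not distutils' actual code):

```python
import os

build_dir = "build/temp.macosx-10.5-fat3-2.7"
source = "../tools/transport-stream/TransportStreamPacket_py.cpp"

# distutils derives the object path from the source path verbatim
obj = os.path.join(build_dir, source[:-len(".cpp")] + ".o")
print(obj)
# build/temp.macosx-10.5-fat3-2.7/../tools/transport-stream/TransportStreamPacket_py.o

# the '..' points at a directory distutils never created (hence the lipo
# failure); normalized, the object lands outside the temp build tree entirely
print(os.path.normpath(obj))
# build/tools/transport-stream/TransportStreamPacket_py.o
```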
| Why does my setup.py script give this error? | So I have a C++ class that I made python wrappers for, and I made a setup.py file to compile it in order to use it in python. When I try to run python setup.py install I get the following error:
lipo: can't create output file: build/temp.macosx-10.5-fat3-2.7/../tools/transport-stream/TransportStreamPacket_py.o (No such file or directory)
error: command 'gcc-4.0' failed with exit status 1
I don't think the problem is with the file being compiled, I think I must be setting up the setup.py wrong. Here is what my setup.py file looks like:
from distutils.core import setup, Extension
module1 = Extension('CL_TransportStreamPacket_py',
sources = ['../tools/transport-stream/TransportStreamPacket_py.cpp'],
include_dirs = ['.',
'../common',
'../tools/transport-stream'],
library_dirs = ['common',
'../tools/transport-stream'],
libraries = ['Common',
'TransportStream']
)
setup (name = 'CL_TransportStreamPacket_py',
version = '1.0',
description = 'This is the transport stream packet parser',
ext_modules = [module1])
| [
"Your problem is the leading '..' in the source definitions. Distutils uses the names of the source files to generate names of temporary and output files, but doesn't normalize them. Reorganize your source tree (or move the setup.py file) so you don't need to reference '../tools/...'\n"
] | [
3
] | [] | [] | [
"c++",
"python",
"setup.py",
"wrapper"
] | stackoverflow_0003313625_c++_python_setup.py_wrapper.txt |
Q:
twisted, unblock a threads.blockingCallFromThread when the reactor stops
It seems threads.blockingCallFromThread keeps blocking even when the reactor stops. Is there any way to un-block it? The deferred that it is blocking on relies on an RPC coming from the other end, and that definitely won't come in with the reactor stopped.
A:
It blocks until the Deferred fires. If you want it to unblock, fire the Deferred. If you're stopping your application and stopping the reactor, then you might want to fire the Deferred before you do that. You probably want to fire it with a Failure since presumably you haven't been able to come up with a successful result. You can install reactor shutdown hooks to run code when the reactor is about to stop, either using a custom Service or reactor.addSystemEventTrigger.
| twisted, unblock a threads.blockingCallFromThread when the reactor stops | it seems threads.blockingCallFromThread keeps blocking even when the reactor stops. is there any way to un-block it? the deferred that it is blocking on relies on an RPC coming from the other end, and that definitely won't come in with the reactor stopped.
| [
"It blocks until the Deferred fires. If you want it to unblock, fire the Deferred. If you're stopping your application and stopping the reactor, then you might want to fire the Deferred before you do that. You probably want to fire it with a Failure since presumably you haven't been able to come up with a succes... | [
1
] | [] | [] | [
"deferred",
"python",
"twisted"
] | stackoverflow_0003311622_deferred_python_twisted.txt |
Q:
how to get final redirected url
I am using Google App Engine to fetch feed URLs, but a few of the URLs are 301 redirects and I want to get the final URL which returns the result.
I am using the Universal Feed Parser to parse the URL. Is there any way or any function which can give me the final URL?
A:
It is not possible to get the 'final' URL by parsing alone; to resolve it, you would need to at least perform an HTTP HEAD operation
A:
If you're using the urlfetch API, you can just access the final_url attribute of the response object you get from urlfetch.fetch(), assuming you set follow_redirects to True:
>>> from google.appengine.api import urlfetch
>>> url_that_redirects = 'http://www.example.com/redirect/'
>>> resp = urlfetch.fetch(url=url_that_redirects, follow_redirects=False)
>>> resp.status_code
302 # or 301 or whatever
>>> resp = urlfetch.fetch(url=url_that_redirects, follow_redirects=True)
>>> resp.status_code
200
>>> resp.final_url
'http://www.example.com/final_url/'
Note that the follow_redirects keyword argument defaults to True, so you don't have to set it explicitly.
A:
You can do this by handling redirects manually. When calling fetch, pass in follow_redirects=False. If your response object's HTTP status is a redirect code, either 301 or 302, grab the Location response header and fetch again until the HTTP status is something else. Add a sanity check (perhaps 5 redirects max) to avoid redirect loops.
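The manual approach in that last answer can be sketched as a small loop; `fetch` here is a stand-in for whatever performs the request with redirects disabled (e.g. urlfetch.fetch with follow_redirects=False), and the toy dictionary below simulates one 301 hop:

```python
def resolve_final_url(url, fetch, max_redirects=5):
    """Follow 301/302 redirects by hand, with a sanity cap on hops."""
    for _ in range(max_redirects):
        status, headers = fetch(url)
        if status in (301, 302) and 'Location' in headers:
            url = headers['Location']
        else:
            return url
    raise RuntimeError('too many redirects for %s' % url)

# toy fetch function simulating a single 301 redirect
responses = {
    'http://example.com/feed': (301, {'Location': 'http://example.com/final'}),
    'http://example.com/final': (200, {}),
}
print(resolve_final_url('http://example.com/feed', responses.__getitem__))
# http://example.com/final
```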
| how to get final redirected url | i am using google app engine for fetching the feed url bur few of the urls are 301 redirect i want to get the final url which returns me the result
i am usign the universal feed reader for parsing the url is there any way or any function which can give me the final url.
| [
"It is not possible to get the 'final' URL by parsing, in order to resolve it, you would need to at least perform an HTTP HEAD operation\n",
"If you're using the urlfetch API, you can just access the final_url attribute of the response object you get from urlfetch.fetch(), assuming you set follow_redirects to Tru... | [
3,
3,
0
] | [] | [] | [
"feedparser",
"google_app_engine",
"python"
] | stackoverflow_0003309695_feedparser_google_app_engine_python.txt |
Q:
Class design: Last Modified
My class:
class ManagementReview:
"""Class describing ManagementReview Object.
"""
# Class attributes
id = 0
Title = 'New Management Review Object'
fiscal_year = ''
region = ''
review_date = ''
date_completed = ''
prepared_by = ''
__goals = [] # List of <ManagementReviewGoals>.
__objectives = [] # List of <ManagementReviewObjetives>.
__actions = [] # List of <ManagementReviewActions>.
__deliverables = [] # List of <ManagementReviewDeliverable>.
__issues = [] # List of <ManagementReviewIssue>.
__created = ''
__created_by = ''
__modified = ''
__modified_by = ''
The __modified attribute is a datetime string in ISO format. I want that attribute to be automatically updated to datetime.now().isoformat() every time one of the other attributes is updated. For each of the other attributes I have a setter like:
def setObjectives(self,objectives):
mro = ManagementReviewObjective(args)
self.__objectives.append(mro)
So, is there an easier way to than to add a line like:
self.__modified = datetime.now().isoformat()
to every setter?
Thanks! :)
A:
To update __modified when instance attributes are modified (as in your example of self.__objectives), you could override __setattr__.
For example, you could add this to your class:
def __setattr__(self, name, value):
# set the value like usual and then update the modified attribute too
self.__dict__[name] = value
self.__dict__['__modified'] = datetime.now().isoformat()
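A minimal runnable sketch of that __setattr__ idea (class and attribute names here are illustrative, not from the original model):

```python
from datetime import datetime

class Tracked(object):
    def __setattr__(self, name, value):
        # set the value like usual, then stamp the modification time
        self.__dict__[name] = value
        if name != 'modified':  # don't re-stamp when setting the stamp itself
            self.__dict__['modified'] = datetime.now().isoformat()

r = Tracked()
r.title = 'New Management Review Object'
print(r.modified)  # ISO timestamp of the last assignment
```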
A:
Perhaps adding a decorator before each setter?
A:
If you have a method that commits the changes made to these attributes to a database (like a save() method or update_record() method. Something like that), you could just append the
self.__modified = datetime.now().isoformat()
just before its all committed, since thats the only time it really matters anyway.
| Class design: Last Modified | My class:
class ManagementReview:
"""Class describing ManagementReview Object.
"""
# Class attributes
id = 0
Title = 'New Management Review Object'
fiscal_year = ''
region = ''
review_date = ''
date_completed = ''
prepared_by = ''
__goals = [] # List of <ManagementReviewGoals>.
__objectives = [] # List of <ManagementReviewObjetives>.
__actions = [] # List of <ManagementReviewActions>.
__deliverables = [] # List of <ManagementReviewDeliverable>.
__issues = [] # List of <ManagementReviewIssue>.
__created = ''
__created_by = ''
__modified = ''
__modified_by = ''
The __modified attribute is a datetime string in ISO format. I want that attribute to be automatically updated to datetime.now().isoformat() every time one of the other attributes is updated. For each of the other attributes I have a setter like:
def setObjectives(self,objectives):
mro = ManagementReviewObjective(args)
self.__objectives.append(mro)
So, is there an easier way to than to add a line like:
self.__modified = datetime.now().isoformat()
to every setter?
Thanks! :)
| [
"To update __modified when instance attributes are modified (as in your example of self.__objectives), you could override __setattr__.\nFor example, you could add this to your class:\ndef __setattr__(self, name, value):\n # set the value like usual and then update the modified attribute too\n self.__dict__[na... | [
5,
0,
0
] | [] | [] | [
"class_design",
"python"
] | stackoverflow_0003313978_class_design_python.txt |
Q:
In Python, how do I find the date of the first Monday of a given week?
If I have a certain week number (eg 51) and a given year (eg 2008), how do I find the date of the first Monday of that same week?
Many thanks
A:
>>> import time
>>> time.asctime(time.strptime('2008 50 1', '%Y %W %w'))
'Mon Dec 15 00:00:00 2008'
Assuming the first day of your week is Monday, use %U instead of %W if the first day of your week is Sunday. See the documentation for strptime for details.
Update: Fixed week number. The %W directive is 0-based so week 51 would be entered as 50, not 51.
A:
PEZ's and Gerald Kaszuba's solutions work under assumption that January 1st will always be in the first week of a given year. This assumption is not correct for ISO calendar, see Python's docs for reference. For example, in ISO calendar, week 1 of 2010 actually starts on Jan 4, and Jan 1 of 2010 is in week 53 of 2009. An ISO calendar-compatible solution:
from datetime import date, timedelta
def week_start_date(year, week):
d = date(year, 1, 1)
delta_days = d.isoweekday() - 1
delta_weeks = week
if year == d.isocalendar()[0]:
delta_weeks -= 1
delta = timedelta(days=-delta_days, weeks=delta_weeks)
return d + delta
A:
from datetime import date, timedelta
def first_monday(year, week):
d = date(year, 1, 4) # The Jan 4th must be in week 1 according to ISO
return d + timedelta(weeks=(week-1), days=-d.weekday())
A:
This seems to work, assuming week one can have a Monday falling on a day in the last year.
from datetime import date, timedelta
def get_first_dow(year, week):
d = date(year, 1, 1)
d = d - timedelta(d.weekday())
dlt = timedelta(days = (week - 1) * 7)
return d + dlt
A:
I have slightly modified Vaidas K.'s script so that it returns both the first and the last day of the week.
from datetime import datetime, date, timedelta
def weekbegend(year, week):
"""
    Compute the first and last day of the ISO week
"""
d = date(year, 1, 1)
delta_days = d.isoweekday() - 1
delta_weeks = week
if year == d.isocalendar()[0]:
delta_weeks -= 1
# delta for the beginning of the week
delta = timedelta(days=-delta_days, weeks=delta_weeks)
weekbeg = d + delta
# delta2 for the end of the week
delta2 = timedelta(days=6-delta_days, weeks=delta_weeks)
weekend = d + delta2
return weekbeg, weekend
So you can use it this way:
weekbeg, weekend = weekbegend(2009, 1)
begweek = weekbeg.strftime("%A %d %B %Y")
endweek = weekend.strftime("%A %d %B %Y")
A:
Use the string parsing found in the time module; a detailed explanation of the format directives used is in the time module documentation.
>>> import time
>>> time.strptime("51 08 1","%U %y %w")
(2008, 12, 22, 0, 0, 0, 0, 357, -1)
The date returned is off by one week according to the calendar on my computer, maybe that is because weeks are indexed from 0?
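On modern Python (3.8+), the datetime module can do this directly with date.fromisocalendar, assuming you want ISO week numbering (weeks start on Monday, day 1) rather than strptime's %W/%U conventions:

```python
from datetime import date

# ISO calendar: (year, week, weekday), with Monday = 1
print(date.fromisocalendar(2008, 51, 1))  # 2008-12-15

# and it round-trips with isocalendar()
print(date(2008, 12, 15).isocalendar()[:2])  # (2008, 51)
```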
| In Python, how do I find the date of the first Monday of a given week? | If I have a certain week number (eg 51) and a given year (eg 2008), how do I find the date of the first Monday of that same week?
Many thanks
| [
">>> import time\n>>> time.asctime(time.strptime('2008 50 1', '%Y %W %w'))\n'Mon Dec 15 00:00:00 2008'\n\nAssuming the first day of your week is Monday, use %U instead of %W if the first day of your week is Sunday. See the documentation for strptime for details.\nUpdate: Fixed week number. The %W directive is 0-b... | [
34,
33,
17,
8,
4,
0
] | [] | [] | [
"datetime",
"python"
] | stackoverflow_0000396913_datetime_python.txt |
Q:
twisted.protocols.ftp.FTPClient and Deferreds
Like most, it's taking me a while to get used to using Deferreds, but I'm slowly getting there. However, it's not clear to me how I can process a response and then call another FTP command using the processed response when using Twisted's FTP module. I'm using the example FTP code as my jumping-off point.
I want to connect to a FTP server, get all the top level directories, then enter each one and download all the files.
First I connect:
creator = ClientCreator(reactor, FTPClient, config.opts['username'], config.opts['password'], passive=config.opts['passive'])
creator.connectTCP(config.opts['host'], config.opts['port']).addCallback(connectionMade).addErrback(connectionFailed)
reactor.run()
It connects successfully, so my connectionMade function gets called:
def connectionMade(ftpClient):
# Get a detailed listing of the current directory
fileList = FTPFileListProtocol()
d = ftpClient.list('.', fileList)
d.addCallbacks(getSortedDirectories, fail, callbackArgs=(fileList,))
d.addCallback(enterDirs)
As you see, I queue up getSortedDirectories and then enterDirs.
def getSortedDirectories(result, fileListProtocol):
# Go through all directories from greatest to least
dirs = [directory for directory in sorted(fileListProtocol.files, reverse=True) if directory['filetype'] == 'd']
return dirs
My question is, how do I go through the directories in enterDirs?
def enterDirs(dirs):
for directory in dirs:
print "We'd be entering '%s' now." % directory
Should I be passing ftpClient to enterDirs like fileList is passed to getSortedDirectories and then make my download requests?
d.addCallback(enterDirs, callbackArgs=(ftpClient,))
def enterDirs(dirs, ftpClient):
for directory in dirs:
fileList = FTPFileListProtocol()
d = ftpClient.list(directory, fileList)
d.addCallbacks(downloadFiles, fail, callbackArgs=(directory, fileList, ftpClient))
def downloadFiles(result, directory, fileListProtocol, ftpClient):
    for f in fileListProtocol.files:
        if f['filetype'] == '-':
            fileConsumer = FileConsumer()
            ftpClient.retrieveFile(os.path.join(directory['filename'], f['filename']), fileConsumer)
Is this the best approach?
A:
Should I be passing ftpClient to
enterDirs like fileList is passed to
getSortedDirectories and then make my
download requests? ... Is this the
best approach?
I do think that passing the client object explicitly as an argument is indeed the best approach -- mostly, it's spare and elegant. The main alternative would be to code a class and stash the client object into an instance variable, which seems a bit more cumbersome; to use a global variable for the purpose would, in my opinion, be the least desirable alternative (the fewer globals around, the better!-).
| twisted.protocols.ftp.FTPClient and Deferreds | Like most it's taking me a while to get used to using Deferreds but I'm slowly getting there. However, it's not clear to me how I can process a response and then call another FTP command using the processed response when using Twisted's FTP module. I'm using the the example FTP code as my jumping off point.
I want to connect to a FTP server, get all the top level directories, then enter each one and download all the files.
First I connect:
creator = ClientCreator(reactor, FTPClient, config.opts['username'], config.opts['password'], passive=config.opts['passive'])
creator.connectTCP(config.opts['host'], config.opts['port']).addCallback(connectionMade).addErrback(connectionFailed)
reactor.run()
It connects successfully, so my connectionMade function gets called:
def connectionMade(ftpClient):
# Get a detailed listing of the current directory
fileList = FTPFileListProtocol()
d = ftpClient.list('.', fileList)
d.addCallbacks(getSortedDirectories, fail, callbackArgs=(fileList,))
d.addCallback(enterDirs)
As you see, I queue up getSortedDirectories and then enterDirs.
def getSortedDirectories(result, fileListProtocol):
# Go through all directories from greatest to least
dirs = [directory for directory in sorted(fileListProtocol.files, reverse=True) if directory['filetype'] == 'd']
return dirs
My question is, how do I go through the directories in enterDirs?
def enterDirs(dirs):
for directory in dirs:
print "We'd be entering '%s' now." % directory
Should I be passing ftpClient to enterDirs like fileList is passed to getSortedDirectories and then make my download requests?
d.addCallback(enterDirs, callbackArgs=(ftpClient,))
def enterDirs(dirs, ftpClient):
for directory in dirs:
fileList = FTPFileListProtocol()
d = ftpClient.list(directory, fileList)
d.addCallbacks(downloadFiles, fail, callbackArgs=(directory, fileList, ftpClient))
def downloadFiles(result, directory, fileListProtocol, ftpClient):
    for f in fileListProtocol.files:
        if f['filetype'] == '-':
            fileConsumer = FileConsumer()
            ftpClient.retrieveFile(os.path.join(directory['filename'], f['filename']), fileConsumer)
Is this the best approach?
| [
"\nShould I be passing ftpClient to\n enterDirs like fileList is passed to\n getSortedDirectories and then make my\n download requests? ... Is this the\n best approach?\n\nI do think that passing the client object explicitly as an argument is indeed the best approach -- mostly, it's spare and elegant. The main... | [
1
] | [] | [] | [
"deferred",
"ftp",
"python",
"twisted"
] | stackoverflow_0003313663_deferred_ftp_python_twisted.txt |
Q:
Creating an instance of a class, before class is defined?
Here's an example of my problem :
class bar() :
spam = foo()
def test(self) :
print self.spam.eggs
pass
class foo() :
eggs = 50
The issue (I assume) is that I'm trying to create an instance of a class before the class is defined. The obvious solution is to re-order my classes, but I like to keep my classes in alphabetical order (maybe a silly practice). Is there any way to define all my classes simultaneously? Do I just have to bite the bullet and reorder my class definitions?
A:
Alphabetical order is a silly idea indeed;-). Nevertheless, various possibilities include:
class aardvark(object):
eggs = 50
class bar(object):
spam = aardvark()
def test(self) :
print self.spam.eggs
foo = aardvark
and
class bar(object):
spam = None # optional!-)
def test(self) :
print self.spam.eggs
class foo(object):
eggs = 50
bar.spam = foo()
Note that much more crucial aspects than "alphabetical order" include making all classes "new style" (I've done it in both examples above) and removing totally redundant code like those pass statements (I've done that, too;-).
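A third option, not shown above: defer the lookup until the method actually runs, at which point both classes exist regardless of definition order. A sketch (my own names, capitalized per convention):

```python
class Bar(object):
    def test(self):
        # Foo is looked up only when test() is called, so the
        # definition order of the two classes no longer matters
        return Foo().eggs

class Foo(object):
    eggs = 50

print(Bar().test())  # 50
```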
| Creating an instance of a class, before class is defined? | Here's an example of my problem :
class bar() :
spam = foo()
def test(self) :
print self.spam.eggs
pass
class foo() :
eggs = 50
The issue (I assume) is that I'm trying to create an instance of a class before the class is defined. The obvious solution is to re-order my classes, but I like to keep my classes in alphabetical order (maybe a silly practice). Is there any way to define all my classes simultaneously? Do I just have to bite the bullet and reorder my class definitions?
| [
"Alphabetical order is a silly idea indeed;-). Nevertheless, various possibilities include:\nclass aardvark(object):\n eggs = 50\n\nclass bar(object):\n spam = aardvark()\n def test(self) :\n print self.spam.eggs\n\nfoo = aardvark\n\nand\nclass bar(object):\n spam = None # optional!-)\n def t... | [
7
] | [] | [] | [
"class",
"python"
] | stackoverflow_0003314528_class_python.txt |
Q:
How do I do this replace regex in python?
Given a string of text, in Python:
s = "(((((hi abc )))))))"
s = "***(((((hi abc ***&&&&"
How do I replace all non-alphabetic symbols that occur more than 3 times with a blank string?
For all the above, the result should be:
hi abc
A:
This should work: \W{3,}: matching non-alphanumerics that occur 3 or more times:
>>> s = "***(((((hi abc ***&&&&"
>>> re.sub("\W{3,}", "", s)
'hi abc'
>>> s = "(((((hi abc )))))))"
>>> re.sub("\W{3,}", "", s)
'hi abc'
A:
If you want to replace any sequence of non-space non-alphamerics (e.g. '!?&' as well as your examples), @Stephen's answer is fine. But if you only want to replace sequences of three or more identical non-alphamerics, a backreference will help:
>>> r3 = re.compile(r'(([^\s\w])\2{2,})')
>>> r3.findall('&&&xxx!&?yyy*****')
[('&&&', '&'), ('*****', '*')]
So, for example:
>>> r3.sub('', '&&&xxx!&?yyy*****')
'xxx!&?yyy'
A:
You can't (easily, using regexes) replace that by a "blank string" that's the same length as the replaced text. You can replace it with an empty string "" or a single space " " or any other constant string of your choice; I've used "*" in the example so that it is easier to see what is happening.
>>> re.sub(r"(\W)\1{3,}", "*", "12345<><>aaaaa%%%11111<<<<..>>>>")
'12345<><>aaaaa%%%11111*..*'
>>>
Note carefully: it doesn't change "<><>" ... I'm assuming that "non-alphabetic symbols that occur more than 3 times" means the same symbol has to occur more than 3 times". I'm also assuming that you did mean "more than 3" and not "3 or more".
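The difference between the two approaches shows up on input like "<><>" — four non-word characters in a row, but no single character repeated three times. A quick side-by-side check:

```python
import re

s = "12345<><>aaaaa%%%*****"

# any run of 3+ non-word characters, identical or not
print(re.sub(r"\W{3,}", "", s))      # 12345aaaaa

# only runs of 3+ of the SAME non-word character (backreference)
print(re.sub(r"(\W)\1{2,}", "", s))  # 12345<><>aaaaa
```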
| How do I do this replace regex in python? | Given a string of text, in Python:
s = "(((((hi abc )))))))"
s = "***(((((hi abc ***&&&&"
How do I replace all non-alphabetic symbols that occur more than 3 times with a blank string?
For all the above, the result should be:
hi abc
| [
"This should work: \\W{3,}: matching non-alphanumerics that occur 3 or more times:\n>>> s = \"***(((((hi abc ***&&&&\"\n>>> re.sub(\"\\W{3,}\", \"\", s) \n'hi abc'\n>>> s = \"(((((hi abc )))))))\"\n>>> re.sub(\"\\W{3,}\", \"\", s) \n'hi abc'\n\n",
"If you want to replace any sequence of non-space non-alphamerics ... | [
8,
4,
0
] | [] | [] | [
"python",
"regex",
"string",
"text"
] | stackoverflow_0003314517_python_regex_string_text.txt |
Q:
Destroy session in Pylons or Python
I can't find information on how to destroy / kill a session within Pylons or Python on the interwebs. Thought it would be a good idea just to ask it here so that when I need to do it again a year from now I'll find this question :)
So I'm looking for a PHP session_destroy() equivalent for Pylons or Python.
Thanks,
Maggi
A:
session.delete() or session.invalidate(), depending on what you want.
http://beaker.readthedocs.org/en/latest/sessions.html#deleting
| Destroy session in Pylons or Python | I can't find information on how to destroy / kill a session within Pylons or Python on the interwebs. Thought it would be a good idea just to ask it here so that when I need to do it again a year from now I'll find this question :)
So I'm looking for a PHP session_destroy() equivalent for Pylons or Python.
Thanks,
Maggi
| [
"session.delete() or session.invalidate(), depending on what you want.\nhttp://beaker.readthedocs.org/en/latest/sessions.html#deleting\n"
] | [
4
] | [] | [] | [
"pylons",
"python",
"session"
] | stackoverflow_0003308510_pylons_python_session.txt |
Q:
GIMP get layer position relative to the image
I'm coding in Python-Fu and I need to get the layer position relative to the image (e.g. the layer starts at x=35, y=50).
Is this possible? I haven't found anything in the gimp pdb docs
A:
Nevermind, got it.
It's layer.offsets (property) for future reference.
:)
| GIMP get layer position relative to the image | i'm coding in python-fu and i need to get the layer position relative to the image (eg. the layer starts at x=35, y=50)
Is this possible? I haven't found anything in the gimp pdb docs
| [
"Nevermind, got it. \nIt's layer.offsets (property) for future reference.\n:)\n"
] | [
5
] | [] | [] | [
"gimp",
"python"
] | stackoverflow_0003314598_gimp_python.txt |
Q:
How to display choices in my form using django forms?
I have a model that looks like this in my application:
class Member(models.Model):
name = models.CharField(max_length=200)
telephone_number = models.CharField(max_length=200)
email_address = models.CharField(max_length=200)
membership_type = models.CharField(max_length=1, choices=MEMBERSHIP_TYPES)
membership_number = models.CharField(max_length=200, primary_key=True)
def __unicode__(self):
return self.name
Reading up on forms for my model, I see that I can replace the model fields with form fields and place them in my forms.py. However, I don't know how membership_type and its choices are used in the forms class.
A:
Hmmm... You might want to read on ModelForms.
A:
Using the multiple choice field in the forms class.
| How to display choices in my form using django forms? | I have a model that looks like this in my application:
class Member(models.Model):
name = models.CharField(max_length=200)
telephone_number = models.CharField(max_length=200)
email_address = models.CharField(max_length=200)
membership_type = models.CharField(max_length=1, choices=MEMBERSHIP_TYPES)
membership_number = models.CharField(max_length=200, primary_key=True)
def __unicode__(self):
return self.name
Reading up on forms for my model, I see that I can replace the model fields with form fields and place them in my forms.py. However, I don't know how membership_type and its choices are used in the forms class.
| [
"Hmmm... You might want to read on ModelForms. \n",
"Using the multiple choice field in the forms class. \n"
] | [
2,
0
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0003313919_django_django_models_python.txt |
Q:
How do I detect the currently focused application?
I'd like to be able to track which application is currently focused on my X11 display from Python. The intent is to tie it into a timetracking tool so that I can keep track of how much time I spend being unproductive.
I already found this code at http://thpinfo.com/2007/09/x11-idle-time-and-focused-window-in.html:
import Xlib.display
display = Xlib.display.Display()
focus = display.get_input_focus()
print "WM Class: %s" % ( focus.focus.get_wm_class(), )
print "WM Name: %s" % ( focus.focus.get_wm_name(), )
However, it doesn't seem to work for me. Apparently, no matter which application is focused, both get_wm_class() and get_wm_name() just return None.
Of course the solution needs to work with all these new fangled window managers like Compiz and such.
A:
Whoo! I figured it out myself:
import Xlib.display
display = Xlib.display.Display()
window = display.get_input_focus().focus
wmname = window.get_wm_name()
wmclass = window.get_wm_class()
if wmclass is None and wmname is None:
window = window.query_tree().parent
wmname = window.get_wm_name()
print "WM Name: %s" % ( wmname, )
A:
A somewhat nicer solution, especially for a long-running app rather than a script, would use libwnck to track the _NET_ACTIVE_WINDOW hint. (See the EWMH specification for the definition of the hint.)
| How do I detect the currently focused application? | I'd like to be able to track which application is currently focused on my X11 display from Python. The intent is to tie it into a timetracking tool so that I can keep track of how much time I spend being unproductive.
I already found this code at http://thpinfo.com/2007/09/x11-idle-time-and-focused-window-in.html:
import Xlib.display
display = Xlib.display.Display()
focus = display.get_input_focus()
print "WM Class: %s" % ( focus.focus.get_wm_class(), )
print "WM Name: %s" % ( focus.focus.get_wm_name(), )
However, it doesn't seem to work for me. Apparently, no matter which application is focused, both get_wm_class() and get_wm_name() just return None.
Of course the solution needs to work with all these new fangled window managers like Compiz and such.
| [
"Whoo! I figured it out myself:\nimport Xlib.display\ndisplay = Xlib.display.Display()\nwindow = display.get_input_focus().focus\nwmname = window.get_wm_name()\nwmclass = window.get_wm_class()\nif wmclass is None and wmname is None:\n window = window.query_tree().parent\n wmname = window.get_wm_name()\nprint ... | [
13,
0
] | [] | [] | [
"python",
"x11",
"xlib"
] | stackoverflow_0003130912_python_x11_xlib.txt |
Q:
Django form don't show specific inputs
Say, I've got a model like this:
class Fleet(models.Model):
user = models.ForeignKey(User)
[...]
ship1 = models.IntegerField(default=0)
ship2 = models.IntegerField(default=0)
ship3 = models.IntegerField(default=0)
ship4 = models.IntegerField(default=0)
And a form:
class sendFleet(forms.Form):
[...]
ship1 = forms.IntegerField(initial=0)
ship2 = forms.IntegerField(initial=0)
ship3 = forms.IntegerField(initial=0)
ship4 = forms.IntegerField(initial=0)
How can I make the fields in the form hidden if the user has no 'ships' available (i.e. = 0 in the Fleet model)?
A:
You can override the visible_fields (or hidden_fields if you really want a hidden field) methods in your form to flag them as "invisible" (or hidden inputs). See the docs for details.
EDIT: Something like this should work ...
class sendFleet(forms.Form):
[...]
ship1 = forms.IntegerField(initial=0)
ship2 = forms.IntegerField(initial=0)
def visible_fields(self):
# create a list of fields you don't want to display
invisibles = []
if self.instance.ship1 == 0:
invisibles.append(self.fields['ship1'])
# remove fields from the list of visible fields
visibles = super(sendFleet, self).visible_fields()
return [v for v in visibles if v.field not in invisibles]
Then in your template:
{% for field in form.visible_fields %}
{{ field.label_tag }} : {{ field }}
{% endfor %}
A:
It seems like this problem would be better solved by using a ManyToManyField from Fleet to Ship, or a ForeignKey from Ship to Form, and then simply a ModelMultipleChoiceField in your form... but maybe there's something I'm not understanding.
Either way, a MultipleChoiceField would probably better than this set of IntegerFields. This is basically what a MultipleChoiceField is for.
| Django form don't show specific inputs | Say, I've got a model like this:
class Fleet(models.Model):
user = models.ForeignKey(User)
[...]
ship1 = models.IntegerField(default=0)
ship2 = models.IntegerField(default=0)
ship3 = models.IntegerField(default=0)
ship4 = models.IntegerField(default=0)
And a form:
class sendFleet(forms.Form):
[...]
ship1 = forms.IntegerField(initial=0)
ship2 = forms.IntegerField(initial=0)
ship3 = forms.IntegerField(initial=0)
ship4 = forms.IntegerField(initial=0)
How can I make the fields in the form hidden if the user has no 'ships' available (i.e. = 0 in the Fleet model)?
| [
"You can override the visible_fields (or hidden_fields if you really want a hidden field) methods in your form to flag them as \"invisible\" (or hidden inputs). See the docs for details.\nEDIT: Something like this should work ...\nclass sendFleet(forms.Form):\n [...]\n ship1 = forms.IntegerField(initial=0)\n... | [
2,
1
] | [] | [] | [
"django",
"forms",
"python"
] | stackoverflow_0003314569_django_forms_python.txt |
Q:
Is it possible to cut away transparent areas of Gtk-pixbuf?
I am currently using rsvg in Python to separate SVG groups.
I got it working so that rsvg loads a single group ... alas, all the transparent space around it still remains.
Is there a gtk functionality to cut away all this space?
Thanks for all the answers!
A:
There's nothing built-in but it's fairly simple to code it (walk over pixels, find the transparent ones). Unfortunately walking over pixels in python is probably slow.
| Is it possible to cut away transparent areas of Gtk-pixbuf? | I am currently using rsvg in Python to separate SVG groups.
I got it working so that rsvg loads a single group ... alas, all the transparent space around it still remains.
Is there a gtk functionality to cut away all this space?
Thanks for all the answers!
| [
"There's nothing built-in but it's fairly simple to code it (walk over pixels, find the transparent ones). Unfortunately walking over pixels in python is probably slow.\n"
] | [
0
] | [] | [] | [
"gtk",
"python",
"rsvg"
] | stackoverflow_0003270704_gtk_python_rsvg.txt |
Q:
Dump Contents of Python Module loaded in memory
I ran the Python REPL tool and imported a Python module. Can I dump the contents of that module into a file? Is this possible? Thanks in advance.
A:
In what format do you want to write the file? If you want exactly the same format that got imported, that's not hard -- but basically it's done with a file-to-file copy. For example, if the module in question is called blah, you can do:
>>> import shutil
>>> shutil.copy(blah.__file__, '/tmp/blahblah.pyc')
A:
Do you mean something like this?
http://www.datamech.com/devan/trypython/trypython.py
I don't think it is possible, as this is a very restricted environment.
The __file__ attribute is faked, so doesn't map to a real file
A:
You might get a start by getting a reference to your module object:
modobject = __import__("modulename")
Unfortunately those aren't pickleable. You might be able to iterate over dir(modobject) and get some good info out catching errors along the way... or is a string representation of dir(modobject) itself what you want?
| Dump Contents of Python Module loaded in memory | I ran the Python REPL tool and imported a Python module. Can I dump the contents of that module into a file? Is this possible? Thanks in advance.
| [
"In what format do you want to write the file? If you want exactly the same format that got imported, that's not hard -- but basically it's done with a file-to-file copy. For example, if the module in question is called blah, you can do:\n>>> import shutil\n>>> shutil.copy(blah.__file__, '/tmp/blahblah.pyc')\n\n"... | [
1,
0,
0
] | [] | [] | [
"module",
"python"
] | stackoverflow_0003314259_module_python.txt |
Q:
Handling large dense matrices in python
Basically, what is the best way to go about storing and using dense matrices in python?
I have a project that generates similarity metrics between every item in an array.
Each item is a custom class, and stores a pointer to the other class and a number representing its "closeness" to that class.
Right now, it works brilliantly up to about ~8000 items, after which it fails with an out-of-memory error.
Basically, if you assume that each comparison uses ~30 (seems accurate based on testing) bytes to store the similarity, that means the total required memory is:
numItems^2 * itemSize = Memory
So the memory usage grows quadratically with the number of items.
In my case, the memory size is ~30 bytes per link, so:
8000 * 8000 * 30 = 1,920,000,000 bytes, or 1.9 GB
which is right at the memory limit for a single thread.
It seems to me that there has to be a more effective way of doing this. I've looked at memmapping, but it's already computationally intensive just to generate the similarity values, and bottlenecking it all through a hard drive seems a little ridiculous.
Edit
I've looked at numpy and scipy. Unfortunately, they don't support very large arrays either.
>>> np.zeros((20000,20000), dtype=np.uint16)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
MemoryError
>>>
Further Edit
Numpy seems to be popular. However, numpy won't really do what I want, at least without another abstraction layer.
I don't want to store numbers, I want to store references to classes. Numpy supports objects, but that doesn't really address the array size issues. I brought up numpy just as an example of what isn't working.
Any advice?
Edit Well, I wound up just rewriting all the logic so it no longer stores any redundant values, reducing the memory usage from O(n^2) to O((n*(n-1))/2).
Basically, this whole affair is a version of the handshake problem, so I've switched from storing all links to only a single version of each link.
It's not a complete solution, but I generally don't have any datasets large enough to overflow it, so I think it will work out. PyTables is really interesting, but I don't know any SQL, and there doesn't appear to be any nice traditional slicing or index based way to access the table data. I may revisit the issue in the future.
A:
Well, I've found my solution:
h5py
It's a library that basically presents a numpy-like interface, but uses compressed memmapped files to store arrays of arbitrary size (It's basically a wrapper for HDF5).
PyTables is built on it, and PyTables actually led me to it. However, I do not need any of the SQL functionality that is the main offering of PyTables, and PyTables does not provide the clean array-like interface I was really looking for.
h5py basically acts like a numpy array, and simply stores the data in a different format.
It also seems to have no limit to array size, except perhaps disk space. I'm currently doing testing on a 100,000 * 100,000 array of uint16.
A:
PyTables can handle tables of arbitrary size (millions of columns!) through the use of memmap and some clever compression.
Ostensibly, it provides SQL-like performance to Python. It will, however, require significant code modifications.
I'm not going to accept this answer until I've done a more thorough vetting, to ensure it can actually do what I want. Or someone provides a better solution.
A:
For the 20,000 x 20,000 you are looking at 12GB of RAM?
Aren't you going to end up in swap hell trying to work with 12GB in win32, which artificially limits the memory that the OS can address?
I'd be looking for an OS that can support the 12GB (32-bit Win 2003 Server can, if you need to stick with 32-bit Windows), but a 64-bit machine with a 64-bit OS and 16GB of RAM would seem a better fit.
Good excuse for an upgrade :)
64 bit numpy can support your matrix
Python 2.5.2 (r252:60911, Jan 20 2010, 23:14:04)
[GCC 4.2.4 (Ubuntu 4.2.4-1ubuntu3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.zeros((20000,20000),dtype=np.uint16)
array([[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]], dtype=uint16)
A:
You might find some advice in the NumPy (see SciPy) documentation (arrays/matrices):
http://www.scipy.org/NumPy_for_Matlab_Users
http://www.scipy.org/Tentative_NumPy_Tutorial
A:
If you have N objects, kept in a list L, and you wish to store the similarity between each object and each other object, that's O(N**2) similarities. Under the common conditions that similarity(A, B) == similarity(B, A) and similarity(A, A) == 0, all that you need is a triangular array S of similarities. The number of elements in that array will be N*(N-1)//2. You should be able to use an array.array for this purpose. Keeping your similarity as a float will take only 8 bytes. If you can represent your similarity as an integer in range(256), you can use an unsigned byte as the array.array element.
So that's about 8000 * 8000 / 2 * 8, i.e. about 256 MB. Using only a byte per similarity means only 32 MB. You could avoid the slow S[i*N-i*(i+1)//2+j] index calculation of the triangle thingie by simulating a square array instead, using S[i*N+j]; memory would double (512 MB for float, 64 MB for byte).
If the above doesn't suit you, then perhaps you could explain """Each item [in which container?] is a custom class, and stores a pointer to the other class and a number representing its "closeness" to that class.""" and """I don't want to store numbers, I want to store references to classes""". Even after replacing "class(es)" by "object(s)", I'm struggling to understand what you mean.
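As a concrete sketch of that packed-triangle idea (Python 3; this index convention excludes the diagonal, so it differs by a constant offset from the expression quoted above):

```python
from array import array

def tri_size(n):
    # number of unordered pairs (i, j) with i < j
    return n * (n - 1) // 2

def tri_index(n, i, j):
    # map an unordered pair onto the packed 1-D index
    if i > j:
        i, j = j, i
    return i * n - i * (i + 1) // 2 + (j - i - 1)

n = 8000
S = array('B', bytes(tri_size(n)))   # one unsigned byte per pair: ~32 MB

S[tri_index(n, 3, 7)] = 200          # store similarity(3, 7)
print(S[tri_index(n, 7, 3)])         # 200 -- symmetric lookup
```

With one byte per pair, the 8000-item case from the question fits in about 32 MB instead of 1.9 GB.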
| Handling large dense matrices in python | Basically, what is the best way to go about storing and using dense matrices in python?
I have a project that generates similarity metrics between every item in an array.
Each item is a custom class, and stores a pointer to the other class and a number representing its "closeness" to that class.
Right now, it works brilliantly up to about ~8000 items, after which it fails with an out-of-memory error.
Basically, if you assume that each comparison uses ~30 (seems accurate based on testing) bytes to store the similarity, that means the total required memory is:
numItems^2 * itemSize = Memory
So the memory usage grows quadratically with the number of items.
In my case, the memory size is ~30 bytes per link, so:
8000 * 8000 * 30 = 1,920,000,000 bytes, or 1.9 GB
which is right at the memory limit for a single thread.
It seems to me that there has to be a more effective way of doing this. I've looked at memmapping, but it's already computationally intensive just to generate the similarity values, and bottlenecking it all through a hard drive seems a little ridiculous.
Edit
I've looked at numpy and scipy. Unfortunately, they don't support very large arrays either.
>>> np.zeros((20000,20000), dtype=np.uint16)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
MemoryError
>>>
Further Edit
Numpy seems to be popular. However, numpy won't really do what I want, at least without another abstraction layer.
I don't want to store numbers, I want to store references to classes. Numpy supports objects, but that doesn't really address the array size issues. I brought up numpy just as an example of what isn't working.
Any advice?
Edit Well, I wound up just rewriting all the logic so it no longer stores any redundant values, reducing the memory usage from O(n^2) to O((n*(n-1))/2).
Basically, this whole affair is a version of the handshake problem, so I've switched from storing all links to only a single version of each link.
It's not a complete solution, but I generally don't have any datasets large enough to overflow it, so I think it will work out. PyTables is really interesting, but I don't know any SQL, and there doesn't appear to be any nice traditional slicing or index based way to access the table data. I may revisit the issue in the future.
| [
"Well, I've found my solution:\nh5py\nIt's a library that basically presents a numpy-like interface, but uses compressed memmapped files to store arrays of arbitrary size (It's basically a wrapper for HDF5).\nPyTables is built on it, and PyTables actually led me to it. However, I do not need any of the SQL function... | [
11,
3,
1,
0,
0
] | [
"You can reduce the memory use by using uint8, but be careful to avoid overflow errors. A uint16 requires two bytes, so the minimal memory requirement in your example is 8000*8000*30*2 bytes = 3.84 Gb.\nIf the second example fails then you need a new machine. The memory requirement is 20000*20000*2*bytes =800 Mb. ... | [
-1
] | [
"32_bit",
"matrix",
"python",
"python_2.6",
"windows_xp"
] | stackoverflow_0003218645_32_bit_matrix_python_python_2.6_windows_xp.txt |
Q:
How do I create a unix timestamp that doesn't adjust for localtime?
So I have datetime objects in UTC time and I want to convert them to UTC timestamps. The problem is, time.mktime makes adjustments for localtime.
So here is some code:
import os
import pytz
import time
import datetime
epoch = pytz.utc.localize(datetime.datetime(1970, 1, 1))
print time.mktime(epoch.timetuple())
os.environ['TZ'] = 'UTC+0'
time.tzset()
print time.mktime(epoch.timetuple())
Here is some output:
Python 2.6.4 (r264:75706, Dec 25 2009, 08:52:16)
[GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> import pytz
>>> import time
>>> import datetime
>>>
>>> epoch = pytz.utc.localize(datetime.datetime(1970, 1, 1))
>>> print time.mktime(epoch.timetuple())
25200.0
>>>
>>> os.environ['TZ'] = 'UTC+0'
>>> time.tzset()
>>> print time.mktime(epoch.timetuple())
0.0
So obviously if the system is in UTC time no problem, but when it's not, it is a problem. Setting the environment variable and calling time.tzset works but is that safe? I don't want to adjust it for the whole system.
Is there another way to do this? Or is it safe to call time.tzset this way.
A:
The calendar module contains calendar.timegm which solves this problem.
calendar.timegm(tuple)
An unrelated but handy function that takes a time tuple such as returned by the gmtime() function in the time module, and returns the corresponding Unix timestamp value, assuming an epoch of 1970, and the POSIX encoding. In fact, time.gmtime() and timegm() are each others’ inverse.
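For example (Python 3 print syntax), timegm interprets the tuple as UTC, so the result does not depend on the process's local timezone:

```python
import calendar
import datetime

# a naive datetime that we know represents UTC
epoch = datetime.datetime(1970, 1, 1)
print(calendar.timegm(epoch.timetuple()))   # 0, regardless of local TZ

later = datetime.datetime(1970, 1, 2)
print(calendar.timegm(later.timetuple()))   # 86400
```

No environment-variable juggling or time.tzset call is needed.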
| How do I create a unix timestamp that doesn't adjust for localtime? | So I have datetime objects in UTC time and I want to convert them to UTC timestamps. The problem is, time.mktime makes adjustments for localtime.
So here is some code:
import os
import pytz
import time
import datetime
epoch = pytz.utc.localize(datetime.datetime(1970, 1, 1))
print time.mktime(epoch.timetuple())
os.environ['TZ'] = 'UTC+0'
time.tzset()
print time.mktime(epoch.timetuple())
Here is some output:
Python 2.6.4 (r264:75706, Dec 25 2009, 08:52:16)
[GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> import pytz
>>> import time
>>> import datetime
>>>
>>> epoch = pytz.utc.localize(datetime.datetime(1970, 1, 1))
>>> print time.mktime(epoch.timetuple())
25200.0
>>>
>>> os.environ['TZ'] = 'UTC+0'
>>> time.tzset()
>>> print time.mktime(epoch.timetuple())
0.0
So obviously if the system is in UTC time no problem, but when it's not, it is a problem. Setting the environment variable and calling time.tzset works but is that safe? I don't want to adjust it for the whole system.
Is there another way to do this? Or is it safe to call time.tzset this way.
| [
"The calendar module contains calendar.timegm which solves this problem.\ncalendar.timegm(tuple)\n\n\nAn unrelated but handy function that takes a time tuple such as returned by the gmtime() function in the time module, and returns the corresponding Unix timestamp value, assuming an epoch of 1970, and the POSIX enc... | [
6
] | [] | [] | [
"python",
"pytz",
"timezone"
] | stackoverflow_0003315092_python_pytz_timezone.txt |
Q:
Django LocaleMiddleware determines the language for me. How do I know what language it determined?
I need to know what language it returns, so that I can perform my own actions.
A:
from django.utils import translation
def myview(...):
...
lang = translation.get_language()
...
This will return the language code used in the current thread, so in your case, that set by the middleware.
A:
templates with RequestContext have {{ LANGUAGE_CODE }} by default. http://docs.djangoproject.com/en/dev/topics/i18n/internationalization/#other-tags
| Django LocaleMiddleware determines the language for me. How do I know what language it determined? | I need to know what language it returns, so that I can perform my own actions.
| [
"from django.utils import translation\n\ndef myview(...):\n ...\n lang = translation.get_language()\n ...\n\nThis will return the language code used in the current thread, so in your case, that set by the middleware.\n",
"templates with RequestContext have {{ LANGUAGE_CODE }} by default. http://docs.djan... | [
6,
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0002900969_django_python.txt |
Q:
Problems with django deployment
I'm trying to deploy my Django app and I always get one of these errors (they alternate as I refresh the page):
The model Page has already been registered (it's from feincms, but I don't get this on my computer)
unable to open database file (the database is sqlite3 and was successfully created with syncdb on the server)
Any ideas on what might be the problem?
A:
The first one is probably because on your local computer you run Django as CGI, or in some other "new request, new process" way. If you register the Page model on every request, it works because each process only ever serves a single request. But on the web server your app is loaded as FCGI or something similar, so only the first request is served correctly (when the second request is sent, your app tries to register the Page model again).
The second one is probably because you have a relative path to the db file. If you type
./manage.py syncdb
in your project dir '/my/project/dir', Django looks for the file at '/my/project/dir/mydb.sqlite'.
But when the app runs under the web server, the working directory is different, e.g. '/some/http/server/path', so the relative path no longer resolves.
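A common fix is to build the path relative to the settings module instead of the working directory. This is a sketch: the helper name is made up, and in a real settings.py of that Django era you would pass __file__ and assign the result to DATABASE_NAME:

```python
import os

def absolute_db_path(settings_file, db_name='mydb.sqlite'):
    # resolve the db file relative to the settings module, not the cwd
    base = os.path.dirname(os.path.abspath(settings_file))
    return os.path.join(base, db_name)

# in settings.py you would call: absolute_db_path(__file__)
print(absolute_db_path('/my/project/dir/settings.py'))
# -> /my/project/dir/mydb.sqlite on a POSIX system
```

Because the path is absolute, syncdb on the command line and the FCGI process both open the same file.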
| Problems with django deployment | I'm trying to deploy my Django app and I always get one of these errors (they alternate as I refresh the page):
The model Page has already been registered (it's from feincms, but I don't get this on my computer)
unable to open database file (the database is sqlite3 and was successfully created with syncdb on the server)
Any ideas on what might be the problem?
| [
"First one is probably because on your local computer you run Django as CGI, or some other \"new request - different process\" way. So if you registering Page model in every request, it's works because you have single request. But on web server your app is loaded as FCGI or some other way like this, so only first r... | [
2
] | [] | [] | [
"deployment",
"django",
"django_database",
"python"
] | stackoverflow_0003315314_deployment_django_django_database_python.txt |
Q:
Consecutive, conflicting regex substitutions
I've tried this for the italics:
r = re.compile(r"(\*[^ ]+\*)")
r.sub(r'<i>"\1"</i>', foo)
but it doesn't work, as I am sure anyone in the know about regexes will see right away.
A:
It would easily work, if you switch the order of substitutions. Handling the bold case first would prevent italics taking over.
A:
Your regex and substitution need a few tweaks.
r = re.compile(r"(\*[^ ]+\*)")
You are capturing a bit too much here -- the asterisks are preserved in \1.
r.sub(r'<i>"\1"</i>', foo)
You are substituting a bit too much here -- the double-quote marks are included in the substitution. Example:
r.sub(r'<i>"\1"</i>', '*foo*') # -> '<i>"*foo*"</i>'
Try something like this:
foo = '***foo***'
bold = re.compile(r'''\*\*([^ ]+)\*\*''')
ital = re.compile(r'''\*([^ ]+)\*''')
ital.sub(r'''<i>\1</i>''', bold.sub(r'''<b>\1</b>''', foo)) # '<b><i>foo</i></b>'
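To see why the bold pass has to run first, compare the two orders on a small sample (same patterns as above):

```python
import re

bold = re.compile(r'\*\*([^ ]+)\*\*')
ital = re.compile(r'\*([^ ]+)\*')

text = '**bold** and *ital*'

# bold first: the double asterisks are consumed before italics run
right = ital.sub(r'<i>\1</i>', bold.sub(r'<b>\1</b>', text))
print(right)   # <b>bold</b> and <i>ital</i>

# italics first: the single-asterisk pattern eats the bold markers
wrong = bold.sub(r'<b>\1</b>', ital.sub(r'<i>\1</i>', text))
print(wrong)   # <i>*bold*</i> and <i>ital</i>
```

The italic pattern happily matches an asterisk inside its capture group, which is what breaks the wrong ordering.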
| Consecutive, conflicting regex substitutions | I've tried this for the italics:
r = re.compile(r"(\*[^ ]+\*)")
r.sub(r'<i>"\1"</i>', foo)
but it doesn't work, as I am sure anyone in the know about regexes will see right away.
| [
"It would easily work, if you switch the order of substitutions. Handling the bold case first would prevent italics taking over.\n",
"Your regex and substitution need a few tweaks.\nr = re.compile(r\"(\\*[^ ]+\\*)\")\n\nYou are capturing a bit too much here -- the asterisks are preserved in \\1.\nr.sub(r'<i>\"\\1... | [
2,
1
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0003314682_python_regex.txt |
Q:
how can I call view method in different files
If I have one view file called myview1.py and I want to call a view that is located in myview2.py, how can I do that?
Should I import myview2.py somehow?
A:
I think you need to read about modules, but here's the cheat sheet:
$ cat gilliam.py
def spam():
print 'eggs'
$ cat jones.py
import gilliam
gilliam.spam()
$ python jones.py
eggs
A:
just import it
from myview2 import viewname1, viewname2
value = viewname1(params)
A:
You should be able to just do
import myview2
and be able to access it's methods from there, assuming myview2.py is in your include path.
| how can I call view method in different files | If I have one view which called myview1.py and I want to call a view which is located in myview2.py, how can I do that?
should I import myview2.py somehow?
| [
"I think you need to read about modules, but here's the cheat sheet:\n$ cat gilliam.py\ndef spam():\n print 'eggs'\n$ cat jones.py\nimport gilliam\ngilliam.spam()\n$ python jones.py\neggs\n\n",
"just import it\nfrom myview2 import viewname1, viewname2\n\nvalue = viewname1(params)\n\n",
"You should be able to... | [
3,
3,
2
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003315362_django_python.txt |
Q:
Python - Referencing class name from inside class body
In Python, I want to have a class attribute, a dictionary, with initialized values. I wrote this code:
class MetaDataElement:
(MD_INVALID, MD_CATEGORY, MD_TAG) = range(3)
mapInitiator2Type = {'!':MetaDataElement.MD_CATEGORY,
'#':MetaDataElement.MD_TAG}
But when I try to run this code, I get an error message with "NameError: name 'MetaDataElement' is not defined". Could you help me?
Thanks in advance.
A:
You cannot refer to MetaDataElement while it is being constructed, since it does not yet exist. Thus,
class MetaDataElement:
(MD_INVALID, MD_CATEGORY, MD_TAG) = range(3)
mapInitiator2Type = {'!':MetaDataElement.MD_CATEGORY,
'#':MetaDataElement.MD_TAG}
fails because the very construction of mapInitiator2Type requires MetaDataElement to have attributes, which it does not yet have. You can think of your constants MD_INVALID, etc. as variables that are local to the construction of your class. This is why the following works, as icktoofay wrote:
class MetaDataElement:
(MD_INVALID, MD_CATEGORY, MD_TAG) = range(3)
mapInitiator2Type = {'!': MD_CATEGORY, # MD_CATEGORY is like a local variable!
'#': MD_TAG}
However, you can refer to the class MetaDataElement in any yet un-interpreted piece of code, as in
def method_of_MetaDataElement(self):
print MetaDataElement.MD_TAG
You even have to refer to MetaDataElement, here, because MD_TAG is not a kind of local variable when method_of_MetaDataElement() is executed (MD_TAG was only defined as a kind of local variable during class construction). Once the class MetaDataElement is created, MD_TAG is simply a class attribute, which is why method_of_MetaDataElement() must refer to it as such.
A:
First of all, you're using old-style classes. You should probably use new-style classes, like so:
class MetaDataElement(object):
...
Note the (object). Anyway, though, simply remove the MetaDataElement. when referring to the class attributes. This is what it'd look like when that's done:
class MetaDataElement(object):
(MD_INVALID, MD_CATEGORY, MD_TAG) = range(3)
mapInitiator2Type = {'!': MD_CATEGORY,
'#': MD_TAG}
| Python - Referencing class name from inside class body | In Python, I want to have a class attribute, a dictionary, with initialized values. I wrote this code:
class MetaDataElement:
(MD_INVALID, MD_CATEGORY, MD_TAG) = range(3)
mapInitiator2Type = {'!':MetaDataElement.MD_CATEGORY,
'#':MetaDataElement.MD_TAG}
But when I try to run this code, I get an error message with "NameError: name 'MetaDataElement' is not defined". Could you help me?
Thanks in advance.
| [
"You cannot refer to MetaDataElement while it is being constructed, since it does not yet exist. Thus,\nclass MetaDataElement:\n (MD_INVALID, MD_CATEGORY, MD_TAG) = range(3)\n mapInitiator2Type = {'!':MetaDataElement.MD_CATEGORY, \n '#':MetaDataElement.MD_TAG}\n\nfails because the ver... | [
3,
1
] | [] | [] | [
"class_attributes",
"python",
"static"
] | stackoverflow_0003315510_class_attributes_python_static.txt |
Q:
Optimal extraction of columns from numpy matrix
Say I have a numpy matrix like so:
[[ x1, x2, x3, ... ],
[ y1, y2, y3, ... ],
[ z1, z2, z3, ... ],
[ 1, 1, 1, ... ]]
From which I want to extract a list of lists like so:
[[x1, y1, z1], [x2, y2, z2], [x3, y3, z3], ... ]
What is the most optimal way of doing this?
At the moment I have:
tpoints = [pt[:3].tolist() for pt in numpy.asarray(tptmat.T)]
And the call to tolist() is taking up a disproportionate amount of time, approximately a third of the time spent in the most time-consuming function of my program.
ncalls tottime percall cumtime percall filename:lineno(function)
14422540 69.777 0.000 69.777 0.000 {method 'tolist' of 'numpy.ndarray' objects}
20 64.258 3.213 178.057 8.903 trans.py:152(_apply)
...
A:
Why not remove the last row before the transpose?
m[:3].T.tolist()
# ^^^^^^^^^ optional
A micro-benchmark shows this method is faster than yours by 61%, and if you don't convert it into a list of lists it is 45 times faster, for a 100×4 matrix.
$ python2.5 -m timeit -s 'import numpy; m = numpy.matrix([[5]*100,[6]*100,[7]*100,[1]*100])' 'm[:3].T'
100000 loops, best of 3: 6.26 usec per loop
$ python2.5 -m timeit -s 'import numpy; m = numpy.matrix([[5]*100,[6]*100,[7]*100,[1]*100])' 'm[:3].T.tolist()'
10000 loops, best of 3: 180 usec per loop
$ python2.5 -m timeit -s 'import numpy; m = numpy.matrix([[5]*100,[6]*100,[7]*100,[1]*100])' 'numpy.asarray(m[:3].T)'
100000 loops, best of 3: 10.9 usec per loop
$ python2.5 -m timeit -s 'import numpy; m = numpy.matrix([[5]*100,[6]*100,[7]*100,[1]*100])' '[p[:3].tolist()for p in numpy.asarray(m.T)]'
1000 loops, best of 3: 289 usec per loop
A:
Have you tried zip(*matrix)? This will leave you with
[[x1, y1, z1, 1], [x2, y2, z2, 1], [x3, y3, z3, 1], ... ]
But list generation will probably still happen...
Wait (slaps palm on forehead)! This should do the trick:
zip(*matrix[:3])
In the interactive shell:
>>> matrix = [[ 11, 12, 13, 14],
... [ 21, 22, 23, 24],
... [ 31, 32, 33, 34],
... [ 1, 1, 1, 1]]
>>> zip(*matrix[:3])
[(11, 21, 31), (12, 22, 32), (13, 23, 33), (14, 24, 34)]
>>>
This is a list of tuples, though, but does that really matter?
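One Python 3 caveat for the record: zip returns a lazy iterator there, so wrap it in list() to get the same result:

```python
matrix = [[11, 12, 13, 14],
          [21, 22, 23, 24],
          [31, 32, 33, 34],
          [ 1,  1,  1,  1]]

# Python 3: zip is lazy, so materialise the columns explicitly
cols = list(zip(*matrix[:3]))
print(cols)   # [(11, 21, 31), (12, 22, 32), (13, 23, 33), (14, 24, 34)]

# tuples -> lists, if downstream code insists on lists
cols_as_lists = [list(t) for t in cols]
```

The tuple-vs-list distinction only matters if the consumer mutates the rows.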
| Optimal extraction of columns from numpy matrix | Say I have a numpy matrix like so:
[[ x1, x2, x3, ... ],
[ y1, y2, y3, ... ],
[ z1, z2, z3, ... ],
[ 1, 1, 1, ... ]]
From which I want to extract a list of lists like so:
[[x1, y1, z1], [x2, y2, z2], [x3, y3, z3], ... ]
What is the most optimal way of doing this?
At the moment I have:
tpoints = [pt[:3].tolist() for pt in numpy.asarray(tptmat.T)]
And the call to tolist() is taking up a disproportionate amount of time, approximately a third of the time spent in the most time-consuming function of my program.
ncalls tottime percall cumtime percall filename:lineno(function)
14422540 69.777 0.000 69.777 0.000 {method 'tolist' of 'numpy.ndarray' objects}
20 64.258 3.213 178.057 8.903 trans.py:152(_apply)
...
| [
"Why not remove the last row before the transpose?\nm[:3].T.tolist()\n# ^^^^^^^^^ optional\n\nMicro-benchmark shows this method is faster than yours by 61%, and if you don't convert it into a list of list it is 45 times faster, for a 100×4 matrix.\n$ python2.5 -m timeit -s 'import numpy; m = numpy.matrix([[5]*... | [
3,
1
] | [] | [] | [
"numpy",
"optimization",
"python"
] | stackoverflow_0003315894_numpy_optimization_python.txt |
Q:
Python Command Line "characters" returns 'characters'
Thanks in advance for your help.
When entering "example" at the command line, Python returns 'example'. I can not find anything on the web to explain this. All reference materials speaks to strings in the context of the print command, and I get all of the material about using double quotes, singles quotes, triple quotes, escape commands, etc.
I can not, however, find anything explaining why entering text surrounded by double quotes at the command line always returns the same text surrounded by single quotes. What gives? Thanks.
A:
In Python both 'string' and "string" are used to represent string literals. It's not like Java where single and double quotes represent different data types to the compiler.
The interpreter evaluates each line you enter and displays this value to you. In both cases the interpreter is evaluating what you enter, getting a string, and displaying this value. The default way of displaying strings is in single quotes so both times the string is displayed enclosed in single quotes.
It does seem odd - in that it breaks Python's rule of "There should be one - and preferably only one - obvious way to do it" - but I think disallowing one of the options would have been worse.
You can also enter a string literal using triple quotes:
>>> """characters
... and
... newlines"""
'characters\nand\nnewlines'
You can use the command line to confirm that these are the same thing:
>>> type("characters")
<type 'str'>
>>> type('characters')
<type 'str'>
>>> "characters" == 'characters'
True
The interpreter uses the __repr__ method of an object to get the display to print to you. So on your own objects you can determine how they are displayed in the interpreter. We can't change the __repr__ method for built in types, but we can customise the interpreter output using sys.displayhook:
>>> import sys
>>> def customoutput(value):
... if isinstance(value,str):
... print '"%s"' % value
... else:
... sys.__displayhook__(value)
...
>>> sys.displayhook = customoutput
>>> 'string'
"string"
A:
In python, single quotes and double quotes are semantically the same.
It struck me as strange at first, since in C++ and other strongly-typed languages single quotes give a char and double quotes give a string.
Just get used to the fact that python doesn't care about types, and so there's no special syntax for marking a string vs. a character. Don't let it cloud your perception of a great language
A:
Don't get confused.
In Python, single quotes and double quotes are the same. Both create a string object.
| Python Command Line "characters" returns 'characters' | Thanks in advance for your help.
When entering "example" at the command line, Python returns 'example'. I can not find anything on the web to explain this. All reference materials speaks to strings in the context of the print command, and I get all of the material about using double quotes, singles quotes, triple quotes, escape commands, etc.
I can not, however, find anything explaining why entering text surrounded by double quotes at the command line always returns the same text surrounded by single quotes. What gives? Thanks.
| [
"In Python both 'string' and \"string\" are used to represent string literals. It's not like Java where single and double quotes represent different data types to the compiler. \nThe interpreter evaluates each line you enter and displays this value to you. In both cases the interpreter is evaluating what you ent... | [
6,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0003315346_python.txt |
Q:
Why would you use lambda instead of def?
Is there any reason that will make you use:
add2 = lambda n: n+2
instead of :
def add2(n): return n+2
I tend to prefer the def way but every now and then I see the lambda way being used.
EDIT :
The question is not about lambda as unnamed function, but about lambda as a named function.
There is a good answer to the same question here.
A:
lambda is nice for small, unnamed functions but in this case it would serve no purpose other than make functionalists happy.
A:
I usually use a lambda for throwaway functions I'm going to use only in one place, for example as an argument to a function. This keeps the logic together and avoids filling the namespace with function names I don't need.
For example, I think
map(lambda x: x * 2,xrange(1,10))
is neater than:
def bytwo(x):
return x * 2
map(bytwo,xrange(1,10))
A:
Guido Van Rossum (the creator of Python) prefers def over lambdas (lambdas only made it to Python 3.0 at the last moment) and there is nothing you can't do with def that you can do with lambdas.
People who like the functional programming paradigm seem to prefer them. Some times using an anonymous function (that you can create with a lambda expression) gives more readable code.
A:
I recall David Mertz writing that he preferred to use the form add2 = lambda n: n+2 because it emphasised that add2 is a pure function that has no side effects. However I think 99% of Python programmers would use the def statement and only use lambda for anonymous functions, since that is what it is intended for.
A:
One place you can't use def is in expressions: A def is a statement. So that is about the only place I'd use lambda.
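For instance, a sort key must be an expression, so a lambda fits where a def statement cannot (the sample data below is made up purely for illustration, in modern Python 3 style):

```python
# A def statement cannot appear inside a function call, but a lambda can.
pairs = [(2, 'b'), (1, 'a'), (3, 'c')]
print(sorted(pairs, key=lambda p: p[0]))  # [(1, 'a'), (2, 'b'), (3, 'c')]
```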
| Why would you use lambda instead of def? | Is there any reason that will make you use:
add2 = lambda n: n+2
instead of :
def add2(n): return n+2
I tend to prefer the def way but every now and then I see the lambda way being used.
EDIT :
The question is not about lambda as unnamed function, but about lambda as a named function.
There is a good answer to the same question here.
| [
"lambda is nice for small, unnamed functions but in this case it would serve no purpose other than make functionalists happy.\n",
"I usually use a lambda for throwaway functions I'm going to use only in one place, for example as an argument to a function. This keeps the logic together and avoids filling the name... | [
4,
2,
1,
1,
0
] | [] | [] | [
"lambda",
"python"
] | stackoverflow_0003288421_lambda_python.txt |
Q:
Executing a function every 1 second
Possible Duplicate:
Python timer countdown
Hi guys,
I want to know about timer in Python.
Suppose i have a code snippet something like:
def abc():
    print 'Hi'
    print 'Hello'
    print 'Hai'
And I want to print it every 1 second, max three times; i.e., in the 1st second I need to check the print, in the 2nd second I need to check as well, and in the 3rd second.
In my actual code variables value will be updated.
I need to capture at what second all the variables are getting updated.
Can anybody tell me how to do this.
A:
You can use a loop, and the time module, with time.sleep to wait one second, and time.time to know at what time you print your variables.
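A minimal sketch of that approach (the three-iteration cap follows the question's "max three times"; written here in modern print-function style):

```python
import time

def abc():
    print('Hi')
    print('Hello')
    print('Hai')

stamps = []
for _ in range(3):              # run at most three times, once per second
    abc()
    stamps.append(time.time())  # record when this iteration ran
    time.sleep(1)
# consecutive stamps are roughly one second apart
```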
A:
You can see, if you can utilize the sched module from the standard library:
The sched module defines a class which implements a general purpose event scheduler.
>>> import sched, time
>>> s = sched.scheduler(time.time, time.sleep)
>>> def print_time(): print "From print_time", time.time()
...
>>> def print_some_times():
... print time.time()
... s.enter(5, 1, print_time, ())
... s.enter(10, 1, print_time, ())
... s.run()
... print time.time()
...
>>> print_some_times()
930343690.257
From print_time 930343695.274
From print_time 930343700.273
930343700.276
| Executing a function every 1 second |
Possible Duplicate:
Python timer countdown
Hi guys,
I want to know about timer in Python.
Suppose i have a code snippet something like:
def abc():
    print 'Hi'
    print 'Hello'
    print 'Hai'
And I want to print it every 1 second, max three times; i.e., in the 1st second I need to check the print, in the 2nd second I need to check as well, and in the 3rd second.
In my actual code variables value will be updated.
I need to capture at what second all the variables are getting updated.
Can anybody tell me how to do this.
| [
"You can use a while loop, and the time module, with time.sleep to wait one second, and time.clock to know at what time you print your variables.\n",
"You can see, if you can utilize the sched module from the standard library:\n\nThe sched module defines a class which implements a general purpose event scheduler.... | [
3,
2
] | [] | [] | [
"python",
"timer"
] | stackoverflow_0003316240_python_timer.txt |
Q:
Unable to get correct link in BeautifulSoup
I'm trying to parse a bit of HTML and I'd like to extract the link that matches a particular pattern. I'm using the find method with a regular expression but it doesn't get me the correct link. Here's my snippet. Could someone tell me what I'm doing wrong?
from BeautifulSoup import BeautifulSoup
import re
html = """
<div class="entry">
<a target="_blank" href="http://www.rottentomatoes.com/m/diary_of_a_wimpy_kid/">RT</a>
<a target="_blank" href="http://www.imdb.com/video/imdb/vi2496267289/">Trailer</a> –
<a target="_blank" href="http://www.imdb.com/title/tt1196141/">IMDB</a> –
</div>
"""
soup = BeautifulSoup(html)
print soup.find('a', href = re.compile(r".*title/tt.*"))['href']
I should be getting the second link but BS always returns the first link. The href of the first link doesn't even match my regex so why does it return it?
Thanks.
A:
find only returns the first <a> tag. You want findAll.
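If BeautifulSoup is not available, the same "collect all matches, then filter" idea can be sketched with nothing but the standard library's re module (a rough illustration only, not the BeautifulSoup API):

```python
import re

html = '''
<a target="_blank" href="http://www.rottentomatoes.com/m/diary_of_a_wimpy_kid/">RT</a>
<a target="_blank" href="http://www.imdb.com/video/imdb/vi2496267289/">Trailer</a>
<a target="_blank" href="http://www.imdb.com/title/tt1196141/">IMDB</a>
'''

# Pull every href out, then keep only those matching the title/tt pattern.
hrefs = re.findall(r'href="([^"]+)"', html)
wanted = [h for h in hrefs if re.search(r'title/tt', h)]
print(wanted)  # ['http://www.imdb.com/title/tt1196141/']
```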
A:
Can't answer your question, but anyway your (originally) posted code has an import typo. Change
import BeautifulSoup
to
from BeautifulSoup import BeautifulSoup
Then, your output (using beautifulsoup version 3.1.0.1) will be:
http://www.imdb.com/title/tt1196141/
| Unable to get correct link in BeautifulSoup | I'm trying to parse a bit of HTML and I'd like to extract the link that matches a particular pattern. I'm using the find method with a regular expression but it doesn't get me the correct link. Here's my snippet. Could someone tell me what I'm doing wrong?
from BeautifulSoup import BeautifulSoup
import re
html = """
<div class="entry">
<a target="_blank" href="http://www.rottentomatoes.com/m/diary_of_a_wimpy_kid/">RT</a>
<a target="_blank" href="http://www.imdb.com/video/imdb/vi2496267289/">Trailer</a> –
<a target="_blank" href="http://www.imdb.com/title/tt1196141/">IMDB</a> –
</div>
"""
soup = BeautifulSoup(html)
print soup.find('a', href = re.compile(r".*title/tt.*"))['href']
I should be getting the second link but BS always returns the first link. The href of the first link doesn't even match my regex so why does it return it?
Thanks.
| [
"find only returns the first <a> tag. You want findAll.\n",
"Can't answer your question, but anyway your (originally) posted code has an import typo. Change\nimport BeautifulSoup\n\nto \nfrom BeautifulSoup import BeautifulSoup\n\nThen, your output (using beautifulsoup version 3.1.0.1) will be:\nhttp://www.imdb.co... | [
2,
0
] | [] | [] | [
"beautifulsoup",
"python"
] | stackoverflow_0003316415_beautifulsoup_python.txt |
Q:
Python and/or django solution for reading log files on linux?
I would like my Django application to be able to display local syslog etc files. I would like to avoid writing the logic for managing .1,.2 etc rotated files, and get an object for each log that I can retrieve a set of rows from.
Is there any such python library, or even better, any such django app?
Clarification: I don't want to write messages to a log, I want to read the messages that are already there.
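One stdlib-only way to gather a log file and its `.1`, `.2`, … rotations in order, which might be a starting point for the rotation-handling logic (paths and the helper name here are illustrative assumptions, not an existing library):

```python
import glob
import os

def rotated_set(base='/var/log/syslog'):
    """Return the live file followed by its .1, .2, ... rotations."""
    files = glob.glob(base + '*')
    def key(path):
        suffix = path[len(base):].lstrip('.')
        return int(suffix) if suffix.isdigit() else -1  # the live file sorts first
    return sorted(files, key=key)
```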
| Python and/or django solution for reading log files on linux? | I would like my Django application to be able to display local syslog etc files. I would like to avoid writing the logic for managing .1,.2 etc rotated files, and get an object for each log that I can retrieve a set of rows from.
Is there any such python library, or even better, any such django app?
Clarification: I don't want to write messages to a log, I want to read the messages that are already there.
| [] | [] | [
"Python has a syslog module. You can also use SysLogHandler.\n"
] | [
-1
] | [
"django",
"logging",
"logrotate",
"python",
"syslog"
] | stackoverflow_0003316736_django_logging_logrotate_python_syslog.txt |
Q:
XML Question relating to Java and Python
I am trying to write an application using Python 2.7 that will allow the user to open a dialog box, pick a file, have it read into a structure of some sort (ArrayList, List etc...) open to suggestions here and then to find basic statistical measures from the data (things like mean, standard deviation etc...) and to output the original file with a summary of the statistical measurements in XML format. Just wondering how is the best way to accomplish this. I have code for opening a window to allow a user to pick a file (from another website) but not sure how to use it to pass the selected file into the function that will read the xml file selected.
Here is the code for the window:
from Tkinter import *
from tkMessageBox import *
from tkColorChooser import askcolor
from tkFileDialog import askopenfilename
def callback():
askopenfilename()
Button(text='Please Select File', command=callback).pack(fill=X)
mainloop()
pass
def quit(event):
if tkMessageBox.askokcancel('Quit','Do you really want to quit?'):
root.destroy()
I think there is more to the pass function.
I have a version of the XMLReader in Java (That is run in BlueJ so don't know if it will run out side of it) code:
import java.util.*;
import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
public class XMLReader
{
public static void main(String argv[]) {
ArrayList XTimeStamp = new ArrayList();
ArrayList XY = new ArrayList();
ArrayList Price = new ArrayList();
try {
File file = new File("<PATH>\\shares.xml");
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
Document doc = db.parse(file);
doc.getDocumentElement().normalize();
System.out.println("Root element " + doc.getDocumentElement().getNodeName());
NodeList nodeLst = doc.getElementsByTagName("shareprice");
System.out.println("Share Price");
for (int s = 0; s < nodeLst.getLength(); s++) {
Node fstNode = nodeLst.item(s);
if (fstNode.getNodeType() == Node.ELEMENT_NODE) {
Element fstElmnt = (Element) fstNode;
NodeList fstNmElmntLst = fstElmnt.getElementsByTagName("timeStamp");
Element fstNmElmnt = (Element) fstNmElmntLst.item(0);
NodeList fstNm = fstNmElmnt.getChildNodes();
String timeStamp = fstNm.item(0).getNodeValue();
XTimeStamp.add(timeStamp);
System.out.println("timeStamp : " + ((Node) fstNm.item(0)).getNodeValue());
NodeList lstNmElmntLst = fstElmnt.getElementsByTagName("Price");
Element lstNmElmnt = (Element) lstNmElmntLst.item(0);
NodeList lstNm = lstNmElmnt.getChildNodes();
String YValue = lstNm.item(0).getNodeValue();
Price.add(YValue);
System.out.println("Price : " + ((Node) lstNm.item(0)).getNodeValue());
}
}
} catch (Exception e) {
e.printStackTrace();
}
System.out.println(XTimeStamp);
System.out.println(Price);
XY.add (XTimeStamp);
XY.add (Price);
System.out.println(XY);
}
}
The one thing I don't like about the Java code is that I have to include the path of the file. I would like to allow the user to pick the file in the Java version as well. The reason I started with java is I have a little bit more experience (but not much). To create the application what is the best way forward, Python or Java and depending on which one any help on getting started sorting the issues would be greatly appreciated.
A:
Well, you could do this in either Python or Java without too much difficulty, but I would make up your mind as to which one you want! I'll talk about Python because I much prefer it to Java, but I'm sure you can find people who will help you with Java.
So first, you want a file selection box with Tkinter. Well, that's pretty standard and there's lots of documentation and code boilerplate on Google. Try:
import Tkinter,tkFileDialog
root = Tkinter.Tk()
file = tkFileDialog.askopenfile(parent=root,mode='rb',title='Choose a file')
for starters; you can make it much more complicated if you'd like. Then to parse the file you could use Python's xml.parsers.expat module:
import xml.parsers.expat
xmlparser = xml.parsers.expat.ParserCreate()
xmlparser.ParseFile(file)
You'll have to write functions for what to do when the parser meets each type of node, but that's in the documentation. Then, you want to do some statistics on what you parse; scipy has all the functions you'll ever need to do so. This is the bit where you'll need to roll your own code the most.
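Those handler functions might look roughly like this (the element names mirror the shareprice XML from the Java reader above; the inline document and variable names are illustrative assumptions):

```python
import xml.parsers.expat

# A tiny inline document standing in for the user's shares.xml
xml_data = ('<shares><shareprice>'
            '<timeStamp>09:00</timeStamp><Price>1.5</Price>'
            '</shareprice></shares>')

rows = []
stack = []          # track which element we are currently inside

def start_element(name, attrs):
    stack.append(name)

def end_element(name):
    stack.pop()

def char_data(text):
    # Only keep text that sits directly inside the elements we care about.
    if stack and stack[-1] in ('timeStamp', 'Price'):
        rows.append((stack[-1], text))

p = xml.parsers.expat.ParserCreate()
p.StartElementHandler = start_element
p.EndElementHandler = end_element
p.CharacterDataHandler = char_data
p.Parse(xml_data, True)
print(rows)  # [('timeStamp', '09:00'), ('Price', '1.5')]
```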
Finally, you want to write the statistics you've generated to a new file (I think?).
statsfile = open("stats.txt", "w")
try:
    statsfile.write(stats)
finally:
    statsfile.close()
Does that help?
| XML Question relating to Java and Python | I am trying to write an application using Python 2.7 that will allow the user to open a dialog box, pick a file, have it read into a structure of some sort (ArrayList, List etc...) open to suggestions here and then to find basic statistical measures from the data (things like mean, standard deviation etc...) and to output the original file with a summary of the statistical measurements in XML format. Just wondering how is the best way to accomplish this. I have code for opening a window to allow a user to pick a file (from another website) but not sure how to use it to pass the selected file into the function that will read the xml file selected.
Here is the code for the window:
from Tkinter import *
from tkMessageBox import *
from tkColorChooser import askcolor
from tkFileDialog import askopenfilename
def callback():
askopenfilename()
Button(text='Please Select File', command=callback).pack(fill=X)
mainloop()
pass
def quit(event):
if tkMessageBox.askokcancel('Quit','Do you really want to quit?'):
root.destroy()
I think there is more to the pass function.
I have a version of the XMLReader in Java (That is run in BlueJ so don't know if it will run out side of it) code:
import java.util.*;
import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
public class XMLReader
{
public static void main(String argv[]) {
ArrayList XTimeStamp = new ArrayList();
ArrayList XY = new ArrayList();
ArrayList Price = new ArrayList();
try {
File file = new File("<PATH>\\shares.xml");
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
Document doc = db.parse(file);
doc.getDocumentElement().normalize();
System.out.println("Root element " + doc.getDocumentElement().getNodeName());
NodeList nodeLst = doc.getElementsByTagName("shareprice");
System.out.println("Share Price");
for (int s = 0; s < nodeLst.getLength(); s++) {
Node fstNode = nodeLst.item(s);
if (fstNode.getNodeType() == Node.ELEMENT_NODE) {
Element fstElmnt = (Element) fstNode;
NodeList fstNmElmntLst = fstElmnt.getElementsByTagName("timeStamp");
Element fstNmElmnt = (Element) fstNmElmntLst.item(0);
NodeList fstNm = fstNmElmnt.getChildNodes();
String timeStamp = fstNm.item(0).getNodeValue();
XTimeStamp.add(timeStamp);
System.out.println("timeStamp : " + ((Node) fstNm.item(0)).getNodeValue());
NodeList lstNmElmntLst = fstElmnt.getElementsByTagName("Price");
Element lstNmElmnt = (Element) lstNmElmntLst.item(0);
NodeList lstNm = lstNmElmnt.getChildNodes();
String YValue = lstNm.item(0).getNodeValue();
Price.add(YValue);
System.out.println("Price : " + ((Node) lstNm.item(0)).getNodeValue());
}
}
} catch (Exception e) {
e.printStackTrace();
}
System.out.println(XTimeStamp);
System.out.println(Price);
XY.add (XTimeStamp);
XY.add (Price);
System.out.println(XY);
}
}
The one thing I don't like about the Java code is that I have to include the path of the file. I would like to allow the user to pick the file in the Java version as well. The reason I started with java is I have a little bit more experience (but not much). To create the application what is the best way forward, Python or Java and depending on which one any help on getting started sorting the issues would be greatly appreciated.
| [
"Well, you could do this in either Python or Java without too much difficulty, but I would make up your mind as to which one you want! I'll talk about Python because I much prefer it to Java, but I'm sure you can find people who will help you with Java.\nSo first, you want a file selection box with Tkinter. Well, t... | [
1
] | [] | [] | [
"java",
"parsing",
"python",
"xml"
] | stackoverflow_0003316835_java_parsing_python_xml.txt |
Q:
Hiding a django website while development
I developed a django website on my local machine and now is the time to upload it on a server. I would like that during the time i work on it, only logged in users can see it.
I thought about a
{% if is_logged_in %}
{% else %}
{% endif %}
structure in my base.py template but not all my views return a Context so it doesn't always work.
Is there any simple way without having to change much of code to hide every pages?
A:
Use django.contrib.auth.decorators.login_required. It is a decorator, that will prevent users from viewing anything, if they are not logged in. Or you can find middleware for this: http://djangosnippets.org/snippets/1179/.
Middleware will be better, because it is unobtrusive and you can remove it later.
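What login_required does can be sketched framework-free (this is only an illustration of the decorator idea; the real decorator lives in django.contrib.auth.decorators and redirects to settings.LOGIN_URL — everything below is a stand-in):

```python
def login_required(view):
    # Wrap a view so anonymous requests get bounced to a login page.
    def wrapper(request, *args, **kwargs):
        if not request.get('user'):            # Django checks request.user.is_authenticated
            return '302 -> /accounts/login/'   # stand-in for an HttpResponseRedirect
        return view(request, *args, **kwargs)
    return wrapper

@login_required
def secret_page(request):
    return '200 OK'

print(secret_page({'user': None}))     # 302 -> /accounts/login/
print(secret_page({'user': 'alice'}))  # 200 OK
```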
A:
There are 2 reasonable solutions for this.
Using a middleware to demand authentication (if needed I can put an example online, but the code should be trivial)
Using authentication in your webservers. That way you can simply add a couple of IP addresses and/or users to have access. These days it's pretty easy to link your http authentication to Django aswell so with both mod_wsgi and mod_python you can let Apache authenticate it's users via Django.
A:
Another reasonable way to do this would be to client certificates. That way you can also test the parts that don't require you to be logged in
A:
or protect the whole directory on the server with .htaccess and this also prevents google finding the site in development.
| Hiding a django website while development | I developed a django website on my local machine and now is the time to upload it on a server. I would like that during the time i work on it, only logged in users can see it.
I thought about a
{% if is_logged_in %}
{% else %}
{% endif %}
structure in my base.py template but not all my views return a Context so it doesn't always work.
Is there any simple way without having to change much of code to hide every pages?
| [
"Use django.contrib.auth.decorators.login_required. It is a decorator, that will prevent users from viewing anything, if they are not logged in. Or you can find middleware for this: http://djangosnippets.org/snippets/1179/.\nMiddleware will be better, becuase it is unobtrusive and you can remove it later.\n",
"Th... | [
4,
4,
0,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003308379_django_python.txt |
Q:
Python Create Byte Array for Web Service Expecting Byte[]
I'm using a SOAP based web service that expects an image element in the form of a 'ByteArray' described in their docs as being of type 'byte[]' - the client I am using is the Python based suds library.
Problem is that I am not exactly sure how to represent the ByteArray in for this service - I presume that it should look something like the following list:
[71,73,70,56,57,97,1,0,1,0,128,0,0,255,255,255,0,0,0,33,249,4,0,0,0,0,0,44,0,0,0,0,1,0,1,0,0,2,2,68,1,0,59]
Now when I send this as part of the request, the service complains with the message: Base64 sequence length (105) not valid. Must be a multiple of 4. Does this mean that I would have to pad each member with zeroes to make them 4 long, i.e. [0071,0073,0070,...]?
A:
I got it figured in the end - what the web service meant by a ByteArray (byte[]) looked something like:
/9j/4AAQSkZJRgABAgEAYABgAAD/7gAOQWRvYmUAZAAAAAAB...
... aha, base 64 (not anywhere in their docs, I hasten to add)...
so I managed to get it working by using this:
encoded_data = base64.b64encode(open(file_name, 'rb').read())
strg = ''
for i in xrange((len(encoded_data)/40)+1):
strg += encoded_data[i*40:(i+1)*40]
# strg then contains data required
I found the inspiration right here - thanks to Doug Hellman
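The "multiple of 4" error makes sense once the bytes are actually Base64-encoded: the encoder always emits 4 output characters per 3 input bytes, padding with '=' as needed. A quick check in modern Python (the raw bytes are the GIF data from the question):

```python
import base64

# The raw GIF bytes from the question, not a zero-padded decimal list
raw = bytes([71, 73, 70, 56, 57, 97, 1, 0, 1, 0, 128, 0, 0, 255, 255, 255,
             0, 0, 0, 33, 249, 4, 0, 0, 0, 0, 0, 44, 0, 0, 0, 0, 1, 0,
             1, 0, 0, 2, 2, 68, 1, 0, 59])
encoded = base64.b64encode(raw)
print(len(encoded), len(encoded) % 4)  # 60 0  -- always a multiple of 4
```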
A:
Try a bytearray.
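That type is a built-in in Python ≥ 2.6; a quick sketch of what it gives you (the byte values are the GIF header from the question):

```python
data = bytearray([71, 73, 70, 56, 57, 97])  # spells "GIF89a"
data.append(33)                              # mutable, unlike bytes
print(bytes(data[:6]))  # b'GIF89a'
```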
| Python Create Byte Array for Web Service Expecting Byte[] | I'm using a SOAP based web service that expects an image element in the form of a 'ByteArray' described in their docs as being of type 'byte[]' - the client I am using is the Python based suds library.
Problem is that I am not exactly sure how to represent the ByteArray in for this service - I presume that it should look something like the following list:
[71,73,70,56,57,97,1,0,1,0,128,0,0,255,255,255,0,0,0,33,249,4,0,0,0,0,0,44,0,0,0,0,1,0,1,0,0,2,2,68,1,0,59]
Now when I send this as part of the request, the service complains with the message: Base64 sequence length (105) not valid. Must be a multiple of 4. Does this mean that I would have to pad each member with zeroes to make them 4 long, i.e. [0071,0073,0070,...]?
| [
"I got it figured in the end - what the web service meant by a ByteArray (byte[]) looked something like:\n/9j/4AAQSkZJRgABAgEAYABgAAD/7gAOQWRvYmUAZAAAAAAB...\n\n... aha, base 64 (not anywhere in their docs, I hasten to add)...\nso I managed to get it working by using this:\nencoded_data = base64.b64encode(open(file... | [
2,
0
] | [] | [] | [
"bytearray",
"python"
] | stackoverflow_0003309868_bytearray_python.txt |