content stringlengths 85 101k | title stringlengths 0 150 | question stringlengths 15 48k | answers list | answers_scores list | non_answers list | non_answers_scores list | tags list | name stringlengths 35 137 |
|---|---|---|---|---|---|---|---|---|
Q:
Google app engine handle html form post array
My HTML code is as follows:
<INPUT type="text" name="txt[]">
<INPUT type="checkbox" name="chk[]"/>
I get the value in PHP by
<?php
$chkbox = $_POST['chk'];
$txtbox = $_POST['txt'];
foreach($txtbox as $a => $b)
echo "$chkbox[$a] - $txtbox[$a] <br />";
?>
How do I get the value in Google App Engine using Python?
A:
You don't need that trick in Python. You can, for example, have many fields with the same name:
<INPUT type="text" name="txt">
<INPUT type="text" name="txt">
<INPUT type="text" name="txt">
<INPUT type="checkbox" name="chk">
<INPUT type="checkbox" name="chk">
<INPUT type="checkbox" name="chk">
Then get a list of all posted values for those names and merge them using zip(). Example for webapp (which uses webob as request wrapper):
txt = self.request.POST.getall('txt')
chk = self.request.POST.getall('chk')
for txt_value, chk_value in zip(txt, chk):
print '%s - %s<br />' % (txt_value, chk_value)
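One caveat worth adding (my note, not part of the original answer): browsers don't submit unchecked checkboxes at all, so the two lists can have different lengths and zip() will silently truncate the longer one. A sketch of padding the shorter list instead, with plain lists standing in for the getall() results (the function is izip_longest on Python 2, zip_longest on Python 3):

```python
from itertools import zip_longest  # izip_longest on Python 2

# Hypothetical values, as if returned by self.request.POST.getall(...)
txt = ['first', 'second', 'third']
chk = ['on']  # only one checkbox was ticked, so this list is shorter

# fillvalue pads the shorter list rather than truncating the longer one
rows = ['%s - %s<br />' % (t, c) for t, c in zip_longest(txt, chk, fillvalue='off')]
```

Note that the positional pairing is still fragile when a checkbox in the middle is unchecked; giving each checkbox an explicit per-row value attribute is the usual fix.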
| Google app engine handle html form post array | My HTML code is as follows:
<INPUT type="text" name="txt[]">
<INPUT type="checkbox" name="chk[]"/>
I get the value in PHP by
<?php
$chkbox = $_POST['chk'];
$txtbox = $_POST['txt'];
foreach($txtbox as $a => $b)
echo "$chkbox[$a] - $txtbox[$a] <br />";
?>
How do I get the value in Google App Engine using Python?
| [
"You don't need that trick in Python. You can have for example many fields with the same names:\n<INPUT type=\"text\" name=\"txt\">\n<INPUT type=\"text\" name=\"txt\">\n<INPUT type=\"text\" name=\"txt\">\n\n<INPUT type=\"checkbox\" name=\"chk\">\n<INPUT type=\"checkbox\" name=\"chk\">\n<INPUT type=\"checkbox\" name... | [
8
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0003342012_google_app_engine_python.txt |
Q:
django-mptt children selection works on localhost but not on server
I have the same code on localhost and on the server (thanks to Mercurial), but it works a little bit differently. I want to render a category and its subcategories in a template using this code:
views.py:
def category(request, category_slug):
try:
category = Category.objects.get(slug=category_slug)
except:
raise Http404
subcats = category.get_children()
return render_to_response('catalogue.html',
{'category': category,
'subcats': subcats,
'header_template':'common/includes/header_%s.html' % flixwood_settings.CURRENT_SITE
},
context_instance=RequestContext(request))
template:
<div class='subcats'>
{% for subcat in subcats %}
{% ifequal subcat.level 1 %}
<div class="item">
<a href="{% url flixwood.views.category category_slug=subcat.slug %}"><img src="{% thumbnail subcat.image 66x66 %}" class="thumb"></a>
<a href="{% url flixwood.views.category category_slug=subcat.slug %}" class="name">{{ subcat.category }}</a>
{{ subcat.short_description|safe }}
<div class="clear_left"></div>
</div>
{% cycle '' '' '<div class="clear_left"></div>'|safe %}
{% endifequal %}
{% endfor %}
</div>
However, although this code works perfectly on localhost (the subcategories render correctly), it doesn't work on the server, and {{ subcats|length }} returns 0.
I compared the values in the MySQL databases on localhost and on the server - they match, and the inheritance should work. The funniest thing is that the same query works perfectly in the manage.py shell on the server.
What the heck is wrong with it?
A:
The problem was solved - it was caused by .pyc files, which are recreated only after Apache is restarted. That's why the correct code in the .py files didn't work.
| django-mptt children selection works on localhost but not on server | I have the same code on localhost and on the server (thanks to Mercurial), but it works a little bit differently. I want to render a category and its subcategories in a template using this code:
views.py:
def category(request, category_slug):
try:
category = Category.objects.get(slug=category_slug)
except:
raise Http404
subcats = category.get_children()
return render_to_response('catalogue.html',
{'category': category,
'subcats': subcats,
'header_template':'common/includes/header_%s.html' % flixwood_settings.CURRENT_SITE
},
context_instance=RequestContext(request))
template:
<div class='subcats'>
{% for subcat in subcats %}
{% ifequal subcat.level 1 %}
<div class="item">
<a href="{% url flixwood.views.category category_slug=subcat.slug %}"><img src="{% thumbnail subcat.image 66x66 %}" class="thumb"></a>
<a href="{% url flixwood.views.category category_slug=subcat.slug %}" class="name">{{ subcat.category }}</a>
{{ subcat.short_description|safe }}
<div class="clear_left"></div>
</div>
{% cycle '' '' '<div class="clear_left"></div>'|safe %}
{% endifequal %}
{% endfor %}
</div>
However, although this code works perfectly on localhost (the subcategories render correctly), it doesn't work on the server, and {{ subcats|length }} returns 0.
I compared the values in the MySQL databases on localhost and on the server - they match, and the inheritance should work. The funniest thing is that the same query works perfectly in the manage.py shell on the server.
What the heck is wrong with it?
| [
"The problem was solved - it was caused by .pyc files, which are recreated only after Apache is restarted. That's why the correct code in the .py files didn't work.\n"
] | [
0
] | [] | [] | [
"django",
"django_mptt",
"python"
] | stackoverflow_0003337565_django_django_mptt_python.txt |
Q:
I want one backslash - not two
I have a string that, when printed, looks like this: \x4d\xff\xfd\x00\x02\x8f\x0e\x80\x66\x48\x71
But I want to change this string into the actual unprintable "\x4d\xff\xfd\x00\x02\x8f\x0e\x80\x66\x48\x71" (it needs to be written to a serial port). I know the problem is with '\'. How can I replace these printable backslash escapes with the unprintable characters they represent?
A:
If you want to decode your string, use decode() with 'string_escape' as the parameter, which will interpret the escape sequences in your variable as a Python string literal (as if it had been typed as a constant string in your code).
mystr.decode('string_escape')
A:
Use decode():
>>> st = r'\x4d\xff\xfd\x00\x02\x8f\x0e\x80\x66\x48\x71'
>>> print st
\x4d\xff\xfd\x00\x02\x8f\x0e\x80\x66\x48\x71
>>> print st.decode('string-escape')
MÿýfHq
That last garbage is what my Python prints when trying to print your unprintable string.
A:
You are confusing the printable representation of a string literal with the string itself:
>>> c = '\x4d\xff\xfd\x00\x02\x8f\x0e\x80\x66\x48\x71'
>>> c
'M\xff\xfd\x00\x02\x8f\x0e\x80fHq'
>>> len(c)
11
>>> len('\x4d\xff\xfd\x00\x02\x8f\x0e\x80\x66\x48\x71')
11
>>> len(r'\x4d\xff\xfd\x00\x02\x8f\x0e\x80\x66\x48\x71')
44
A:
your_string.decode('string_escape')
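A side note beyond the original answers: the 'string_escape' codec is Python 2 only. On Python 3 the closest equivalent is decoding through 'unicode_escape', plus a latin-1 encode if raw bytes are needed for the serial port (this is my addition; verify against your Python version):

```python
raw = r'\x4d\xff\xfd\x00\x02\x8f\x0e\x80\x66\x48\x71'  # 44 printable characters
text = raw.encode('ascii').decode('unicode_escape')    # 11 characters: 'M\xff\xfd...'
data = text.encode('latin-1')                          # 11 raw bytes for the port
```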
| I want one backslash - not two | I have a string that, when printed, looks like this: \x4d\xff\xfd\x00\x02\x8f\x0e\x80\x66\x48\x71
But I want to change this string into the actual unprintable "\x4d\xff\xfd\x00\x02\x8f\x0e\x80\x66\x48\x71" (it needs to be written to a serial port). I know the problem is with '\'. How can I replace these printable backslash escapes with the unprintable characters they represent?
| [
"If you want to decode your string, use decode() with 'string_escape' as parameter which will interpret the literals in your variable as python literal string (as if it were typed as constant string in your code).\nmystr.decode('string_escape')\n\n",
"Use decode():\n>>> st = r'\\x4d\\xff\\xfd\\x00\\x02\\x8f\\x0e\... | [
5,
2,
1,
1
] | [] | [] | [
"backslash",
"python",
"string"
] | stackoverflow_0003342681_backslash_python_string.txt |
Q:
urllib and proxies
I need to use Tor+Privoxy with my Python script.
proxies = {
'http' : '127.0.0.1:8118',
'ssl' : '127.0.0.1:8118',
'socks' : '127.0.0.1:9050'
}
The first question: is the 'socks' name right? Maybe there should be something like 'socks5'?
The next step is to pass a user-agent string along with these proxies to urllib and save the information from the loaded site into an .html file. I don't know how to pass header information together with proxies.
A:
I'm not sure whether urllib is able to deal with a SOCKS proxy;
you may try SocksiPy.
A:
I suggest you use urllib2 instead. This post might be useful.
Hope it helps.
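To make that answer concrete - a hedged sketch of wiring both a proxy and a User-Agent header into one opener (Python 3 names shown; on Python 2 the same classes live in urllib2). The proxy address is the Privoxy one from the question. Note this covers only the HTTP side: plain urllib/urllib2 does not speak SOCKS, which is what SocksiPy is for:

```python
import urllib.request  # the same API is urllib2 on Python 2

proxy = urllib.request.ProxyHandler({'http': 'http://127.0.0.1:8118'})
opener = urllib.request.build_opener(proxy)
opener.addheaders = [('User-Agent', 'Mozilla/5.0 (my script)')]

# html = opener.open('http://example.com/').read()  # then write it to an .html file
```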
| urllib and proxies | I need to use Tor+Privoxy with my Python script.
proxies = {
'http' : '127.0.0.1:8118',
'ssl' : '127.0.0.1:8118',
'socks' : '127.0.0.1:9050'
}
The first question: is the 'socks' name right? Maybe there should be something like 'socks5'?
The next step is to pass a user-agent string along with these proxies to urllib and save the information from the loaded site into an .html file. I don't know how to pass header information together with proxies.
| [
"not sure whether urllib is able to deal with socks proxy..\nyou may try socksipy\n",
"I suggest you use urllib2 instead. This post might be useful.\nhope it helps\n"
] | [
0,
0
] | [] | [] | [
"header",
"proxy",
"python",
"urllib"
] | stackoverflow_0002829919_header_proxy_python_urllib.txt |
Q:
lxml version problem - unable to call findall method!
lxml gives the following error on version 1.3 for the line below:
self.doc.findall('.//field[@on_change]')
File "/home/.../code_generator/xmlGenerator.py", line 158, in processOnChange
onchangeNodes = self.doc.findall('.//field[@on_change]')
File "etree.pyx", line 1042, in etree._Element.findall
File "/usr/lib/python2.5/site-packages/lxml/_elementpath.py", line 193, in findall
return _compile(path).findall(element)
File "/usr/lib/python2.5/site-packages/lxml/_elementpath.py", line 171, in _compile
p = Path(path)
File "/usr/lib/python2.5/site-packages/lxml/_elementpath.py", line 88, in __init__
"expected path separator (%s)" % (op or tag)
SyntaxError: expected path separator ([)
It works perfectly on my local machine, which has lxml 2.1.
My question is: what's the alternative? I tried to update the server's lxml version but failed to do so, as the operating system is Gutsy - Ubuntu 7.10 (related post).
A:
Predicates in ElementPath expressions were only added in a later version. The original (c)ElementTree module (included in stdlib) has this only as of version 1.3 (in stdlib python 2.7). Lxml started using ElementTree 1.3 compatible expressions from version 2.0 on I think (when ElementTree 1.3 was still alpha)
The easiest solution: use the xpath() method, which can use real XPath expressions instead of only the subset that ElementPath supports (the lxml FAQ explains why they have both xpath() and findall()).
self.doc.xpath('.//field[@on_change]')
or filter on the attribute yourself (in case you would want something that works with the stdlib ElementTree as well):
[i for i in self.doc.findall('.//field') if i.get('on_change') is not None]
| lxml version problem - unable to call findall method! | lxml gives the following error on version 1.3 for the line below:
self.doc.findall('.//field[@on_change]')
File "/home/.../code_generator/xmlGenerator.py", line 158, in processOnChange
onchangeNodes = self.doc.findall('.//field[@on_change]')
File "etree.pyx", line 1042, in etree._Element.findall
File "/usr/lib/python2.5/site-packages/lxml/_elementpath.py", line 193, in findall
return _compile(path).findall(element)
File "/usr/lib/python2.5/site-packages/lxml/_elementpath.py", line 171, in _compile
p = Path(path)
File "/usr/lib/python2.5/site-packages/lxml/_elementpath.py", line 88, in __init__
"expected path separator (%s)" % (op or tag)
SyntaxError: expected path separator ([)
It works perfectly on my local machine, which has lxml 2.1.
My question is: what's the alternative? I tried to update the server's lxml version but failed to do so, as the operating system is Gutsy - Ubuntu 7.10 (related post).
| [
"Predicates in ElementPath expressions were only added in a later version. The original (c)ElementTree module (included in stdlib) has this only as of version 1.3 (in stdlib python 2.7). Lxml started using ElementTree 1.3 compatible expressions from version 2.0 on I think (when ElementTree 1.3 was still alpha)\nThe... | [
3
] | [] | [] | [
"lxml",
"python"
] | stackoverflow_0003342942_lxml_python.txt |
Q:
Does XML-RPC in general allow calling several functions at once?
Can I ask several questions in one request to an XML-RPC server?
If yes, how can I do it in Python with xmlrpclib?
I'm using an XML-RPC server over a slow connection, so I would like to call several functions at once, because each call costs me 700 ms.
A:
http://docs.python.org/library/xmlrpclib.html#multicall-objects
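The link is to xmlrpclib's MultiCall, which queues several calls locally and sends them in a single system.multicall request - one round trip instead of one per call. A self-contained sketch against a throwaway local server (module names are the Python 3 spellings; on Python 2 they are xmlrpclib and SimpleXMLRPCServer, and the remote method names here are made up):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer      # SimpleXMLRPCServer on Python 2
from xmlrpc.client import ServerProxy, MultiCall  # xmlrpclib on Python 2

# Throwaway local server standing in for the real (slow) remote one.
server = SimpleXMLRPCServer(('127.0.0.1', 0), logRequests=False)
server.register_multicall_functions()               # enables system.multicall
server.register_function(lambda a, b: a + b, 'add')
server.register_function(lambda a, b: a * b, 'mul')
threading.Thread(target=server.handle_request).start()  # serve exactly one request

proxy = ServerProxy('http://127.0.0.1:%d' % server.server_address[1])
multi = MultiCall(proxy)
multi.add(2, 3)          # queued locally, nothing sent yet
multi.mul(4, 5)          # queued
results = list(multi())  # a single HTTP round trip executes both calls
```

Note the remote server must support system.multicall for this to work.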
A:
Whether multicall support makes any difference to you depends on where the 700 ms is going.
How did you measure your 700ms?
Run a packet capture of a query and analyse the results. It should be possible to infer roughly the round-trip time and bandwidth constraints, and whether the cost is in the server's application layer or even in your client machine's name resolution.
| Does XML-RPC in general allow calling several functions at once? | Can I ask several questions in one request to an XML-RPC server?
If yes, how can I do it in Python with xmlrpclib?
I'm using an XML-RPC server over a slow connection, so I would like to call several functions at once, because each call costs me 700 ms.
| [
"http://docs.python.org/library/xmlrpclib.html#multicall-objects\n",
"Whether or not possible support of multicall makes any difference to you depends on where the 700ms is going.\nHow did you measure your 700ms?\nRun a packet capture of a query and analyse the results. It should be possible to infer roughly roun... | [
0,
0
] | [] | [] | [
"python",
"soap",
"xml_rpc",
"xmlrpclib"
] | stackoverflow_0003343082_python_soap_xml_rpc_xmlrpclib.txt |
Q:
Django comment moderation error: AlreadyModerated at /
I'm trying to add the comments framework to a weblog I'm creating in Django. Adding the comments system appears to be working fine until I attempt to enable comment moderation.
I add the following code to my models.py as per the instructions on the above link. My model is called Post which represents a post in the weblog.
class PostModerator(CommentModerator):
email_notification = False
enable_field = 'allow_comments'
moderator.register(Post, PostModerator)
If I attempt to preview the site, I get the error AlreadyModerated at / with the exception The model 'post' is already being moderated. I have no idea why I'm getting this error, as I have only just enabled comments and am not sure why Post would already be moderated.
A:
Just had a similar problem today, but I think I've solved it :)
In my case the issue was that django was loading models.py twice and therefore trying to register the model for comment moderation twice as well. I fixed this by modifying the code from:
moderator.register(Post, PostModerator)
to:
if Post not in moderator._registry:
moderator.register(Post, PostModerator)
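The same guard, demonstrated without Django so it runs standalone. Moderator here is a minimal hypothetical stand-in for the comment framework's moderator object, which keeps a _registry dict and raises AlreadyModerated on a duplicate register - the behaviour the fix above relies on:

```python
class AlreadyModerated(Exception):
    pass

class Moderator(object):
    """Minimal stand-in for django.contrib.comments.moderation's moderator."""
    def __init__(self):
        self._registry = {}

    def register(self, model, moderator_class):
        if model in self._registry:
            raise AlreadyModerated("The model %r is already being moderated." % model)
        self._registry[model] = moderator_class

moderator = Moderator()

class Post(object): pass
class PostModerator(object): pass

# models.py being imported twice means this module-level code runs twice,
# so guard the registration:
for _ in range(2):
    if Post not in moderator._registry:
        moderator.register(Post, PostModerator)
```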
A:
I imagine that CommentModerator (the superclass for PostModerator) is moderated by default?
| Django comment moderation error: AlreadyModerated at / | I'm trying to add the comments framework to a weblog I'm creating in Django. Adding the comments system appears to be working fine until I attempt to enable comment moderation.
I add the following code to my models.py as per the instructions on the above link. My model is called Post which represents a post in the weblog.
class PostModerator(CommentModerator):
email_notification = False
enable_field = 'allow_comments'
moderator.register(Post, PostModerator)
If I attempt to preview the site, I get the error AlreadyModerated at / with the exception The model 'post' is already being moderated. I have no idea why I'm getting this error, as I have only just enabled comments and am not sure why Post would already be moderated.
| [
"Just had a similar problem today, but I think I've solved it :)\nIn my case the issue was that django was loading models.py twice and therefore trying to register the model for comment moderation twice as well. I fixed this by modifying the code from:\nmoderator.register(Post, PostModerator)\n\nto:\nif Post not in... | [
8,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003277474_django_python.txt |
Q:
Django: construct form without validating fields?
I have a form MyForm which I update using ajax as the user fills it out. I have a view method which updates the form by constructing a MyForm from request.POST and feeding it back.
def update_form(request):
if request.method == 'POST':
dict = {}
dict['form'] = MyForm(request.POST).as_p()
return HttpResponse(json.dumps(dict), mimetype='application/javascript')
return HttpResponseBadRequest()
However, this invokes the cleaning/validation routines, and I don't want to show error messages to the user until they've actually clicked "submit".
So the question is: how can I construct a django.forms.Form from existing data without invoking validation?
A:
Validation is never invoked until you call form.is_valid().
But, as I'm guessing, you want your form filled with the data the user types in until the user clicks submit.
def update_form(request):
    if request.method == 'POST':
        if not request.POST.get('submit'):
            # don't name the local variable 'dict' -- that would shadow the
            # builtin dict() used on the next line
            data = {}
            data['form'] = MyForm(initial=dict(request.POST.items())).as_p()
            return HttpResponse(json.dumps(data), mimetype='application/javascript')
        else:
            form = MyForm(request.POST)
            if form.is_valid():
                # Your final stuff
                pass
    return HttpResponseBadRequest()
Happy Coding.
| Django: construct form without validating fields? | I have a form MyForm which I update using ajax as the user fills it out. I have a view method which updates the form by constructing a MyForm from request.POST and feeding it back.
def update_form(request):
if request.method == 'POST':
dict = {}
dict['form'] = MyForm(request.POST).as_p()
return HttpResponse(json.dumps(dict), mimetype='application/javascript')
return HttpResponseBadRequest()
However, this invokes the cleaning/validation routines, and I don't want to show error messages to the user until they've actually clicked "submit".
So the question is: how can I construct a django.forms.Form from existing data without invoking validation?
| [
"Validation never invokes until you call form.is_valid().\nBut as i am guessing, you want your form filled with data user types in, until user clicks submit.\ndef update_form(request):\n if request.method == 'POST':\n if not request.POST.get('submit'):\n dict = {}\n dict['form'] = My... | [
3
] | [] | [] | [
"django",
"django_forms",
"forms",
"python",
"validation"
] | stackoverflow_0003343723_django_django_forms_forms_python_validation.txt |
Q:
Sqlite db, multiple row updates: handling text and floats
I have a large SQLite database with a mix of text and lots of other columns: var1 ... var50. Most of these are numeric, though some are text based.
I am trying to extract data from the database, process it in python and write it back - I need to do this for all rows in the db.
So far, the below sort of works:
# get row using select and process
fields = (','.join(keys)) # "var1, var2, var3 ... var50"
results = ','.join([results[key] for key in keys]) # "value for var1, ... value for var50"
cur.execute('INSERT OR REPLACE INTO results (id, %s) VALUES (%s, %s);' %(fields, id, results))
This, however, nulls the columns that I don't explicitly add back. I can fix this by rewriting the code, but that feels quite messy, as I would have to surround values with quotes using string concatenation and rewrite data that was there to begin with (i.e. the columns I didn't change).
Apparently the way to run updates on rows is something like this:
update table set var1 = 4, var2 = 5, var3="some text" where id = 666;
Presumably the way for me would be to run map(), and add the = signs somehow (not sure how), but how would I quote all of the results appropriately (since I would have to quote the text fields, and they might contain quotes within them too ...)?
I'm a bit confused. Any pointers would be very helpful.
Thanks!
A:
As others have stressed, use parametrized arguments. Here is an example of how you might construct the SQL statement when it has a variable number of keys:
sql=('UPDATE results SET '
+ ', '.join(key+' = ?' for key in keys)
    + ' WHERE id = ?')
args = [results[key] for key in keys] + [id]
cur.execute(sql,args)
A:
Use parameter substitution. It's more robust (and safer I think) than string formatting.
So if you did something like
query = 'UPDATE TABLE SET ' + ', '.join(str(f) + '=?' for f in fields) + ';'
Or alternatively
query = 'UPDATE TABLE SET %s;' % (', '.join(str(f) + '=?' for f in fields))
Or using new style formatting:
query = 'UPDATE TABLE SET {0};'.format(', '.join(str(f) + '=?' for f in fields))
So the complete program would look something like this:
vals = {'var1': 'foo', 'var2': 3, 'var24': 999}
fields = vals.keys()
results = vals.values()
query = 'UPDATE TABLE SET {0};'.format(', '.join(str(f) + '=?' for f in fields))
conn.execute(query, results)
And that should work - and I presume do what you want it to.
A:
You don't have to care about things like quoting etc., and in fact you shouldn't. If you do it like this, it's not only more convenient but also takes care of the security issue known as SQL injection:
sql = "update table set var1=?, var2=?, var3=? where id=666"
cursor.execute(sql, (4, 5, "some text"))
The key point here is that the SQL and the values in the second statement aren't combined with a "%" - this is not string manipulation. Instead you pass two arguments to the execute function: the actual SQL and the values. Each "?" placeholder (sqlite3 uses this qmark style; some other drivers use "%s") is replaced by a value from the value tuple, and the database driver then knows how to take care of the individual types of the values.
The INSERT statement can be rewritten the same way, although note that placeholders can only stand in for values, not for field names - the first %s in your INSERT statement (the column list) has to stay as string formatting.
So, to come back to your overall problem: you can loop over your values and dynamically append ", var%d=?" % i for your i-th variable while adding the actual value to a list at the same time.
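A runnable sketch of that dynamic-UPDATE idea using sqlite3 (the question's database, which uses '?' placeholders) and an in-memory table; the table and column names are invented for the demo. The SET clause is built only from the keys that changed, so untouched columns keep their old values - which avoids the nulling problem from the question:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE results (id INTEGER PRIMARY KEY, var1, var2, var3)')
conn.execute("INSERT INTO results VALUES (666, 1, 2, 'keep me')")

changed = {'var1': 4, 'var2': 5}                 # only the columns we processed
sets = ', '.join('%s = ?' % key for key in changed)
args = list(changed.values()) + [666]
conn.execute('UPDATE results SET %s WHERE id = ?' % sets, args)

row = conn.execute('SELECT var1, var2, var3 FROM results WHERE id = 666').fetchone()
# row is now (4, 5, 'keep me')
```

Note that the column names themselves are interpolated with % - placeholders can only stand for values, never identifiers - so the keys must come from a trusted list, not from user input.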
| Sqlite db, multiple row updates: handling text and floats | I have a large SQLite database with a mix of text and lots of other columns: var1 ... var50. Most of these are numeric, though some are text based.
I am trying to extract data from the database, process it in python and write it back - I need to do this for all rows in the db.
So far, the below sort of works:
# get row using select and process
fields = (','.join(keys)) # "var1, var2, var3 ... var50"
results = ','.join([results[key] for key in keys]) # "value for var1, ... value for var50"
cur.execute('INSERT OR REPLACE INTO results (id, %s) VALUES (%s, %s);' %(fields, id, results))
This, however, nulls the columns that I don't explicitly add back. I can fix this by rewriting the code, but that feels quite messy, as I would have to surround values with quotes using string concatenation and rewrite data that was there to begin with (i.e. the columns I didn't change).
Apparently the way to run updates on rows is something like this:
update table set var1 = 4, var2 = 5, var3="some text" where id = 666;
Presumably the way for me would be to run map(), and add the = signs somehow (not sure how), but how would I quote all of the results appropriately (since I would have to quote the text fields, and they might contain quotes within them too ...)?
I'm a bit confused. Any pointers would be very helpful.
Thanks!
| [
"As others have stressed, use parametrized arguments. Here is an example of how you might construct the SQL statement when it has a variable number of keys:\nsql=('UPDATE results SET '\n + ', '.join(key+' = ?' for key in keys)\n + 'WHERE id = ?')\nargs = [results[key] for key in keys] + [id]\ncur.execute(sq... | [
3,
1,
0
] | [] | [] | [
"python",
"sql",
"sqlite"
] | stackoverflow_0003343565_python_sql_sqlite.txt |
Q:
Merging two datasets in Python efficiently
What would anyone consider the most efficient way to merge two datasets using Python?
A little background - this code will take 100K+ records in the following format:
{user: aUser, transaction: UsersTransactionNumber}, ...
and using the following data
{transaction: aTransactionNumber, activationNumber: assoiciatedActivationNumber}, ...
to create
{user: aUser, activationNumber: assoiciatedActivationNumber}, ...
N.B. These are not Python dictionaries, just the closest thing to portraying the record format cleanly.
So in theory, all I am trying to do is create a view of two lists (or tables) joining on a common key - at first this points me towards sets (unions etc), but before I start learning these in depth, are they the way to go? So far I felt this could be implemented as:
Create a list of dictionaries and iterate over the list comparing the key each time, however, worst case scenario this could run up to len(inputDict)*len(outputDict) <- Not sure?
Manipulate the data as an in-memory SQLite table? Preferably not, as although there is no strict requirement for Python 2.4, it would make life easier.
Some kind of Set based magic?
Clarification
To summarise the whole purpose of this script: the actual data sets come from two different sources. The user and transaction numbers come in the form of a CSV output from a performance test that is testing email activation code throughput. The second dataset comes from parsing the test mailboxes, which contain the transaction id and activation code. The output of this test is then a CSV that will get pumped back into stage 2 of the performance test, activating user accounts using the activation codes that were paired up.
Apologies if my notation for the records was misleading, I have updated them accordingly.
Thanks for the replies, I am going to give two ideas a try:
Sorting the lists first (I don't know how expensive this is)
Creating a dictionary with the transaction codes as the key, then storing the user and activation code in a list as the value
Performance isn't paramount for me; I just want to try to get into good habits with my Python programming.
A:
Here's a radical approach.
Don't.
You have two CSV files; one (users) is clearly the driver. Leave this alone.
The other -- transaction codes for a user -- can be turned into a simple dictionary.
Don't "combine" or "join" anything except when absolutely necessary. Certainly don't "merge" or "pre-join".
Write your application to simply do lookups in the other collection.
Create a list of dictionaries and iterate over the list comparing the key each time,
Close. It looks like this. Note: No Sort.
import csv
with open('activations.csv','rb') as act_data:
rdr= csv.DictReader( act_data)
activations = dict( (row['user'],row) for row in rdr )
with open('users.csv','rb') as user_data:
rdr= csv.DictReader( user_data )
with open( 'users_2.csv','wb') as updated_data:
wtr= csv.DictWriter( updated_data, ['some','list','of','columns'])
for user in rdr:
user['some_field']= activations[user['user_id_column']]['some_field']
wtr.writerow( user )
This is fast and simple. Save the dictionaries (use shelve or pickle).
however, worst case scenario this could run up to len(inputDict)*len(outputDict) <- Not sure?
False.
One list is the "driving" list. The other is the lookup list. You'll drive by iterating through users and lookup appropriate values for transaction. This is O( n ) on the list of users. The lookup is O( 1 ) because dictionaries are hashes.
A:
Sort the two data sets by transaction number. That way, you always only need to keep one row of each in memory.
A:
This looks like a typical use for dictionaries with the transaction number as key. But you don't have to create the combined structure - just build the lookup dictionaries and use them as needed.
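A minimal sketch of that lookup-dictionary join (field names follow the question; the sample records are invented):

```python
users = [
    {'user': 'alice', 'transaction': 't1'},
    {'user': 'bob',   'transaction': 't2'},
]
activations = [
    {'transaction': 't1', 'activationNumber': 'a9'},
    {'transaction': 't2', 'activationNumber': 'a7'},
]

# Build the lookup once (O(N)); each lookup afterwards is O(1) on average.
by_txn = dict((rec['transaction'], rec['activationNumber']) for rec in activations)

merged = [{'user': u['user'], 'activationNumber': by_txn[u['transaction']]}
          for u in users]
```

Building the dictionary is linear and each lookup is constant time on average, so the whole merge is O(M + N).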
A:
I'd create a map myTransactionNumber -> {transaction: myTransactionNumber, activationNumber: myActivationNumber} and then iterate over the {user: myUser, transaction: myTransactionNumber} entries, searching the map for the needed myTransactionNumber. With a Python dict each lookup is O(1) on average (O(log N) for a tree-based map, where N is the number of entries in the map), so the overall complexity would be O(M), where M is the number of user entries.
| Merging two datasets in Python efficiently | What would anyone consider the most efficient way to merge two datasets using Python?
A little background - this code will take 100K+ records in the following format:
{user: aUser, transaction: UsersTransactionNumber}, ...
and using the following data
{transaction: aTransactionNumber, activationNumber: assoiciatedActivationNumber}, ...
to create
{user: aUser, activationNumber: assoiciatedActivationNumber}, ...
N.B These are not Python dictionaries, just the closest thing to portraying record format cleanly.
So in theory, all I am trying to do is create a view of two lists (or tables) joining on a common key - at first this points me towards sets (unions etc), but before I start learning these in depth, are they the way to go? So far I felt this could be implemented as:
Create a list of dictionaries and iterate over the list comparing the key each time, however, worst case scenario this could run up to len(inputDict)*len(outputDict) <- Not sure?
Manipulate the data as an in-memory SQLite table? Preferably not, as although there is no strict requirement for Python 2.4, it would make life easier.
Some kind of Set based magic?
Clarification
To summarise the whole purpose of this script: the actual data sets come from two different sources. The user and transaction numbers come in the form of a CSV output from a performance test that is testing email activation code throughput. The second dataset comes from parsing the test mailboxes, which contain the transaction id and activation code. The output of this test is then a CSV that will get pumped back into stage 2 of the performance test, activating user accounts using the activation codes that were paired up.
Apologies if my notation for the records was misleading, I have updated them accordingly.
Thanks for the replies, I am going to give two ideas a try:
Sorting the lists first (I don't know how expensive this is)
Creating a dictionary with the transaction codes as the key, then storing the user and activation code in a list as the value
Performance isn't paramount for me; I just want to try to get into good habits with my Python programming.
| [
"Here's a radical approach.\nDon't.\nYou have two CSV files; one (users) is clearly the driver. Leave this alone.\nThe other -- transaction codes for a user -- can be turned into a simple dictionary.\nDon't \"combine\" or \"join\" anything except when absolutely necessary. Certainly don't \"merge\" or \"pre-join\... | [
6,
1,
1,
0
] | [] | [] | [
"data_structures",
"performance",
"python"
] | stackoverflow_0003343768_data_structures_performance_python.txt |
Q:
Python package for Microsoft Active Accessibility library?
Is there a package for Microsoft Active Accessibility library other than
http://pypi.python.org/pypi/pyAA/2.0
which seems to have been abandoned (I can't seem to get the source code from SourceForge) and does not support Python 2.6.
Thanks.
A:
I hate to answer my own question, but here it is for those who are interested:
ja.nishimotz.com/pyaa
is what I was looking for.
A:
Since MSAA is, I believe, COM-based, you could just use pywin32's general purpose Python-to-COM interface to access anything in that package. Could you please explain why this is not the case? Thanks!
| Python package for Microsoft Active Accessibility library? | Is there a package for Microsoft Active Accessibility library other than
http://pypi.python.org/pypi/pyAA/2.0
which seems to have been abandoned (I can't seem to get the source code from SourceForge) and does not support Python 2.6.
Thanks.
| [
"I hate to answer my own question, but here it is for those who are interested:\nja.nishimotz.com/pyaa \nis what I was looking for.\n",
"Since MSAA is, I believe, COM-based, you could just use pywin32's general purpose Python-to-COM interface to access anything in that package. Could you please explain why this ... | [
1,
0
] | [] | [] | [
"accessibility",
"automation",
"python",
"windows"
] | stackoverflow_0003313843_accessibility_automation_python_windows.txt |
Q:
Django - complex forms with multiple models
Django 1.1
models.py:
class Property(models.Model):
name = models.CharField()
addr = models.CharField()
phone = models.CharField()
etc....
class PropertyComment(models.Model):
user = models.ForeignKey(User)
prop = models.ForeignKey(Property)
text = models.TextField()
etc...
I have a form which needs to display several entries from my Property model each with a corresponding PropertyComment form to collect a user's comments on that property. In other words, allowing a User to comment on multiple Property instances on the same page.
This seems outside the intended usage of an Inline formset since it is multi-model to multi-model vs. single-model to multi-model. It seems like trying to iterate through the Property instances and create an inline formset for each is not only clunky, but I'm not even sure it could work.
Any ideas on where to start on this?
A:
Have you thought about using the comment framework:
http://docs.djangoproject.com/en/dev/ref/contrib/comments/
If that doesn't work for you, then maybe look into inlineformset_factory:
http://docs.djangoproject.com/en/dev/topics/forms/modelforms/#inline-formsets
from django.forms.models import inlineformset_factory
PropertyCommentFormSet = inlineformset_factory(Property, PropertyComment)
property= Property.objects.get(name=u'some property name')
formset = PropertyCommentFormSet(instance=property)
etc...
| Django - complex forms with multiple models | Django 1.1
models.py:
class Property(models.Model):
name = models.CharField()
addr = models.CharField()
phone = models.CharField()
etc....
class PropertyComment(models.Model):
user = models.ForeignKey(User)
prop = models.ForeignKey(Property)
text = models.TextField()
etc...
I have a form which needs to display several entries from my Property model each with a corresponding PropertyComment form to collect a user's comments on that property. In other words, allowing a User to comment on multiple Property instances on the same page.
This seems outside the intended usage of an Inline formset since it is multi-model to multi-model vs. single-model to multi-model. It seems like trying to iterate through the Property instances and create an inline formset for each is not only clunky, but I'm not even sure it could work.
Any ideas on where to start on this?
| [
"Have you thought about using the comment framework:\nhttp://docs.djangoproject.com/en/dev/ref/contrib/comments/\nIf that doesn't work for you, then maybe look into inlineformset_factory:\nhttp://docs.djangoproject.com/en/dev/topics/forms/modelforms/#inline-formsets\nfrom django.forms.models import inlineformset_fact... | [
2
] | [] | [] | [
"django",
"django_forms",
"python"
] | stackoverflow_0003343747_django_django_forms_python.txt |
Q:
Why does the istitle() string method return false if the string is clearly in title-case?
Of the istitle() string method, the Python 2.6.5 manual reads:
Return true if the string is a titlecased string and there is at least one character, for example uppercase characters may only follow uncased characters and lowercase characters only cased ones. Return false otherwise.
But in this case it returns false:
>>> book = 'what every programmer must know'
>>> book.title()
'What Every Programmer Must Know'
>>> book.istitle()
False
What am I missing?
A:
book.title() does not change the variable book. It just returns the string in title case.
>>> book.title()
'What Every Programmer Must Know'
>>> book # still not in title case
'what every programmer must know'
>>> book.istitle() # hence it returns False.
False
>>> book.title().istitle() # returns True as expected
True
A:
The method title() doesn't mutate the string (strings are immutable in Python). It creates a new string which you must assign to your variable:
>>> book = 'what every programmer must know'
>>> book = book.title()
>>> book.istitle()
True
A:
Probably because you are still calling istitle() on the original book.
Try book.title().istitle() instead....
A:
Do the following:
print book
after you do book.title(). You will see that book hasn't changed.
The reason is that book.title() creates a new string. The name book still refers to the original string.
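The immutability point made in these answers is easy to verify directly; a short self-contained check:

```python
book = "what every programmer must know"

# str methods never mutate; they return a new string.
titled = book.title()
assert book == "what every programmer must know"    # original unchanged
assert titled == "What Every Programmer Must Know"  # new string

assert book.istitle() is False          # still lowercase
assert book.title().istitle() is True   # test the returned string instead
```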
| Why does the istitle() string method return false if the string is clearly in title-case? | Of the istitle() string method, the Python 2.6.5 manual reads:
Return true if the string is a titlecased string and there is at least one character, for example uppercase characters may only follow uncased characters and lowercase characters only cased ones. Return false otherwise.
But in this case it returns false:
>>> book = 'what every programmer must know'
>>> book.title()
'What Every Programmer Must Know'
>>> book.istitle()
False
What am I missing?
| [
"book.title() does not change the variable book. It just returns the string in title case.\n>>> book.title()\n'What Every Programmer Must Know'\n>>> book # still not in title case\n'what every programmer must know'\n>>> book.istitle() # hence it returns False.\nFalse\n>>> book.title().istitle() # re... | [
8,
7,
3,
1
] | [] | [] | [
"python",
"string"
] | stackoverflow_0003344218_python_string.txt |
Q:
How can I use decorators in Python to specify and document repeatedly used arguments of methods?
One of my classes has a logical numpy array as a parameter repeated in many methods (idx_vector=None).
How can I use a decorator to:
automatically specify idx_vector
automatically insert the description into the docstring
Example without decorator:
import numpy as np
class myarray(object):
def __init__(self, data):
self.data = np.array(data)
def get_sum(self, idx_vector=None):
"""
Input parameter:
``idx_vector``: logical indexing
...
... (further description)
"""
if idx_vector == None:
idx_vector = np.ones(len(self.data))
return sum(self.data*idx_vector)
def get_max(self, idx_vector=None):
"""
Input parameter:
``idx_vector``: logical indexing
...
... (further description)
"""
if idx_vector == None:
idx_vector = np.ones(len(self.data))
return max(self.data*idx_vector)
Usage:
a = [1,2,3,4,5.0]
b = np.array([True, True, False, False, False])
ma = myarray(a)
print(ma.get_max())
# ----> 5.0
print(ma.get_max(b))
# ----> 2.0
A:
I'm not sure I understand you correctly. As I see it, your problem is that you have a lot of functions which all need to take the argument idx_vector, and you don't want to add it to each of their argument lists. If that's the case:
Short answer: you can't.
Longer answer: well, you could, but the function needs some way to refer to the argument which contains idx_vector. Now you could use *args to pick it up, and then args[0] to refer to it, but that doesn't change the fact that you still need to define some name within each of the functions for the variable which will hold idx_vector.
How about making idx_vector an attribute of myarray? I.e., if self.idx_vector is not None?
You can use a decorator to modify the docstring of a function, but remember that if the docstring is different in each case then there's not much point trying to abstract it away. And if it's the same for each function, well, it's not very good documentation! However, for the sake of example, here's the code:
def addDocstring( func ):
func.__doc__ = "Some description of the function."
return func
A:
Decorators are not the lexical equivalent of the C-preprocessor, nor should you try to make them such. Even if you could, your suggested use is both implicit (and therefore "ugly" per PEP 20) and uninformative which also violates "If the implementation is hard to explain, it's a bad idea."
Better would be:
import numpy as np
class myarray(object):
"""an array that has additional spam, eggs, and spam capabilities
an idx_vector argument, if present, contains an mask for
a subset the array, if not specified one will be created based
on the Frobnitz Conjecture"""
def __init__(self, data):
self.data = np.array(data)
def sum(self, idx_vector=None):
"""returns the sum of the elements of the array masked per idx_vector"""
if idx_vector is None:
idx_vector = np.ones(len(self.data))
def max(self, idx_vector=None):
"""returns the largest element of the array masked per idx_vector"""
Boilerplate comments are bad as they reduce readability. Redundancies like get_ are to be avoided as they are syntactic junk which also reduce readability. Finally if myarray is-a form of another array you might want to inherit from that instead of object. If you are trying to comply with some corporate standard that requires
"""
Input parameter:
``idx_vector``: logical indexing
..."""
then the standard is broken.
A:
This leads to a rather unobvious interface. Therefore, I don't recommend using the following for anything but play. However, it is possible to create a decorator to do what you request:
import numpy as np
import functools
def add_idx_vector(method):
@functools.wraps(method)
def wrapper(self, idx_vector=None):
if idx_vector == None: idx_vector = np.ones(len(self.data))
# This uses func_globals to inject a key,value pair into method's
# globals to spoof a default value.
method.func_globals['idx_vector']=idx_vector
return method(self)
wrapper.__doc__='''
Input parameter:
``idx_vector``: logical indexing
'''+wrapper.__doc__
return wrapper
class myarray(object):
def __init__(self, data):
self.data = np.array(data)
@add_idx_vector
def get_sum(self):
'''(further description)
'''
return sum(self.data*idx_vector)
a = [1,2,3,4,5.0]
b = np.array([True, True, False, False, False])
ma = myarray(a)
print(ma.get_sum())
# ----> 15.0
print(ma.get_sum(b))
# ----> 3.0
This shows the doc string has been modified too:
help(ma.get_sum)
# Help on method get_sum in module __main__:
# get_sum(self, idx_vector=None) method of __main__.myarray instance
# Input parameter:
# ``idx_vector``: logical indexing
# (further description)
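For comparison, a decorator can achieve both goals without the func_globals trick by keeping idx_vector an explicit argument of the wrapper. This is a hedged sketch, not the poster's actual code: plain lists stand in for numpy arrays, and the shared docstring text is illustrative:

```python
import functools

COMMON_DOC = """
Input parameter:
``idx_vector``: logical indexing (defaults to all ones)
"""

def with_idx_vector(method):
    """Fill in a default idx_vector and prepend the shared docstring."""
    @functools.wraps(method)
    def wrapper(self, idx_vector=None):
        if idx_vector is None:
            idx_vector = [1] * len(self.data)
        return method(self, idx_vector)
    wrapper.__doc__ = COMMON_DOC + (method.__doc__ or "")
    return wrapper

class MyArray:
    def __init__(self, data):
        self.data = list(data)

    @with_idx_vector
    def get_sum(self, idx_vector):
        """(further description)"""
        return sum(d * m for d, m in zip(self.data, idx_vector))

ma = MyArray([1, 2, 3, 4, 5.0])
print(ma.get_sum())                  # 15.0
print(ma.get_sum([1, 1, 0, 0, 0]))   # 3
print(ma.get_sum.__doc__)            # shared text plus "(further description)"
```

The caller interface stays obvious (idx_vector is a normal argument), and no globals are mutated between calls.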
| How can I use decorators in Python to specify and document repeatedly used arguments of methods? | One of my classes has a logical numpy array as a parameter repeated in many methods (idx_vector=None).
How can I use a decorator to:
automatically specify idx_vector
automatically insert the description into the docstring
Example without decorator:
import numpy as np
class myarray(object):
def __init__(self, data):
self.data = np.array(data)
def get_sum(self, idx_vector=None):
"""
Input parameter:
``idx_vector``: logical indexing
...
... (further description)
"""
if idx_vector == None:
idx_vector = np.ones(len(self.data))
return sum(self.data*idx_vector)
def get_max(self, idx_vector=None):
"""
Input parameter:
``idx_vector``: logical indexing
...
... (further description)
"""
if idx_vector == None:
idx_vector = np.ones(len(self.data))
return max(self.data*idx_vector)
Usage:
a = [1,2,3,4,5.0]
b = np.array([True, True, False, False, False])
ma = myarray(a)
print(ma.get_max())
# ----> 5.0
print(ma.get_max(b))
# ----> 2.0
| [
"I'm not sure I understand you correctly. As I see it, your problem is that you have a lot of functions which all need to take the argument idx_vector, and you don't want to add it to each of their argument lists. If that's the case:\nShort answer: you can't.\nLonger answer: well, you could, but the function needs ... | [
2,
2,
1
] | [] | [] | [
"decorator",
"python"
] | stackoverflow_0003341752_decorator_python.txt |
Q:
Templates in different directory
I want to place the project hierarchy in this way
project
|-app
| \app1
| |-templates
| \app1_templ.html
| |- views.py
| \-models.py
|-templates
| \main.html
|-views.py
|-models.py
...
But in main.html I want to use `{% include app1_templ.html %}`. Assuming the views.py from app1:
def main(request):
info= Informations.objects.all()
return render_to_response('main.html', {'a':info})
and the app1_templ.html
{% for b in a %}
<li> title:{{ b.title }}</li>
{%empty %}
EMPTY
{% endfor %}
In settings.py I have added an extra folder to TEMPLATE_DIRS:
os.path.join(os.path.dirname(__file__), 'apps/app1/app1_templ').replace('\\','/'),
Rendering the page always seems to give EMPTY no matter what.
Why does the a list seem to be empty?
A:
Try passing in the context_instance:
return render_to_response('main.html', {'a': info}, context_instance=RequestContext(request))
| Templates in different directory | I want to place the project hierarchy in this way
project
|-app
| \app1
| |-templates
| \app1_templ.html
| |- views.py
| \-models.py
|-templates
| \main.html
|-views.py
|-models.py
...
But in main.html I want to use `{% include app1_templ.html %}`. Assuming the views.py from app1:
def main(request):
info= Informations.objects.all()
return render_to_response('main.html', {'a':info})
and the app1_templ.html
{% for b in a %}
<li> title:{{ b.title }}</li>
{%empty %}
EMPTY
{% endfor %}
In settings.py I have added an extra folder to TEMPLATE_DIRS:
os.path.join(os.path.dirname(__file__), 'apps/app1/app1_templ').replace('\\','/'),
Rendering the page always seems to give EMPTY no matter what.
Why does the a list seem to be empty?
| [
"Try passing in the context_instance:\nreturn render_to_response('main.html', {'a': info}, context_instance=RequestContext(request))\n\n"
] | [
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003337615_django_python.txt |
Q:
Hiding the time in a datetime model field in Django?
I'm using Thauber's lovely Django schedule app, but have run into a problem: I can't figure out how to exclude the time portion of the datetime field.
My form class and lame exclusion attempt looks like this:
class LaundryDeliveryForm(EventForm):
start = forms.DateTimeField(widget=forms.SplitDateTimeWidget)
end = forms.DateTimeField(widget=forms.SplitDateTimeWidget, help_text = ("The end time must be later than start time."))
class Meta:
model = LaundryDelivery
exclude = EventForm.Meta.exclude + ('active', 'start.time')
Ideally, I'd like to just store the day, or store the start.time as some arbitrary time like 8 a.m., but while the active field is happily excluded, the time field remains open.
Tried sifting through the documentation but couldn't track what I needed down.
A:
You can use, DateInput widget as widget for current form, or forms.DateField(), which has default DateInput widget.
| Hiding the time in a datetime model field in Django? | I'm using Thauber's lovely Django schedule app, but have run into a problem: I can't figure out how to exclude the time portion of the datetime field.
My form class and lame exclusion attempt looks like this:
class LaundryDeliveryForm(EventForm):
start = forms.DateTimeField(widget=forms.SplitDateTimeWidget)
end = forms.DateTimeField(widget=forms.SplitDateTimeWidget, help_text = ("The end time must be later than start time."))
class Meta:
model = LaundryDelivery
exclude = EventForm.Meta.exclude + ('active', 'start.time')
Ideally, I'd like to just store the day, or store the start.time as some arbitrary time like 8 a.m., but while the active field is happily excluded, the time field remains open.
Tried sifting through the documentation but couldn't track what I needed down.
| [
"You can use, DateInput widget as widget for current form, or forms.DateField(), which has default DateInput widget.\n"
] | [
3
] | [] | [] | [
"datetime",
"django",
"python",
"schedule"
] | stackoverflow_0003344630_datetime_django_python_schedule.txt |
Q:
sqlalchemy in a single python script file
I've read about using sqlalchemywithin the pylons framework.
How will things work if I need it for a simple script file?
I have like importer.py that is spidering a site and I want to save to mysql.
If things are in a single file, can I still use sqlalchemy?
How do I setup my model/mappings then?
A:
If things are in a single file, can I still use sqlalchemy?
Yes, SQLAlchemy does not impose any restrictions on the way you use it.
You can see example on single-script initialization here: http://www.sqlalchemy.org/trac/attachment/ticket/1328/sqlalchemy-bug-query_Employee_company.py
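A minimal single-file sketch along those lines, assuming SQLAlchemy 1.4 or newer is installed; an in-memory SQLite database and a hypothetical Page model stand in for the real MySQL setup:

```python
# Single-script SQLAlchemy: engine, model, mapping, and session in one file.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Page(Base):  # hypothetical model for a spidered page
    __tablename__ = "pages"
    id = Column(Integer, primary_key=True)
    url = Column(String(255))

engine = create_engine("sqlite://")  # for MySQL: "mysql://user:pw@host/db"
Base.metadata.create_all(engine)     # create tables for this script's run
Session = sessionmaker(bind=engine)

session = Session()
session.add(Page(url="http://example.com"))
session.commit()
print(session.query(Page).count())  # 1
```

The importer loop would simply `session.add(...)` each spidered result and commit in batches.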
| sqlalchemy in a single python script file | I've read about using sqlalchemy within the Pylons framework.
How will things work if I need it for a simple script file?
I have like importer.py that is spidering a site and I want to save to mysql.
If things are in a single file, can I still use sqlalchemy?
How do I setup my model/mappings then?
| [
"\nIf things are in a single file, can I still use sqlalchemy? \n\nYes, SQLAlchemy does not impose any restrictions on the way you use it. \nYou can see an example of single-script initialization here: http://www.sqlalchemy.org/trac/attachment/ticket/1328/sqlalchemy-bug-query_Employee_company.py\n"
] | [
3
] | [] | [] | [
"python",
"sqlalchemy"
] | stackoverflow_0003344607_python_sqlalchemy.txt |
Q:
Python - When Is It Ok to Use os.system() to issue common Linux commands
Spinning off from another thread, when is it appropriate to use os.system() to issue commands like rm -rf, cd, make, xterm, ls ?
Considering there are analogous versions of the above commands (except make and xterm), I'm assuming it's safer to use these built-in Python commands instead of using os.system().
Any thoughts? I'd love to hear them.
A:
Rule of thumb: if there's a built-in Python function to achieve this functionality use this function. Why? It makes your code portable across different systems, more secure and probably faster as there will be no need to spawn an additional process.
A:
One of the problems with system() is that it implies knowledge of the shell's syntax and language for parsing and executing your command line. This creates potential for a bug where you didn't validate input properly, and the shell might interpet something like variable substitution or determining where an argument begins or ends in a way you don't expect. Also, another OS's shell might have divergent syntax from your own, including very subtle divergence that you won't notice right away. For reasons like these I prefer to use execve() instead of system() -- you can pass argv tokens directly and not have to worry about something in the middle (mis-)parsing your input.
Another problem with system() (this also applies to using execve()) is that when you code that, you are saying, "look for this program, and pass it these args". This makes a couple of assumptions which may lead to bugs. First is that the program exists and can be found in $PATH. Maybe on some system it won't. Second, maybe on some system, or even a future version of your own OS, it will support a different set of options. In this sense, I would avoid doing this unless you are absolutely certain the system you will run on will have the program. (Like maybe you put the callee program on the system to begin with, or the way you invoke it is mandated by something like POSIX.)
Lastly... There's also a performance hit associated with looking for the right program, creating a new process, loading the program, etc. If you are doing something simple like a mv, it's much more efficient to use the system call directly.
These are just a few of the reasons to avoid system(). Surely there are more.
A:
Darin's answer is a good start.
Beyond that, it's a matter of how portable you plan to be. If your program is only ever going to run on a reasonably "standard" and "modern" Linux then there's no reason for you to re-invent the wheel; if you tried to re-write make or xterm they'd be sending the men in the white coats for you. If it works and you don't have platform concerns, knock yourself out and simply use Python as glue!
If compatibility across unknown systems was a big deal you could try looking for libraries to do what you need done in a platform independent way. Or you need to look into a way to call on-board utilities with different names, paths and mechanisms depending on which kind of system you're on.
A:
The only time that os.system might be appropriate is for a quick-and-dirty solution for a non-production script or some kind of testing. Otherwise, it is best to use built-in functions.
A:
Your question seems to have two parts. You mention calling commands like "xterm", "rm -rf", and "cd".
Side Note: you cannot call 'cd' in a sub-shell. I bet that was a trick question ...
As far as other command-level things you might want to do, like "rm -rf SOMETHING", there is already a python equivalent. This answers the first part of your question. But I suspect you are really asking about the second part.
The second part of your question can be rephrased as "should I use system() or something like the subprocess module?".
I have a simple answer for you: just say NO to using "system()", except for prototyping.
It's fine for verifying that something works, or for that "quick and dirty" script, but there are just too many problems with os.system():
It forks a shell for you -- fine if you need one
It expands wild cards for you -- fine unless you don't have any
It handles redirect -- fine if you want that
It dumps output to stderr/stdout and reads from stdin by default
It tries to understand quoting, but it doesn't do very well (try 'Cmd" > "Ofile')
Related to #5, it doesn't always grok argument boundaries (i.e. arguments with spaces in them might get screwed up)
Just say no to "system()"!
A:
I would suggest that you only use use os.system for things that there are not already equivalents for within the os module. Why make your life harder?
A:
The os.system call is starting to be 'frowned upon' in python. The 'new' replacement would be subprocess.call or subprocess.Popen in the subprocess module. Check the docs for subprocess
The other nice thing about subprocess is you can read the stdout and stderr into variables, and process that without having to redirect to other file(s).
Like others have said above, there are modules for most things. Unless you're trying to glue together many other commands, I'd stick with the things included in the library. If you're copying files, use shutil, working with archives you've got modules like tarfile/zipfile and so on.
Good luck.
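A small illustration of the subprocess replacement recommended above, using subprocess.run (Python 3.5+; the original answers predate it and would use subprocess.call or Popen.communicate). It assumes a Unix system where /tmp exists:

```python
import subprocess

# Pass argv as a list: no shell parsing, quoting, or wildcard surprises,
# and stdout/stderr are captured into variables instead of leaking out.
result = subprocess.run(
    ["ls", "-d", "/tmp"],
    capture_output=True,  # Python 3.7+; earlier: stdout=subprocess.PIPE
    text=True,
)
print(result.returncode)      # 0
print(result.stdout.strip())  # /tmp
```

Add `check=True` to raise CalledProcessError on a non-zero exit instead of inspecting returncode by hand.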
| Python - When Is It Ok to Use os.system() to issue common Linux commands | Spinning off from another thread, when is it appropriate to use os.system() to issue commands like rm -rf, cd, make, xterm, ls ?
Considering there are analogous versions of the above commands (except make and xterm), I'm assuming it's safer to use these built-in Python commands instead of using os.system().
Any thoughts? I'd love to hear them.
| [
"Rule of thumb: if there's a built-in Python function to achieve this functionality use this function. Why? It makes your code portable across different systems, more secure and probably faster as there will be no need to spawn an additional process.\n",
"One of the problems with system() is that it implies knowl... | [
19,
5,
4,
3,
3,
2,
1
] | [] | [] | [
"centos",
"linux",
"python"
] | stackoverflow_0003338616_centos_linux_python.txt |
Q:
Splitting a string separated by "\r\n" into a list of lines?
I am reading in some data from the subprocess module's communicate method. It is coming in as a large string separated by "\r\n"s. I want to split this into a list of lines. How is this performed in python?
A:
Use the splitlines method on the string.
From the docs:
str.splitlines([keepends])
Return a list of the lines in the string, breaking at line boundaries.
Line breaks are not included in the
resulting list unless keepends is
given and true.
This will do the right thing whether the line endings are "\r\n", "\r" or "\n" regardless of the OS.
NB a line ending of "\n\r" will also split, but you will get an empty string between each line since it will consider "\n" as a valid line ending and "\r" as the ending of the next line. e.g.
>>> "foo\n\rbar".splitlines()
['foo', '', 'bar']
A:
Check out the doc for string methods. In particular the split method.
http://docs.python.org/library/stdtypes.html#string-methods
A:
import re
s = re.split(r"[\r\n]+", string_to_split)
This will give you a list of strings in s.
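A quick comparison of the two approaches from the answers above:

```python
data = "line one\r\nline two\r\nline three\r\n"

# splitlines handles \r\n, \n, and \r, and does not leave a trailing
# empty string for the final line ending.
print(data.splitlines())   # ['line one', 'line two', 'line three']
print(data.split("\r\n"))  # ['line one', 'line two', 'line three', '']
```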
| Splitting a string separated by "\r\n" into a list of lines? | I am reading in some data from the subprocess module's communicate method. It is coming in as a large string separated by "\r\n"s. I want to split this into a list of lines. How is this performed in python?
| [
"Use the splitlines method on the string.\nFrom the docs:\n\nstr.splitlines([keepends])\n Return a list of the lines in the string, breaking at line boundaries.\n Line breaks are not included in the\n resulting list unless keepends is\n given and true.\n\nThis will do the right thing whether the line ending... | [
56,
2,
1
] | [] | [] | [
"python",
"string"
] | stackoverflow_0003345030_python_string.txt |
Q:
Dynamically update wxPython staticText
I was wondering how to update a StaticText dynamically in wxPython.
I have a script that runs every five minutes and reads a status from a webpage, then displays the status in a StaticText using wxPython.
How would I dynamically update the StaticText every 5 minutes to reflect the status?
thanks a lot
-soule
A:
Use a wx.Timer. You bind the timer to an event and in the event handler you call the StaticText control's SetLabel.
See the following page for an example on timers:
http://www.blog.pythonlibrary.org/2009/08/25/wxpython-using-wx-timers/
As for setting the label, the code would look something like this:
self.myStaticText.SetLabel("foobar")
Hope that helps!
A:
Call the SetLabel method in your static text instance. So you don't run into conflict with the size, make sure your StaticText instance is created with enough space to write the future labels you'll want to write to it.
| Dynamically update wxPython staticText | I was wondering how to update a StaticText dynamically in wxPython.
I have a script that runs every five minutes and reads a status from a webpage, then displays the status in a StaticText using wxPython.
How would I dynamically update the StaticText every 5 minutes to reflect the status?
thanks a lot
-soule
| [
"Use a wx.Timer. You bind the timer to an event and in the event handler you call the StaticText control's SetLabel.\nSee the following page for an example on timers:\nhttp://www.blog.pythonlibrary.org/2009/08/25/wxpython-using-wx-timers/\nAs for setting the label, the code would look something like this:\nself.myS... | [
11,
1
] | [] | [] | [
"python",
"refresh",
"wxpython"
] | stackoverflow_0003339263_python_refresh_wxpython.txt |
Q:
I'm trying to make a wx.Frame with variable transparency (based on the png mapped in the erase bk event)
I'm trying to make a special splash screen that is displayed while the application is loading;
it outputs messages as the various components load and features a progress bar.
The first job I am tackling is mapping a .png image to the frame that will host the splash screen.
import wx
class edSplash(wx.Frame):
def __init__(self, parent, title):
wx.Frame.__init__(self, parent, -1, title, size=(410, 410), style=wx.NO_BORDER)
self.SetBackgroundStyle(wx.BG_STYLE_CUSTOM)
self.Center()
self.Bind(wx.EVT_ERASE_BACKGROUND, self.OnEraseBackground)
return
def OnEraseBackground(self, evt):
dc = evt.GetDC()
if not dc:
dc = wx.ClientDC(self)
rect = self.GetUpdateRegion().GetBox()
dc.SetClippingRect(rect)
tempBrush = wx.Brush((0,0,0,0),wx.TRANSPARENT)
print tempBrush
dc.SetBackground(tempBrush)
dc.SetBackgroundMode(wx.TRANSPARENT)
#dc.Clear()
img = wx.Image("splash.png", wx.BITMAP_TYPE_PNG, -1)
bmp = wx.BitmapFromImage(img)
dc.DrawBitmap(bmp, 0, 0, True)
def PushMessage(self, mesage):
print mesage
class edApp(wx.App):
def OnInit(self):
splash = edSplash(None, 'Ed')
self.SetTopWindow(splash)
splash.Show(True)
return True
if __name__ == '__main__':
edApp(redirect=False).MainLoop()
The problem is that dc.Clear() clears to an opaque rectangle, although I have set its brush and mode to transparent (I think :D). Commenting out dc.Clear() gives me the desired variable transparency based on the .png's alpha channel, but the window gathers image noise from the neighboring windows.
How could I get both the .png's transparency and have the background clear to a transparent brush to keep from gathering image noise?
A:
Maybe you should try putting the background image onto a panel rather than the frame. Here's one way to do it:
http://www.blog.pythonlibrary.org/2010/03/18/wxpython-putting-a-background-image-on-a-panel/
| I'm trying to make a wx.Frame with variable transparency (based on the png mapped in the erase bk event) | I'm trying to make a special splash screen that is displayed while the application is loading;
it outputs messages as the various components load and features a progress bar.
The first job I am tackling is mapping a .png image to the frame that will host the splash screen.
import wx
class edSplash(wx.Frame):
def __init__(self, parent, title):
wx.Frame.__init__(self, parent, -1, title, size=(410, 410), style=wx.NO_BORDER)
self.SetBackgroundStyle(wx.BG_STYLE_CUSTOM)
self.Center()
self.Bind(wx.EVT_ERASE_BACKGROUND, self.OnEraseBackground)
return
def OnEraseBackground(self, evt):
dc = evt.GetDC()
if not dc:
dc = wx.ClientDC(self)
rect = self.GetUpdateRegion().GetBox()
dc.SetClippingRect(rect)
tempBrush = wx.Brush((0,0,0,0),wx.TRANSPARENT)
print tempBrush
dc.SetBackground(tempBrush)
dc.SetBackgroundMode(wx.TRANSPARENT)
#dc.Clear()
img = wx.Image("splash.png", wx.BITMAP_TYPE_PNG, -1)
bmp = wx.BitmapFromImage(img)
dc.DrawBitmap(bmp, 0, 0, True)
def PushMessage(self, mesage):
print mesage
class edApp(wx.App):
def OnInit(self):
splash = edSplash(None, 'Ed')
self.SetTopWindow(splash)
splash.Show(True)
return True
if __name__ == '__main__':
edApp(redirect=False).MainLoop()
The problem is that dc.Clear() clears to an opaque rectangle, although I have set its brush and mode to transparent (I think :D). Commenting out dc.Clear() gives me the desired variable transparency based on the .png's alpha channel, but the window gathers image noise from the neighboring windows.
How could I get both the .png's transparency and have the background clear to a transparent brush to keep from gathering image noise?
| [
"Maybe you should try putting the background image onto a panel rather than the frame. Here's one way to do it:\nhttp://www.blog.pythonlibrary.org/2010/03/18/wxpython-putting-a-background-image-on-a-panel/\n"
] | [
1
] | [] | [] | [
"python",
"wxpython",
"wxwidgets"
] | stackoverflow_0003343095_python_wxpython_wxwidgets.txt |
Q:
What are the existing open-source Python WxWidgets designers?
What are the usable tools?
I am aware of wxformbuilder and wxGlade, but none of them seems to be complete yet.
A:
Here are a few of the most popular wxPython related GUI builders:
Boa Constructor (mostly dead)
wxGlade
wxFormBuilder
XRCed
wxDesigner (not FOSS)
Dabo - one of their videos shows a way to interactively design an app...
I personally just use a Python IDE to hand code my applications. My current favorite IDE is Wing.
A:
I've been looking for them too and sadly I've come up empty handed. I used to like Boa Constructor and PythonCard back in the day but both projects seem to have stalled. There is an attempt to get PythonCard going again @ http://trac.medianix.org/wiki/Pycard ; the site was down when I checked last but the mailing list seemed moderately active. Dabo's another actively-developed option.
A:
afaik... none. I'll follow the answers to see if someone has one and try it of course, but I'm not convinced this will be THAT useful. When using wxPython, you usually work with sizers (at least I think the results are better), so you don't really need to "place" the controls on the frame, and I think a GUI "design" would take longer to do. The only part where I think it could have some interest is filling attributes for the controls, but a good auto-complete with wx (or a good cheat sheet or a "template class" with all the options you use) solves the problem in my opinion. I stopped looking for a GUI designer for wx after trying Pydev, which auto-completes wx very nicely (a lot better than everything I tried before... and that's a lot!).
| What are the existing open-source Python WxWidgets designers? | What are the usable tools?
I am aware of wxformbuilder and wxGlade, but none of them seems to be complete yet.
| [
"Here are a few of the most popular wxPython related GUI builders:\n\nBoa Constructor (mostly dead) \nwxGlade\nwxFormBuilder \nXRCed \nwxDesigner (not FOSS)\nDabo - one of their videos shows a way to interactively design an app...\n\nI personally just use a Python IDE to hand code my applications. My current favori... | [
2,
1,
1
] | [] | [] | [
"python",
"wxpython",
"wxwidgets"
] | stackoverflow_0003329762_python_wxpython_wxwidgets.txt |
Q:
Python: drawbacks to using `signal.alarm` to timeout I/O?
What are the disadvantages to using signal.alarm to timeout Python I/O?
I ask because I have found that socket.settimeout isn't entirely reliable[0] and I'd like finer control over the timeouts for different operations[1].
So, as far as I can tell, the drawbacks are:
Added signal call overhead (but if you're doing I/O, this shouldn't be surprising)
It can get "more complex" if the application is using threading (but if threading is already being used, spawning a monitor thread would be easier)
Am I missing anything?
[0]: for example, it won't timeout slow DNS lookups.
[1]: for example, I want some reads to have a short timeout, while other reads should have a longer timeout.
A:
You will likely not get far when using signals and threads together. From the signal documentation:
...
only the main thread can set a new
signal handler, and the main thread
will be the only one to receive
signals (this is enforced by the
Python signal module, even if the
underlying thread implementation
supports sending signals to individual
threads)...
As far as drawbacks go, the interpreter will only call the signal handler after the interpreter returns to executing in the main thread context. If you have a long-running C extension call (SQL query, system I/O operation, etc) then SIGINT and SIGTERM signals will not get handled until after they return. If you have strict timing requirements that you are enforcing, then this will not help. The best way I have come up for getting around this is to distribute tasks to a child process, use SIGKILL to kill the child process on a timeout, and then create a replacement child process (a custom process pool).
Also, if you want to use the signal handler as a way of jumping out of a code block via asynchronously-raised exceptions, there is no guarantee for the sanity of your program state. The exception itself could even end up getting raised in unintended places.
For your needs with socket programming, you should start with using non-blocking I/O and a select loop to poll for readiness. Or you can use the asyncore module or the non-standard twisted library which abstract away a lot of those details.
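The child-process approach described above (distribute the task, SIGKILL on timeout) can be sketched as follows. The names run_with_deadline and slow_task are invented for the demo, and it assumes a POSIX system (SIGKILL and the "fork" start method are not available on Windows):

```python
import multiprocessing
import os
import signal
import time

# "fork" keeps the demo simple and POSIX-only; spawn/forkserver would
# re-import this module in the child.
ctx = multiprocessing.get_context("fork")

def slow_task():
    # Stands in for a long-running C extension call that ignores signals.
    time.sleep(30)

def run_with_deadline(target, timeout):
    """Run target in a child process; kill it if it outlives the deadline."""
    p = ctx.Process(target=target)
    p.start()
    p.join(timeout)                    # wait at most `timeout` seconds
    if p.is_alive():                   # deadline blown: kill unconditionally
        os.kill(p.pid, signal.SIGKILL)
        p.join()
    return p.exitcode                  # negative signal number if killed

print(run_with_deadline(slow_task, 0.2))
```

A real pool would then start a replacement child, as the answer suggests.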
| Python: drawbacks to using `signal.alarm` to timeout I/O? | What are the disadvantages to using signal.alarm to timeout Python I/O?
I ask because I have found that socket.settimeout isn't entirely reliable[0] and I'd like finer control over the timeouts for different operations[1].
So, as far as I can tell, the drawbacks are:
Added signal call overhead (but if you're doing I/O, this shouldn't be surprising)
It can get "more complex" if the application is using threading (but if threading is already being used, spawning a monitor thread would be easier)
Am I missing anything?
[0]: for example, it won't timeout slow DNS lookups.
[1]: for example, I want some reads to have a short timeout, while other reads should have a longer timeout.
| [
"You will likely not get far when using signals and threads together. From the signal documentation:\n\n...\n only the main thread can set a new\n signal handler, and the main thread\n will be the only one to receive\n signals (this is enforced by the\n Python signal module, even if the\n underlying thread i... | [
2
] | [] | [] | [
"python",
"timeout"
] | stackoverflow_0003345519_python_timeout.txt |
Q:
Python: Deleting files of a certain age
So at the moment I'm trying to delete files listed in the directory that are 1 minute old, I will change that value once I have the script working.
The code below returns the error: AttributeError: 'str' object has no attribute 'mtime'
import time
import os
#from path import path
seven_days_ago = time.time() - 60
folder = '/home/rv/Desktop/test'
for somefile in os.listdir(folder):
if int(somefile.mtime) < seven_days_ago:
somefile.remove()
A:
import time
import os
one_minute_ago = time.time() - 60
folder = '/home/rv/Desktop/test'
os.chdir(folder)
for somefile in os.listdir('.'):
st=os.stat(somefile)
mtime=st.st_mtime
if mtime < one_minute_ago:
print('remove %s'%somefile)
# os.unlink(somefile) # uncomment only if you are sure
A:
That's because somefile is a string, a relative filename. What you need to do is to construct the full path (i.e., absolute path) of the file, which you can do with the os.path.join function, and pass it to the os.stat, the return value will have an attribute st_mtime which will contain your desired value as an integer.
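A minimal sketch of that approach (old_files is an illustrative helper name; it only lists the candidates and leaves any deletion to the caller):

```python
import os
import time

def old_files(folder, max_age_seconds=60):
    """Return full paths of entries whose mtime is older than the cutoff."""
    cutoff = time.time() - max_age_seconds
    return [os.path.join(folder, name)                 # build the full path first
            for name in os.listdir(folder)
            if os.stat(os.path.join(folder, name)).st_mtime < cutoff]
```

Each returned path can then be passed to os.remove() once the listing looks right.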
| Python: Deleting files of a certain age | So at the moment I'm trying to delete files listed in the directory that are 1 minute old, I will change that value once I have the script working.
The code below returns the error: AttributeError: 'str' object has no attribute 'mtime'
import time
import os
#from path import path
seven_days_ago = time.time() - 60
folder = '/home/rv/Desktop/test'
for somefile in os.listdir(folder):
if int(somefile.mtime) < seven_days_ago:
somefile.remove()
| [
"import time\nimport os\n\none_minute_ago = time.time() - 60 \nfolder = '/home/rv/Desktop/test'\nos.chdir(folder)\nfor somefile in os.listdir('.'):\n st=os.stat(somefile)\n mtime=st.st_mtime\n if mtime < one_minute_ago:\n print('remove %s'%somefile)\n # os.unlink(somefile) # uncomment only if... | [
13,
7
] | [] | [] | [
"python"
] | stackoverflow_0003345953_python.txt |
Q:
Efficient way of adding one character at a time from one string to another in Python
I'm currently making a function using pygame that draws a message on the screen, adding one character each frame (i.e. The Hunt for Red October). I know that I could simply copy (or pass) gradually bigger slices from the original string, but I know that it would be very resource-intensive. Is there a better way to do this?
Code, using gradually bigger slices:
def full_screen_dialog_tt(thesurface, thefont, theclock, message, thebeep):
i = 0
while(i < len(message)): # Initialize the string display
theclock.tick(60)
thesurface.fill((0, 0, 0))
thesurface.blit(thefont.render(message[i]+"_"))
pygame.display.flip()
thebeep.play()
while(1): # Whole string is here now
theclock.tick(60)
for event in pygame.events.get():
if event.type == MOUSEBUTTONDOWN: return
A:
In a place where you are intentionally slowing down the game (for the text fade-in) - does it really matter? You could pass the whole string, and change the display routine to display one more letter in every frame.
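As a pure-Python illustration of that idea (pygame calls omitted), slicing one extra character per frame is cheap; precomputing every frame's string is trivial even for long messages:

```python
message = "The Hunt for Red October"

# One string per frame: frame i shows the first i characters plus a cursor.
frames = [message[:i] + "_" for i in range(1, len(message) + 1)]

print(frames[0])    # first frame
print(frames[-1])   # final frame shows the whole message
```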
A:
Can't you just print the characters one at a time displaced without clearing the background? You can get the character using slicing.
A:
Assuming you're having a screen object available, this might work:
import time
text = 'The Hunt for Red October'
myfont = pygame.font.SysFont("arial", 16)
for index, ch in enumerate(text):
letter = myfont.render(ch, True, (0, 0, 0), (255, 255, 255))
screen.blit(letter, (index * 20, 20))
time.sleep(1)
| Efficient way of adding one character at a time from one string to another in Python | I'm currently making a function using pygame that draws a message on the screen, adding one character each frame (i.e. The Hunt for Red October). I know that I could simply copy (or pass) gradually bigger slices from the original string, but I know that it would be very resource-intensive. Is there a better way to do this?
Code, using gradually bigger slices:
def full_screen_dialog_tt(thesurface, thefont, theclock, message, thebeep):
i = 0
while(i < len(message)): # Initialize the string display
theclock.tick(60)
thesurface.fill((0, 0, 0))
thesurface.blit(thefont.render(message[i]+"_"))
pygame.display.flip()
thebeep.play()
while(1): # Whole string is here now
theclock.tick(60)
for event in pygame.events.get():
if event.type == MOUSEBUTTONDOWN: return
| [
"In a place where you are intentionally slowing down the game (for the text fade-in) - does it really matter? You could pass the whole string, and change the display routine to display one more letter in every frame.\n",
"Can't you just print the characters one at a time displaced without clearing the background?... | [
4,
1,
1
] | [
"You can access a single character from a string using indexes:\n>>> s = 'string'\n>>> s[2]\n'r'\n\n"
] | [
-2
] | [
"processing_efficiency",
"python",
"string"
] | stackoverflow_0003346005_processing_efficiency_python_string.txt |
Q:
Why is it not safe to modify sequence being iterated on?
It is not safe to modify the sequence being iterated over in the loop (this can only happen for mutable sequence types, such as lists). If you need to modify the list you are iterating over (for example, to duplicate selected items) you must iterate over a copy. The slice notation makes this particularly convenient:
>>> for x in a[:]: # make a slice copy of the entire list
... if len(x) > 6: a.insert(0, x)
...
>>> a
['defenestrate', 'cat', 'window', 'defenestrate']
why is it not safe to just do for x in a ??
A:
Without getting too technical:
If you're iterating through a mutable sequence in Python and the sequence is changed while it's being iterated through, it is not always entirely clear what will happen. If you insert an element in the sequence while iterating through it, what would now reasonably be considered the "next" element in the sequence? What if you delete the next object?
For this reason, iterating through a mutable sequence while you're changing it leads to unspecified behaviour. Anything might happen, depending on exactly how the list is implemented. :-)
A:
This is a common problem in many languages. If you have a linear data structure, and you are iterating over it, something must keep track of where you are in the structure. It might be a current index, or a pointer, but it's some kind of finger pointing to the "current place".
If you modify the list while the iteration is happening, that cursor will likely be incorrect.
A common problem is that you remove the item under the cursor, everything slides down one, the next iteration of the loop increments the cursor, and you've inadvertently skipped an item.
Some data structure implementations offer the ability to delete items while iterating, but most do not.
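The skipped-item case described above is easy to reproduce in CPython, where the list iterator keeps a numeric cursor:

```python
a = ['a', 'b', 'c', 'd']
seen = []
for x in a:
    seen.append(x)
    if x == 'b':
        a.remove('b')   # items after 'b' shift left, under the cursor

print(seen)             # 'c' never appears: the cursor stepped past it
```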
A:
When you modify a collection that you are iterating over the iterator can behave unexpectedly, for example missing out items or returning the same item twice.
This code loops indefinitely when I run it:
>>> a = [ 'foo', 'bar', 'baz' ]
>>> for x in a:
... if x == 'bar': a.insert(0, 'oops')
This is because the iterator uses a numeric index to keep track of where it is in the list. Adding an item at the beginning of the list results in the item 'bar' being returned again instead of the iterator advancing to the next item, because it has been shifted forward by the insertion.
| Why is it not safe to modify sequence being iterated on? |
It is not safe to modify the sequence being iterated over in the loop (this can only happen for mutable sequence types, such as lists). If you need to modify the list you are iterating over (for example, to duplicate selected items) you must iterate over a copy. The slice notation makes this particularly convenient:
>>> for x in a[:]: # make a slice copy of the entire list
... if len(x) > 6: a.insert(0, x)
...
>>> a
['defenestrate', 'cat', 'window', 'defenestrate']
why is it not safe to just do for x in a ??
| [
"Without getting too technical:\nIf you're iterating through a mutable sequence in Python and the sequence is changed while it's being iterated through, it is not always entirely clear what will happen. If you insert an element in the sequence while iterating through it, what would now reasonably be considered the ... | [
17,
13,
3
] | [] | [] | [
"python"
] | stackoverflow_0003346696_python.txt |
Q:
PYTHONPATH hell with overlapping package structures
I'm having problems with my PythonPath on windows XP, and I'm wondering if I'm doing something wrong.
Say that I have a project (created with Pydev) that has an src directory. Under src I have a single package, named common, and in it a single class module, named service.py with a class name Service
Say now that I have another project (also created with Pydev) with an src directory and a common package. In the common package, I have a single script, client.py, that imports service.
So in other words, two separate disk locations, but same package.
I've noticed that even if I set my PYTHONPATH to include both src directories, the import fails unless the files are both in the same directory. I get the dreaded no module found.
Am I misunderstanding how python resolves module names? I'm used to Java and its classpath hell.
A:
If you really must have a split package like this, read up on the module level attribute __path__.
In short, make one of the 'src' directories the main one, and give it an __init__.py that appends the path of other 'src' to the __path__ list. Python will now look in both places when looking up submodules of 'src'.
I really don't recommend this for the long term though. It is kind of brittle and breaks if you move things around.
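Still, for reference, here is a self-contained demonstration of the graft. The directory names are invented for the demo; the key line is the __path__.append written into the main package's __init__.py:

```python
import os
import sys
import tempfile

base = tempfile.mkdtemp()
main = os.path.join(base, "main_root", "common")
extra = os.path.join(base, "extra_root", "common")
os.makedirs(main)
os.makedirs(extra)

# The "main" common/__init__.py grafts the second directory onto __path__.
with open(os.path.join(main, "__init__.py"), "w") as f:
    f.write("__path__.append(%r)\n" % extra)
with open(os.path.join(extra, "service.py"), "w") as f:
    f.write("NAME = 'service'\n")

sys.path.append(os.path.join(base, "main_root"))
from common import service   # resolved via the extended __path__
print(service.NAME)
```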
A:
I think in Python you are best off avoiding this problem by providing each package with a unique name. Don't name both packages common. Then you can import both with something like
import common1.service as cs
import common2.client as cc
A:
If you try to import like this:
import src.common.service
Python will look on the Python path for a directory named "src" (or an egg, etc). Once it finds "src", it will not consider another one. If the first "src" doesn't have common and service inside it, then you will get an ImportError, even if another "src" directory in the path does have those things.
| PYTHONPATH hell with overlapping package structures | I'm having problems with my PythonPath on windows XP, and I'm wondering if I'm doing something wrong.
Say that I have a project (created with Pydev) that has an src directory. Under src I have a single package, named common, and in it a single class module, named service.py with a class name Service
Say now that I have another project (also created with Pydev) with an src directory and a common package. In the common package, I have a single script, client.py, that imports service.
So in other words, two separate disk locations, but same package.
I've noticed that even if I set my PYTHONPATH to include both src directories, the import fails unless the files are both in the same directory. I get the dreaded no module found.
Am I misunderstanding how python resolves module names? I'm used to Java and its classpath hell.
| [
"If you really must have a split package like this, read up on the module level attribute __path__.\nIn short, make one of the 'src' directories the main one, and give it an __init__.py that appends the path of other 'src' to the __path__ list. Python will now look in both places when looking up submodules of 'src'... | [
2,
1,
1
] | [] | [] | [
"python",
"pythonpath"
] | stackoverflow_0003346482_python_pythonpath.txt |
Q:
Python Base64 print problem
I have a base64 encoded string
When I decode the string this way:
>>> import base64
>>> base64.b64decode("XH13fXM=")
'\\}w}s'
The output is fine.
But when i use it like this:
>>> d = base64.b64decode("XH13fXM=")
>>> print d
\}w}s
some characters are missing
Can anyone advise ?
Thank you in advance.
A:
It is just a matter of presentation:
>>> '\\}w}s'
'\\}w}s'
>>> print(_, len(_))
\}w}s 5
This string has 5 characters. When you use it in code you need to escape backslash, or use raw string literals:
>>> r'\}w}s'
'\\}w}s'
>>> r'\}w}s' == '\\}w}s'
True
A:
When you print a string, the characters in the string are output. When the interactive shell shows you the value of your last statement, it prints the __repr__ of the string, not the string itself. That's why there are single-quotes around it, and your backslash has been escaped.
No characters are missing from your second example, those are the 5 characters in your string. The first example has had characters add to make the output a legal Python string literal.
If you want to use the print statement and have the output look like the first example, then use:
print repr(d)
| Python Base64 print problem | I have a base64 encoded string
When I decode the string this way:
>>> import base64
>>> base64.b64decode("XH13fXM=")
'\\}w}s'
The output is fine.
But when i use it like this:
>>> d = base64.b64decode("XH13fXM=")
>>> print d
\}w}s
some characters are missing
Can anyone advise ?
Thank you in advance.
| [
"It is just a matter of presentation:\n>>> '\\\\}w}s'\n'\\\\}w}s'\n>>> print(_, len(_))\n\\}w}s 5\n\nThis string has 5 characters. When you use it in code you need to escape backslash, or use raw string literals:\n>>> r'\\}w}s'\n'\\\\}w}s'\n>>> r'\\}w}s' == '\\\\}w}s'\nTrue\n\n",
"When you print a string, the cha... | [
3,
1
] | [] | [] | [
"python",
"string"
] | stackoverflow_0003346824_python_string.txt |
Q:
Set custom 'name' attribute for RadioSelect in Django
I'm trying to set custom 'name' attribute in django form.
I've been trying this kind of approach:
class BaseQuestionForm(forms.Form):
question_id = forms.CharField(widget=forms.HiddenInput)
answer = forms.ChoiceField(choices = [ ... ], widget=forms.RadioSelect)
and then setting the 'name'-attr on answer with:
form.fields['answer'].widget.name = 'new_name'
But this does not work, and name is always set to 'answer' as in field name. Is there some way to do this?
A:
First try:
print form.fields['answer'].widget.name
I believe widget doesn't have a name (ok, I am even pretty sure ;-)).
To achieve what you want, you would have to:
form.fields['new_name'] = form.fields['answer']
del form.fields['answer']
This however will move new_name field to the bottom of fields if you use simply {{ form }} in the template (this dictionary is ordered). Django builds the form fields names in template using names of the keys.
| Set custom 'name' attribute for RadioSelect in Django | I'm trying to set custom 'name' attribute in django form.
I've been trying this kind of approach:
class BaseQuestionForm(forms.Form):
question_id = forms.CharField(widget=forms.HiddenInput)
answer = forms.ChoiceField(choices = [ ... ], widget=forms.RadioSelect)
and then setting the 'name'-attr on answer with:
form.fields['answer'].widget.name = 'new_name'
But this does not work, and name is always set to 'answer' as in field name. Is there some way to do this?
| [
"First try:\nprint form.fields['answer'].widget.name\n\nI believe widget doesn't have a name (ok, I am even pretty sure ;-)).\nTo achieve what you want, you would have to:\nform.fields['new_name'] = form.fields['answer']\ndel form.fields['answer']\n\nThis however will move new_name field to the bottom of fields if ... | [
1
] | [] | [] | [
"django",
"django_forms",
"python"
] | stackoverflow_0003346373_django_django_forms_python.txt |
Q:
Geopy in Django: JSONDecodeError
I have followed the tutorials in http://code.google.com/p/geopy/wiki/GettingStarted
This works fine:
g = geocoders.Google(resource='maps')
I want to use json as the output format because I want to handle the results in javascript.
BUT everytime I use:
g = geocoders.Google(resource='maps', output_format='json')
I get the error below:
Environment:
Request Method: GET
Request URL: http://localhost:8000/business/view/13/
Django Version: 1.2.1
Python Version: 2.5.1
Installed Applications:
['django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'registration',
'pley.business',
'pley.review',
'django.contrib.admin',
'django.contrib.webdesign']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware')
Traceback:
File "C:\Python25\Lib\site-packages\django\core\handlers\base.py" in get_response
100. response = callback(request, *callback_args, **callback_kwargs)
File "C:\django\pley\..\pley\business\views.py" in business_view
57. for place, (lat, lng) in g.geocode(string_location,exactly_one=False):
File "c:\python25\lib\site-packages\geopy-0.94-py2.5.egg\geopy\geocoders\google.py" in geocode
66. return self.geocode_url(url, exactly_one)
File "c:\python25\lib\site-packages\geopy-0.94-py2.5.egg\geopy\geocoders\google.py" in geocode_url
73. return dispatch(page, exactly_one)
File "c:\python25\lib\site-packages\geopy-0.94-py2.5.egg\geopy\geocoders\google.py" in parse_json
123. json = simplejson.loads(page)
File "C:\Python25\lib\site-packages\simplejson-2.1.1-py2.5-win32.egg\simplejson\__init__.py" in loads
384. return _default_decoder.decode(s)
File "C:\Python25\lib\site-packages\simplejson-2.1.1-py2.5-win32.egg\simplejson\decoder.py" in decode
402. obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Python25\lib\site-packages\simplejson-2.1.1-py2.5-win32.egg\simplejson\decoder.py" in raw_decode
420. raise JSONDecodeError("No JSON object could be decoded", s, idx)
Exception Type: JSONDecodeError at /business/view/13/
Exception Value: No JSON object could be decoded: line 1 column 0 (char 0)
Any advice / reply would be greatly appreciated.
Thanks!
A:
Did you read the comments on the geopy page?
Comment by gregor.horvath, Sep 11,
2009
Be aware that although there are
different output_format parameters for
the geocoders, the actual output
format of the geocode function is
always the same:
location, (latttude, longitude)
The output_format refers only to the
output from the geocoder service
(Google etc.), not the output of the
geocode function.
| Geopy in Django: JSONDecodeError | I have followed the tutorials in http://code.google.com/p/geopy/wiki/GettingStarted
This works fine:
g = geocoders.Google(resource='maps')
I want to use json as the output format because I want to handle the results in javascript.
BUT everytime I use:
g = geocoders.Google(resource='maps', output_format='json')
I get the error below:
Environment:
Request Method: GET
Request URL: http://localhost:8000/business/view/13/
Django Version: 1.2.1
Python Version: 2.5.1
Installed Applications:
['django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'registration',
'pley.business',
'pley.review',
'django.contrib.admin',
'django.contrib.webdesign']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware')
Traceback:
File "C:\Python25\Lib\site-packages\django\core\handlers\base.py" in get_response
100. response = callback(request, *callback_args, **callback_kwargs)
File "C:\django\pley\..\pley\business\views.py" in business_view
57. for place, (lat, lng) in g.geocode(string_location,exactly_one=False):
File "c:\python25\lib\site-packages\geopy-0.94-py2.5.egg\geopy\geocoders\google.py" in geocode
66. return self.geocode_url(url, exactly_one)
File "c:\python25\lib\site-packages\geopy-0.94-py2.5.egg\geopy\geocoders\google.py" in geocode_url
73. return dispatch(page, exactly_one)
File "c:\python25\lib\site-packages\geopy-0.94-py2.5.egg\geopy\geocoders\google.py" in parse_json
123. json = simplejson.loads(page)
File "C:\Python25\lib\site-packages\simplejson-2.1.1-py2.5-win32.egg\simplejson\__init__.py" in loads
384. return _default_decoder.decode(s)
File "C:\Python25\lib\site-packages\simplejson-2.1.1-py2.5-win32.egg\simplejson\decoder.py" in decode
402. obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Python25\lib\site-packages\simplejson-2.1.1-py2.5-win32.egg\simplejson\decoder.py" in raw_decode
420. raise JSONDecodeError("No JSON object could be decoded", s, idx)
Exception Type: JSONDecodeError at /business/view/13/
Exception Value: No JSON object could be decoded: line 1 column 0 (char 0)
Any advice / reply would be greatly appreciated.
Thanks!
| [
"Did you read the comments on the geopy page?\n\nComment by gregor.horvath, Sep 11,\n 2009\nBe aware that although there are\n different output_format parameters for\n the geocoders, the actual output\n format of the geocode function is\n always the same:\nlocation, (latttude, longitude)\nThe output_format re... | [
0
] | [] | [] | [
"django",
"geopy",
"python",
"simplejson"
] | stackoverflow_0003344747_django_geopy_python_simplejson.txt |
Q:
Can a process running a 32-bit compiled binary use more than 4GB of memory?
Is it possible for a single process running a 32-bit compiled version of python in Snow Leopard (64-bit machine) to appear to consume > 4GB (say 5.4GB) of virtual memory as seen by the top command?
I did a file ...python to see that the binary was not x86, yet it appeared to be consuming over 5GB of memory.
My guess is that the libraries that were used (RPy) were 'mmap'ing chunks of data, and the in-memory cache was appearing under the memory footprint of my process.
Or maybe I haven't verified that the Python binaries were 32-bit. Or maybe there's some 32-bit/64-bit commingling going on (libffi?).
Totally confused.
A:
No, it's physically impossible. That doesn't stop the OS assigning more than it can use due to alignment and fragmentation, say, it could have a whole page and not actually map in all of it. However it's impossible to actually use over 4GB for any process, and most likely substantially less than that for kernel space.
A:
It is possible if the process is using some kind of insane long/far/extended pointers and mapping data into and out of a 32-bit address space as it needs it, but at that point it hardly qualifies as 32-bit anyway. (Python most definitely does not do this, so @DeadMG's answer is almost certainly what is actually happening.)
| Can a process running a 32-bit compiled binary use more than 4GB of memory? | Is it possible for a single process running a 32-bit compiled version of python in Snow Leopard (64-bit machine) to appear to consume > 4GB (say 5.4GB) of virtual memory as seen by the top command?
I did a file ...python to see that the binary was not x86, yet it appeared to be consuming over 5GB of memory.
My guess is that the libraries that were used (RPy) were 'mmap'ing chunks of data, and the in-memory cache was appearing under the memory footprint of my process.
Or maybe I haven't verified that the Python binaries were 32-bit. Or maybe there's some 32-bit/64-bit commingling going on (libffi?).
Totally confused.
| [
"No, it's physically impossible. That doesn't stop the OS assigning more than it can use due to alignment and fragmentation, say, it could have a whole page and not actually map in all of it. However it's impossible to actually use over 4GB for any process, and most likely substantially less than that for kernel sp... | [
2,
1
] | [] | [] | [
"i386",
"memory_management",
"osx_snow_leopard",
"python",
"x86_64"
] | stackoverflow_0003330643_i386_memory_management_osx_snow_leopard_python_x86_64.txt |
Q:
How can one use the logging module in python with the unittest module?
I would like to use the python logging module to log all of the output from unittest so that I can incorporate it into a testing framework I am trying to write. The goal of this is to run the tests with 2 sets of output, one with simple output that tells the test case steps and a more debug level output so that when things go wrong we have as much info as possible. The output would be placed into two files, one that I could email out to people, and another kept in case of failures. I noticed that the TextTestRunner can use a stream, could this be used with the logging module? I'm planning on using some of the new features in python 2.7.
A:
You could, but I'm not sure it's your best approach.
For this approach, you would:
Instantiate an in-memory stream that can be used by TextTestRunner. This is the sort of thing io.StringIO would be nearly perfect for, except that it works with Unicode input only, and I'm not sure that TextTestRunner writes Unicode properly to the stream. Your other option would be to code up your own in-memory stream that serves your purpose, perhaps wrapping StringIO with an encoder.
Create your own TextTestRunner and initialize it with this in-memory stream.
Create a class that reads from the stream and writes it to the log.
This could be as simple as:
class StreamLogger(object):
    def __init__(self, input_stream, output_logger):
        self.input_stream = input_stream
        self.output_logger = output_logger

    def run(self):
        while True:
            line = self.input_stream.readline()
            if not line:
                break
            self.output_logger.error(line)
Problems with this approach:
You don't have much if any flexibility in directing different parts of TextTestRunner output to different log levels.
TextTestRunner, if misconfigured, will write a bunch of stuff that you probably don't want. The default verbosity is 1, which will write progress dots while you're testing... which will probably only get in your way in the logging output.
If you do this naïvely, you'll call stream_logger.run() only after you've written all your output to the stream. Thus, you won't get your logging output in real time, and your logging timestamps will be useless. You could solve this by spawning a separate thread for reading, for example, but then you'd need to choose/roll a stream that can handle a reader and writer thread working simultaneously, or fork a process and ditch the in-memory stream, or something relatively complicated.
The approach I'd suggest instead is to forego streams and simply roll your own test runner – called, say, LoggingTestRunner – that writes the test output to the logger in precisely the way you want it output. This will allow you to avoid all three of these problems.
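A minimal sketch of routing TextTestRunner's output straight into a logger via a file-like shim (LogStream is an invented name; it forwards each completed line as it is written, so there is no after-the-fact replay, though this simple version sends everything to a single log level):

```python
import logging
import unittest

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("tests")

class LogStream(object):
    """File-like shim: TextTestRunner writes, we forward lines to a logger."""
    def __init__(self, logger):
        self.logger = logger
        self.buf = ""

    def write(self, text):
        self.buf += text
        while "\n" in self.buf:                      # emit one log record per line
            line, self.buf = self.buf.split("\n", 1)
            if line.strip():
                self.logger.info(line)

    def flush(self):
        pass

class Example(unittest.TestCase):
    def test_ok(self):
        self.assertEqual(1 + 1, 2)

runner = unittest.TextTestRunner(stream=LogStream(log), verbosity=2)
result = runner.run(unittest.TestLoader().loadTestsFromTestCase(Example))
```

A full LoggingTestRunner, as suggested above, would instead format each event itself and could pick per-event log levels.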
| How can one use the logging module in python with the unittest module? | I would like to use the python logging module to log all of the output from unittest so that I can incorporate it into a testing framework I am trying to write. The goal of this is to run the tests with 2 sets of output, one with simple output that tells the test case steps and a more debug level output so that when things go wrong we have as much info as possible. The output would be placed into two files, one that I could email out to people, and another kept in case of failures. I noticed that the TextTestRunner can use a stream, could this be used with the logging module? I'm planning on using some of the new features in python 2.7.
| [
"You could, but I'm not sure it's your best approach.\nFor this approach, you would:\n\nInstantiate an in-memory stream that can be used by TextTestRunner. This is the sort of thing io.StringIO would be nearly perfect for, except that it works with Unicode input only, and I'm not sure that TextTestRunner writes Uni... | [
1
] | [] | [] | [
"logging",
"python",
"unit_testing"
] | stackoverflow_0003347019_logging_python_unit_testing.txt |
Q:
python -- for with an if statement
I don't understand why, when I run my code, the for-each loop under the if statement isn't run, even when the number of found is greater than 0!
def findpattern(commit_msg):
pattern = re.compile("\w\w*-\d\d*")
group = pattern.finditer(commit_msg)
found = getIterLength(group)
print found
if found > 0:
issues = 0
for match in group:
print " print matched issues:"
auth = soap.login(jirauser,passwd)
print match.group(0)
getIssue(auth,match.group(0))
issues = issues + 1
else:
sys.exit("No issue patterns found.")
print "Retrieved issues: " + str(issues)
Any help would be appreciated, I have been banging my head on this for an hour.
A:
Your getIterLength() function is finding the length by exhausting the iterator returned by finditer(). You would then need a new iterator instance for the for loop. Instead, I would restructure your code like this:
def findpattern(commit_msg):
pattern = re.compile("\w\w*-\d\d*")
group = pattern.finditer(commit_msg)
found = 0
issues = 0
for match in group:
print " print matched issues:"
auth = soap.login(jirauser,passwd)
print match.group(0)
getIssue(auth,match.group(0))
issues = issues + 1
found += 1
if found == 0:
sys.exit("No issue patterns found.")
print "Retrieved issues: " + str(issues)
OR, you could use the findall() method instead of finditer() to give you a list (which is an iterable, not an iterator) on which you can run len(group) to get the size and then use it to iterate over in your for loop.
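A hedged sketch of the findall() variant (the commit message and issue keys are made up, and the pattern is the equivalent \w+-\d+ spelling of the original \w\w*-\d\d*; the soap/JIRA calls are omitted since they depend on the poster's environment):

```python
import re

pattern = re.compile(r'\w+-\d+')
# findall() returns a plain list, so it can be measured with len()
# and iterated as many times as needed, unlike a finditer() iterator.
matches = pattern.findall('Fixed ABC-123 and DEF-45')
if len(matches) == 0:
    raise SystemExit('No issue patterns found.')
for match in matches:
    print(match)
print('Retrieved issues: %d' % len(matches))
```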
A:
Check your code formatting: it looks like under the for you have a double tab instead of a single tab. Remember, Python is very picky about indentation.
| python -- for with an if statement | I don't understand why, when I run my code, the for loop under the if statement isn't run, even when the value of found is greater than 0!
def findpattern(commit_msg):
pattern = re.compile("\w\w*-\d\d*")
group = pattern.finditer(commit_msg)
found = getIterLength(group)
print found
if found > 0:
issues = 0
for match in group:
print " print matched issues:"
auth = soap.login(jirauser,passwd)
print match.group(0)
getIssue(auth,match.group(0))
issues = issues + 1
else:
sys.exit("No issue patterns found.")
print "Retrieved issues: " + str(issues)
Any help would be appreciated, I have been banging my head on this for an hour.
| [
"Your getIterLength() function is finding the length by exhausting the iterator returned by finditer(). You would then need a new iterator instance for the for loop. Instead, I would restructure your code like this:\ndef findpattern(commit_msg):\n pattern = re.compile(\"\\w\\w*-\\d\\d*\")\n group = pattern.... | [
8,
1
] | [] | [] | [
"jira",
"python"
] | stackoverflow_0003347693_jira_python.txt |
Q:
Using urllib2 for posting data, following redirects and maintaining cookies
I am using urllib2 in Python to post login data to a web site.
After successful login, the site redirects my request to another page. Can someone provide a simple code sample on how to do this in Python with urllib2? I guess I will need cookies also to be logged in when I get redirected to another page. Right?
Thanks a lot in advance.
A:
First, get mechanize: http://wwwsearch.sourceforge.net/mechanize/
You could do this kind of stuff with just urllib2, but you will be writing tons of boilerplate code, and it will be buggy.
Then:
import mechanize
br = mechanize.Browser()
br.open('http://somesite.com/account/signin/')
br.select_form('loginForm')
br['username'] = 'jekyll'
br['password'] = 'bananas'
br.submit()
# At this point, you're logged in, redirected, and the
# br object has the cookies and all that.
br.geturl() # e.g. http://somesite.com/loggedin/
Then you can use the Browser object br and do whatever you have to do, click on links, etc. Check the samples on the mechanize site
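If you would rather stay with the standard library as the question asks, the same flow can be sketched with a cookie-aware opener. This is written in Python 3 naming (urllib.request / http.cookiejar; in Python 2 the same pieces live in urllib2 and cookielib), and the login URL and form field names are made-up assumptions:

```python
import urllib.parse
import urllib.request
from http.cookiejar import CookieJar

def make_cookie_opener():
    # An opener built with HTTPCookieProcessor stores cookies in the jar
    # and re-sends them on later requests, including across redirects.
    jar = CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    return opener, jar

def build_login_request(url, username, password):
    # Attaching a bytes body as 'data' turns the request into a POST.
    body = urllib.parse.urlencode({'username': username, 'password': password})
    return urllib.request.Request(url, data=body.encode('utf-8'))

opener, jar = make_cookie_opener()
request = build_login_request('http://somesite.com/account/signin/',
                              'jekyll', 'bananas')
# opener.open(request) would perform the POST, follow the post-login
# redirect (the redirect handler is installed by default), and leave the
# session cookies in 'jar' for subsequent opener.open() calls.
print(request.get_method())
```

After opener.open(request), response.geturl() tells you where the redirect landed, much like br.geturl() above.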
| Using urllib2 for posting data, following redirects and maintaining cookies | I am using urllib2 in Python to post login data to a web site.
After successful login, the site redirects my request to another page. Can someone provide a simple code sample on how to do this in Python with urllib2? I guess I will need cookies also to be logged in when I get redirected to another page. Right?
Thanks a lot in advance.
| [
"First, get mechanize: http://wwwsearch.sourceforge.net/mechanize/\nYou could do this kind of stuff with just urllib2, but you will be writing tons of boilerplate code, and it will be buggy.\nThen:\nimport mechanize\n\nbr = mechanize.Browser()\nbr.open('http://somesite.com/account/signin/')\n\nbr.select_form('login... | [
6
] | [] | [] | [
"python",
"urllib2"
] | stackoverflow_0003346960_python_urllib2.txt |
Q:
script as module and executable
I have a script in python that can be invoked from the command-line and uses optparse.
script -i arg1 -j arg2
In this case I use (options, args) = parser.parse_args() to create options then use options.arg1 to get arguments.
But I also want it to be importable as a module.
from script import *
function(arg1=arg1, arg2=arg2)
I've managed to do this using a really lame solution: by providing a dummy object.
def main(**kwargs):
class Options:
''' dummy object '''
def __init__(self):
pass
options = Options
for k in kwargs:
setattr(options, k, kwargs[k])
The rest of the script doesn't know the difference but I think this solution is fugly.
Is there a better solution to this?
A:
Separate the CLI from the workhorse class:
class Main(object):
def __init__(self,arg1,arg2):
...
def run(self):
pass
if __name__=='__main__':
import optparse
class CLI(object):
def parse_options(self):
usage = 'usage: %prog [options]'+__usage__
parser = optparse.OptionParser(usage=usage)
parser.add_option('-i', dest='arg1')
parser.add_option('-j', dest='arg2')
(self.opt, self.args) = parser.parse_args()
def run(self):
self.parse_options()
Main(self.opt.arg1,self.opt.arg2).run()
CLI().run()
A:
What you usually do is:
def foo():
print 'This is available when imported as a module.'
if __name__ == '__main__':
print 'This is executed when run as script, but not when imported.'
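Putting the two answers together: keep the real work in an ordinary function and confine option parsing to the __main__ block, so importers call the function directly and never touch the parser. A sketch (using argparse, the successor to optparse; the function and option names are illustrative):

```python
import argparse

def function(arg1, arg2):
    # The importable workhorse: no parsing, just plain parameters.
    return '%s and %s' % (arg1, arg2)

def main(argv=None):
    # argv=None lets callers and tests pass an explicit list
    # instead of relying on sys.argv.
    parser = argparse.ArgumentParser()
    parser.add_argument('-i', dest='arg1')
    parser.add_argument('-j', dest='arg2')
    options = parser.parse_args(argv)
    return function(options.arg1, options.arg2)

if __name__ == '__main__':
    # Parsed from an explicit list here for demonstration; a real script
    # would call main() with no arguments so it reads sys.argv.
    print(main(['-i', 'spam', '-j', 'eggs']))
```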
| script as module and executable |
I have a script in python that can be invoked from the command-line and uses optparse.
script -i arg1 -j arg2
In this case I use (options, args) = parser.parse_args() to create options then use options.arg1 to get arguments.
But I also want it to be importable as a module.
from script import *
function(arg1=arg1, arg2=arg2)
I've managed to do this using a really lame solution: by providing a dummy object.
def main(**kwargs):
class Options:
''' dummy object '''
def __init__(self):
pass
options = Options
for k in kwargs:
setattr(options, k, kwargs[k])
The rest of the script doesn't know the difference but I think this solution is fugly.
Is there a better solution to this?
| [
"Separate the CLI from the workhorse class:\nclass Main(object):\n def __init__(self,arg1,arg2):\n ...\n def run(self):\n pass\n\nif __name__=='__main__':\n import optparse\n class CLI(object):\n def parse_options(self):\n usage = 'usage: %prog [options]'+__usage__\n ... | [
6,
2
] | [] | [] | [
"python"
] | stackoverflow_0003348041_python.txt |
Q:
Common pitfalls in Python
Today I was bitten again by mutable default arguments after many years. I usually don't use mutable default arguments unless needed, but I think with time I forgot about that. Today in the application I added tocElements=[] in a PDF generation function's argument list and now "Table of Contents" gets longer and longer after each invocation of "generate pdf". :)
What else should I add to my list of things to MUST avoid?
Always import modules the same way, e.g. from y import x and import x are treated as different modules.
Do not use range in place of lists because range() will become an iterator anyway, the following will fail:
myIndexList = [0, 1, 3]
isListSorted = myIndexList == range(3) # will fail in 3.0
isListSorted = myIndexList == list(range(3)) # will not
Same thing can be mistakenly done with xrange:
myIndexList == xrange(3)
Be careful catching multiple exception types:
try:
raise KeyError("hmm bug")
except KeyError, TypeError:
print TypeError
This prints "hmm bug", though it is not a bug; it looks like we are catching exceptions of both types, but instead we are catching KeyError only as variable TypeError, use this instead:
try:
raise KeyError("hmm bug")
except (KeyError, TypeError):
print TypeError
A:
Don't use index to loop over a sequence
Don't :
for i in range(len(tab)) :
print tab[i]
Do :
for elem in tab :
print elem
For will automate most iteration operations for you.
Use enumerate if you really need both the index and the element.
for i, elem in enumerate(tab):
print i, elem
Be careful when using "==" to check against True or False
if (var == True) :
# this will execute if var is True or 1, 1.0, 1L
if (var != True) :
# this will execute if var is neither True nor 1
if (var == False) :
# this will execute if var is False or 0 (or 0.0, 0L, 0j)
if (var == None) :
# only execute if var is None
if var :
# execute if var is a non-empty string/list/dictionary/tuple, non-0, etc
if not var :
# execute if var is "", {}, [], (), 0, None, etc.
if var is True :
# only execute if var is boolean True, not 1
if var is False :
# only execute if var is boolean False, not 0
if var is None :
# same as var == None
Do not check if you can, just do it and handle the error
Pythonistas usually say "It's easier to ask for forgiveness than permission".
Don't :
if os.path.isfile(file_path) :
file = open(file_path)
else :
# do something
Do :
try :
file = open(file_path)
except OSError as e:
# do something
Or even better, with Python 2.6+ / 3:
with open(file_path) as file:
    # do something with the file
It is much better because it is much more general. You can apply "try / except" to almost anything. You don't need to care about what to do to prevent it, just about the error you are risking.
Do not check against type
Python is dynamically typed, therefore checking for type makes you lose flexibility. Instead, use duck typing by checking behavior. E.g., you expect a string in a function, then use str() to convert any object into a string. You expect a list, use list() to convert any iterable into a list.
Don't :
def foo(name) :
if isinstance(name, str) :
print name.lower()
def bar(listing) :
if isinstance(listing, list) :
listing.extend((1, 2, 3))
return ", ".join(listing)
Do :
def foo(name) :
print str(name).lower()
def bar(listing) :
l = list(listing)
l.extend((1, 2, 3))
return ", ".join(l)
Using the last way, foo will accept any object. Bar will accept strings, tuples, sets, lists and much more. Cheap DRY :-)
Don't mix spaces and tabs
Just don't. You would cry.
Use object as first parent
This is tricky, but it will bite you as your program grows. There are old-style and new-style classes in Python 2.x. The old ones are, well, old. They lack some features and can behave awkwardly with inheritance. To be usable, any of your classes must be "new style". To do so, make it inherit from object:
Don't :
class Father :
pass
class Child(Father) :
pass
Do :
class Father(object) :
pass
class Child(Father) :
pass
In Python 3.x all classes are new style, so declaring class Father: is fine.
Don't initialize class attributes outside the __init__ method
People coming from other languages find it tempting because that is how you do it in Java or PHP. You write the class name, then list your attributes and give them a default value. It seems to work in Python; however, this doesn't work the way you think.
Doing that will set up class attributes (static attributes); then when you try to read the attribute on an object, Python gives you the instance attribute if one exists, and otherwise falls back to the class attribute.
It implies two big hazards :
If the class attribute is changed, then the initial value is changed.
If you set a mutable object as a default value, you'll get the same object shared across instances.
Don't (unless you want static) :
class Car(object):
    color = "red"
    wheels = [Wheel(), Wheel(), Wheel(), Wheel()]
Do :
class Car(object):
    def __init__(self):
        self.color = "red"
        self.wheels = [Wheel(), Wheel(), Wheel(), Wheel()]
A:
When you need a population of arrays you might be tempted to type something like this:
>>> a=[[1,2,3,4,5]]*4
And sure enough it will give you what you expect when you look at it
>>> from pprint import pprint
>>> pprint(a)
[[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5]]
But don't expect the elements of your population to be seperate objects:
>>> a[0][0] = 2
>>> pprint(a)
[[2, 2, 3, 4, 5],
[2, 2, 3, 4, 5],
[2, 2, 3, 4, 5],
[2, 2, 3, 4, 5]]
Unless this is what you need...
It is worth mentioning a workaround:
a = [[1,2,3,4,5] for _ in range(4)]
A:
Python Language Gotchas -- things that fail in very obscure ways
Using mutable default arguments.
Leading zeroes mean octal. 09 is a very obscure syntax error in Python 2.x
Misspelling overridden method names in a superclass or subclass. The superclass misspelling mistake is worse, because none of the subclasses override it correctly.
Python Design Gotchas
Spending time on introspection (e.g. trying to automatically determine types or superclass identity or other stuff). First, it's obvious from reading the source. More importantly, time spent on weird Python introspection usually indicates a fundamental failure to grasp polymorphism. 80% of the Python introspection questions on SO are failure to get Polymorphism.
Spending time on code golf. Just because your mental model of your application is four keywords ("do", "what", "I", "mean"), doesn't mean you should build a hyper-complex introspective decorator-driven framework to do that. Python allows you to take DRY to a level that is silliness. The rest of the Python introspection questions on SO attempts to reduce complex problems to code golf exercises.
Monkeypatching.
Failure to actually read through the standard library, and reinventing the wheel.
Conflating interactive type-as-you go Python with a proper program. While you're typing interactively, you may lose track of a variable and have to use globals(). Also, while you're typing, almost everything is global. In proper programs, you'll never "lose track of" a variable, and nothing will be global.
A:
Mutating a default argument:
def foo(bar=[]):
bar.append('baz')
return bar
The default value is evaluated only once, and not every time the function is called. Repeated calls to foo() would return ['baz'], ['baz', 'baz'], ['baz', 'baz', 'baz'], ...
If you want to mutate bar do something like this:
def foo(bar=None):
if bar is None:
bar = []
bar.append('baz')
return bar
Or, if you like arguments to be final:
def foo(bar=[]):
not_bar = bar[:]
not_bar.append('baz')
return not_bar
A:
I don't know whether this is a common mistake, but while Python doesn't have increment and decrement operators, double signs are allowed, so
++i
and
--i
is syntactically correct code, but doesn't do anything "useful" or that you may be expecting.
A:
Rolling your own code before looking in the standard library. For example, writing this:
def repeat_list(items):
while True:
for item in items:
yield item
When you could just use this:
from itertools import cycle
Examples of frequently overlooked modules (besides itertools) include:
optparse for creating command line parsers
ConfigParser for reading configuration files in a standard manner
tempfile for creating and managing temporary files
shelve for storing Python objects to disk, handy when a full fledged database is overkill
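For instance, the cycle() version above is a one-liner to use, and pairs naturally with itertools.islice() to take a finite slice of the infinite stream:

```python
from itertools import cycle, islice

# cycle() repeats the sequence endlessly; islice() caps how much we take.
first_five = list(islice(cycle(['a', 'b']), 5))
print(first_five)
```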
A:
Avoid using keywords as your own identifiers.
Also, it's always good to not use from somemodule import *.
A:
Surprised that nobody said this:
Mix tab and spaces when indenting.
Really, it's a killer. Believe me. In particular, if it runs.
A:
Not using functional tools. This isn't just a mistake from a style standpoint, it's a mistake from a speed standpoint because a lot of the functional tools are optimized in C.
This is the most common example:
temporary = []
for item in itemlist:
temporary.append(somefunction(item))
itemlist = temporary
The correct way to do it:
itemlist = map(somefunction, itemlist)
The just as correct way to do it:
itemlist = [somefunction(x) for x in itemlist]
And if you only need the processed items available one at a time, rather than all at once, you can save memory and improve speed by using the iterable equivalents
# itertools-based iterator
itemiter = itertools.imap(somefunction, itemlist)
# generator expression-based iterator
itemiter = (somefunction(x) for x in itemlist)
A:
If you're coming from C++, realize that variables declared in a class definition are class attributes, shared like static members. You can initialize non-static (per-instance) members in the __init__ method.
Example:
class MyClass:
static_member = 1
def __init__(self):
self.non_static_member = random()
A:
Code Like a Pythonista: Idiomatic Python
A:
Normal copying (assigning) is done by reference, so filling a container by adapting the same object and inserting, ends up with a container with references to the last added object.
Use copy.deepcopy instead.
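A short sketch of the difference (the dict contents are arbitrary):

```python
import copy

template = {'tags': []}

shared = [template] * 3                                    # three references, one dict
independent = [copy.deepcopy(template) for _ in range(3)]  # three separate dicts

shared[0]['tags'].append('x')
independent[0]['tags'].append('x')

print(shared[1]['tags'])       # the append leaked into every "copy"
print(independent[1]['tags'])  # untouched
```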
A:
Importing re and using the full regular expression approach to string matching/transformation, when perfectly good string methods exist for every common operation (e.g. capitalisation, simple matching/searching).
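A few of those everyday operations, done with plain string methods rather than re:

```python
text = 'Hello World'

print(text.lower())                    # simple case-folding
print(text.startswith('Hello'))        # anchored matching
print(text.replace('World', 'there'))  # literal substitution
print('World' in text)                 # substring search
```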
A:
Using the %s formatter in error messages. In almost every circumstance, %r should be used.
For example, imagine code like this:
try:
get_person(person)
except NoSuchPerson:
logger.error("Person %s not found." %(person))
Printed this error:
ERROR: Person wolever not found.
It's impossible to tell if the person variable is the string "wolever", the unicode string u"wolever" or an instance of the Person class (which has __str__ defined as def __str__(self): return self.name). Whereas, if %r was used, there would be three different error messages:
...
logger.error("Person %r not found." %(person))
Would produce the much more helpful errors:
ERROR: Person 'wolever' not found.
ERROR: Person u'wolever' not found.
ERROR: Person not found.
Another good reason for this is that paths are a whole lot easier to copy/paste. Imagine:
try:
stuff = open(path).read()
except IOError:
logger.error("Could not open %s" %(path))
If path is some path/with 'strange' "characters", the error message will be:
ERROR: Could not open some path/with 'strange' "characters"
Which is hard to visually parse and hard to copy/paste into a shell.
Whereas, if %r is used, the error would be:
ERROR: Could not open 'some path/with \'strange\' "characters"'
Easy to visually parse, easy to copy-paste, all around better.
A:
don't write large output messages to standard output
strings are immutable - build them with str.join() rather than repeated use of the + operator.
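A sketch of the join pattern: collect the pieces in a list and join once at the end, instead of rebuilding a new string on every + inside the loop:

```python
parts = []
for i in range(5):
    parts.append(str(i))     # cheap list append per piece
result = ''.join(parts)      # one string allocation at the end
print(result)
```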
read those articles:
python gotchas
things to avoid
Gotchas for Python users
Python Landmines
The last link is the original one; this SO question is a duplicate.
A:
I would stop using deprecated methods in 2.6, so that your app or script will be ready and easier to convert to Python 3.
A:
A bad habit I had to train myself out of was using X and Y or Z for inline logic.
Unless you can 100% always guarantee that Y will be a true value, even when your code changes in 18 months time, you set yourself up for some unexpected behaviour.
Thankfully, in later versions you can use Y if X else Z.
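The failure mode, for the record: when Y is falsy, X and Y or Z silently falls through to Z. A minimal demonstration:

```python
x = True

broken = x and 0 or 1   # intended "0 when x is true", but 0 is falsy, so we get 1
fixed = 0 if x else 1   # the conditional expression has no such trap

print(broken, fixed)
```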
A:
Some personal opinions, but I find it best NOT to:
use deprecated modules (use warnings for them)
overuse classes and inheritance (typical of static languages legacy maybe)
explicitly use declarative algorithms (as iteration with for vs use of
itertools)
reimplement functions from the standard lib, "because I don't need all of those features"
using features for the sake of it (reducing compatibility with older Python versions)
using metaclasses when you really don't have to and more generally make things too "magic"
avoid using generators
(more personal) try to micro-optimize CPython code on a low-level basis. Better spend time on algorithms and then optimize by making a small C shared lib called by ctypes (it's so easy to gain 5x perf boosts on an inner loop)
use unnecessary lists when iterators would suffice
code a project directly for 3.x before the libs you need are all available (this point may be a bit controversial now!)
A:
++n and --n may not work as expected by people coming from a C or Java background.
++n is positive of a positive number, which is simply n.
--n is negative of a negative number, which is simply n.
A:
I've started learning Python as well, and one of the biggest mistakes I made was constantly using a C++/C#-style indexed "for" loop. Python doesn't have a for(i; i < length; i++) type loop, and for a good reason - most of the time there are better ways to do the same thing.
Example:
I had a method that iterated over a list and returned the indexes of selected items:
for i in range(len(myList)):
if myList[i].selected:
retVal.append(i)
Instead Python has list comprehension that solves the same problem in a more elegant and easy to read way:
retVal = [index for index, item in enumerate(myList) if item.selected]
A:
import this
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than right now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
import not_this
Write ugly code.
Write implicit code.
Write complex code.
Write nested code.
Write dense code.
Write unreadable code.
Write special cases.
Strive for purity.
Ignore errors and exceptions.
Write optimal code before releasing.
Every implementation needs a flowchart.
Don't use namespaces.
A:
Never assume that having a multi-threaded Python application and an SMP-capable machine (for instance one equipped with a multi-core CPU) will give you the benefit of introducing true parallelism into your application. Most likely it will not, because of the GIL (Global Interpreter Lock), which synchronizes your application at the byte-code interpreter level.
There are some workarounds like taking advantage of SMP by putting the concurrent code in C API calls or using multiple processes (instead of threads) via wrappers (for instance like the one available at http://www.parallelpython.org) but if one needs true multi-threading in Python one should look at things like Jython, IronPython etc. (GIL is a feature of the CPython interpreter so other implementations are not affected).
According to Python 3000 FAQ (available at Artima) the above still stands even for the latest Python versions.
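A minimal sketch of the multiple-process workaround using the standard multiprocessing module (Python 3 syntax; the workload is arbitrary): each worker is a separate process with its own interpreter and its own GIL, so CPU-bound work can truly run in parallel.

```python
from multiprocessing import Pool

def cpu_bound(n):
    # Pure-Python CPU work: threads would serialize on the GIL,
    # but separate processes each get their own interpreter.
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    with Pool(processes=2) as pool:
        # map() farms the calls out to the worker processes.
        print(pool.map(cpu_bound, [10000, 20000]))
```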
A:
Somewhat related to the default mutable argument, how one checks for the "missing" case results in differences when an empty list is passed:
def func1(toc=None):
if not toc:
toc = []
toc.append('bar')
def func2(toc=None):
if toc is None:
toc = []
toc.append('bar')
def demo(toc, func):
print func.__name__
print ' before:', toc
func(toc)
print ' after:', toc
demo([], func1)
demo([], func2)
Here's the output:
func1
before: []
after: []
func2
before: []
after: ['bar']
A:
The very first mistake before you even start: don't be afraid of whitespace.
When you show someone a piece of Python code, they are impressed until you tell them that they have to indent correctly. For some reason, most people feel that a language shouldn't force a certain style on them while all of them will indent the code nonetheless.
A:
Common pitfall: default arguments are evaluated once:
def x(a, l=[]):
l.append(a)
return l
print x(1)
print x(2)
prints:
[1]
[1, 2]
i.e. you always get the same list.
A:
my_variable = <something>
...
my_varaible = f(my_variable)
...
use my_variable and thinking it contains the result from f, and not the initial value
Python won't warn you in any way that on the second assignment you misspelled the variable name and created a new one.
A:
You've mentioned default arguments... One that's almost as bad as mutable default arguments: default values which aren't None.
Consider a function which will cook some food:
def cook(breakfast="spam"):
arrange_ingredients_for(breakfast)
heat_ingredients_for(breakfast)
serve(breakfast)
Because it specifies a default value for breakfast, it is impossible for some other function to say "cook your default breakfast" without a special-case:
def order(breakfast=None):
if breakfast is None:
cook()
else:
cook(breakfast)
However, this could be avoided if cook used None as a default value:
def cook(breakfast=None):
if breakfast is None:
breakfast = "spam"
def order(breakfast=None):
cook(breakfast)
A good example of this is Django bug #6988. Django's caching module had a "save to cache" function which looked like this:
def set(key, value, timeout=0):
if timeout == 0:
timeout = settings.DEFAULT_TIMEOUT
_caching_backend.set(key, value, timeout)
But, for the memcached backend, a timeout of 0 means "never timeout"… Which, as you can see, would be impossible to specify.
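When None itself must remain a legal argument (or, as here, 0 must keep its meaning), a unique sentinel object avoids the collision entirely. A hedged sketch with a made-up default value:

```python
_MISSING = object()  # private sentinel: no caller-supplied value can equal it

def set_cache(key, value, timeout=_MISSING):
    if timeout is _MISSING:
        timeout = 300  # hypothetical site-wide default
    return (key, value, timeout)

print(set_cache('k', 'v'))     # default applied
print(set_cache('k', 'v', 0))  # 0 passes through: "never time out"
```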
A:
Don't modify a list while iterating over it.
odd = lambda x : bool(x % 2)
numbers = range(10)
for i in range(len(numbers)):
if odd(numbers[i]):
del numbers[i]
One common suggestion to work around this problem is to iterate over the list in reverse:
for i in range(len(numbers)-1, -1, -1):
if odd(numbers[i]):
del numbers[i]
But even better is to use a list comprehension to build a new list to replace the old:
numbers[:] = [n for n in numbers if not odd(n)]
A:
Creating a local module with the same name as one from the stdlib. This is almost always done by accident (as reported in this question), but usually results in cryptic error messages.
A:
Promiscuous Exception Handling
This is something that I see a surprising amount in production code and it makes me cringe.
try:
do_something() # do_something can raise a lot errors e.g. files, sockets
except:
pass # who cares we'll just ignore it
Was the exception the one you wanted to suppress, or is it more serious? But there are more subtle cases that can make you pull your hair out trying to figure them out.
try:
foo().bar().baz()
except AttributeError: # baz() may return None or an incompatible *duck type*
handle_no_baz()
The problem is foo or baz could be the culprits too. I think this can be more insidious in that this is idiomatic python where you are checking your types for proper methods. But each method call has chance to return something unexpected and suppress bugs that should be raising exceptions.
Knowing which exceptions a method can throw is not always obvious. For example, urllib and urllib2 use socket, which has its own exceptions that percolate up and rear their ugly head when you least expect it.
Exception handling is a productivity boon in handling errors over system level languages like C. But I have found suppressing exceptions improperly can create truly mysterious debugging sessions and take away a major advantage interpreted languages provide.
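One way to keep the chained-call case honest is to shrink the try block to the single call you actually expect to fail, so surprises elsewhere still surface. A sketch (the classes are illustrative):

```python
def baz_length(obj):
    try:
        result = obj.baz()   # the only call we expect might be missing
    except AttributeError:
        return None          # obj genuinely has no baz()
    # An AttributeError raised *inside* baz(), or by the code below,
    # is no longer swallowed by the handler above.
    return len(result)

class WithBaz(object):
    def baz(self):
        return [1, 2, 3]

print(baz_length(WithBaz()))  # the happy path
print(baz_length(object()))   # the duck-typing fallback
```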
A:
Similar to mutable default arguments is the mutable class attribute.
>>> class Classy:
... foo = []
... def add(self, value):
... self.foo.append(value)
...
>>> instance1 = Classy()
>>> instance2 = Classy()
>>> instance1.add("Foo!")
>>> instance2.foo
['Foo!']
Not what you expect.
A:
This has been mentioned already, but I'd like to elaborate a bit on class attribute mutability.
When you define a class attribute, instances do not get their own copy of it; reading the attribute on an instance simply falls back to the class attribute until you assign an instance attribute of the same name.
So if you have something like
class Test(object):
myAttr = 1
instA = Test()
instB = Test()
instB.myAttr = 2
It will behave as expected.
>>> instA.myAttr
1
>>> instB.myAttr
2
The problem comes when you have class attributes that are mutable. Because no copy is ever made, all instances share a reference to the very same object.
class Test(object):
myAttr=[1,2,3]
instA = Test()
instB = Test()
instB.myAttr[0]=2
>>> instA.myAttr
[2,2,3]
But plain assignment to the attribute creates a new instance attribute, so as long as you are assigning something new rather than mutating in place, you are OK.
You can get around this by making a deep copy of mutable variables during the init function
import copy
class Test(object):
myAttr = [1,2,3]
def __init__(self):
self.myAttr = copy.deepcopy(self.myAttr)
instA = Test()
instB = Test()
instB.myAttr[0] = 5
>>> instA.myAttr
[1,2,3]
>>> instB.myAttr
[5,2,3]
It might be possible to write a decorator that would automatically deepcopy all your class attributes during init, but I don't know offhand of one that is provided anywhere.
A:
Algorithm blogs has a good post about Python performance issues and how to avoid them:
10 Python Optimization Tips and Issues
A:
Class attributes
Some answers above are incorrect or unclear about class attributes.
They do not become instance attributes, but are readable using the same syntax as instance attributes. They can be changed by accessing them via the class name.
class MyClass:
attrib = 1 # class attributes named 'attrib'
another = 2 # and 'another'
def __init__(self):
self.instattr = 3 # creates instance attributes
self.attrib = 'instance'
mc0 = MyClass()
mc1 = MyClass()
print mc0.attrib # 'instance'
print mc0.another # '2'
MyClass.another = 5 # change class attributes
MyClass.attrib = 21 # <- masked by instance attribute of same name
print mc0.attrib # 'instance' -- unchanged instance attribute
print mc0.another # '5' -- changed class attribute
Class attributes can be used as sort of default values for instance attributes, masked later by instance attributes of the same name with a different value.
Intermediate scope local variables
A more difficult matter to understand is the scoping of variables in nested functions.
In the following example, x is readable and writable from anywhere, since it is declared global in each function. y belongs to 'outer': 'inner1' and 'inner2' can read it, but neither can rebind it - an assignment inside them either targets the global namespace (when declared global) or creates a new local variable, never the y in 'outer'.
x = 1
def outer():
    global x
    y = 2
    def inner1():
        global x, y
        y = y+1 # NameError when called: no global y exists to read
    def inner2():
        global x
        y = y+1 # UnboundLocalError when called: the assignment makes y local
Python 3 adds the nonlocal keyword for such 'outside this function but not global' cases. In Python 2.x, you are stuck with either making y global, or making it a mutable parameter to 'inner'.
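For completeness, the Python 3 keyword for this 'enclosing but not global' case is nonlocal, which rebinds the variable in the nearest enclosing function scope; a minimal demonstration:

```python
def outer():
    y = 2
    def inner():
        nonlocal y   # Python 3 only: target outer()'s y, not a global
        y = y + 1
    inner()
    return y

print(outer())
```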
| Common pitfalls in Python | Today I was bitten again by mutable default arguments after many years. I usually don't use mutable default arguments unless needed, but I think with time I forgot about that. Today in the application I added tocElements=[] in a PDF generation function's argument list and now "Table of Contents" gets longer and longer after each invocation of "generate pdf". :)
What else should I add to my list of things to MUST avoid?
Always import modules the same way, e.g. from y import x and import x are treated as different modules.
Do not use range in place of lists because range() will become an iterator anyway, the following will fail:
myIndexList = [0, 1, 3]
isListSorted = myIndexList == range(3) # will fail in 3.0
isListSorted = myIndexList == list(range(3)) # will not
Same thing can be mistakenly done with xrange:
myIndexList == xrange(3)
Be careful catching multiple exception types:
try:
raise KeyError("hmm bug")
except KeyError, TypeError:
print TypeError
This prints "hmm bug", though it is not a bug; it looks like we are catching exceptions of both types, but instead we are catching KeyError only as variable TypeError, use this instead:
try:
raise KeyError("hmm bug")
except (KeyError, TypeError):
print TypeError
| [
"Don't use index to loop over a sequence\nDon't :\nfor i in range(len(tab)) :\n print tab[i]\n\nDo :\nfor elem in tab :\n print elem\n\nFor will automate most iteration operations for you.\nUse enumerate if you really need both the index and the element.\nfor i, elem in enumerate(tab):\n print i, elem\n\n... | [
73,
39,
28,
27,
22,
18,
15,
14,
14,
13,
10,
9,
8,
8,
7,
6,
6,
5,
5,
5,
4,
4,
3,
3,
3,
3,
3,
3,
1,
1,
0,
0,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0001011431_python.txt |
Q:
py2exe on PIL ImageStat.Stat throws Exception: argument 2 must be ImagingCore, not ImagingCore
I'm trying to create a .exe from a python program using py2exe, but when I run the .exe I get a log file with
Exception in thread Thread-1:
Traceback (most recent call last):
File "threading.pyc", line 532, in __bootstrap_inner
File "threading.pyc", line 484, in run
File "webcam.py", line 66, in loop
File "ImageStat.pyc", line 50, in __init__
File "PIL\Image.pyc", line 990, in histogram
TypeError: argument 2 must be ImagingCore, not ImagingCore
Here's some code:
#webcam.py
cam = VideoCapture.Device()

def getImage():
    return cam.getImage()

...
camshot = grayscale(getImage())
lightCoords = []
level = camshot.getextrema()[1]-leniency
for p in camshot.getdata():
    if p>=level:
        lightCoords.append(255)
    else:
        lightCoords.append(0)
maskIm = new("L",res)
maskIm.putdata(lightCoords)
...
64 colorcamshot = getImage()
65 camshot = grayscale(colorcamshot)
66 brightness = ImageStat.Stat(camshot,maskIm).sum[0]/divVal
A:
Try importing PIL in your main thread before starting any worker threads. It looks like the same class has been imported twice, and type comparisons are acting wacky as a result.
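A self-contained sketch of why that fix works (a stdlib module stands in for PIL here, purely so the snippet runs on its own): a module imported in the main thread is cached in sys.modules, so worker threads that re-import it get the very same module object and type comparisons stay consistent:

```python
import threading

import json as shared_module   # stand-in for: from PIL import Image, ImageStat

results = []

def worker():
    import json as mod         # returns the cached entry from sys.modules
    results.append(mod is shared_module)

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(all(results))   # True: one module object shared by every thread
```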
| py2exe on PIL ImageStat.Stat throws Exception: argument 2 must be ImagingCore, not ImagingCore | I'm trying to create a .exe from a python program using py2exe, but when I run the .exe I get a log file with
Exception in thread Thread-1:
Traceback (most recent call last):
File "threading.pyc", line 532, in __bootstrap_inner
File "threading.pyc", line 484, in run
File "webcam.py", line 66, in loop
File "ImageStat.pyc", line 50, in __init__
File "PIL\Image.pyc", line 990, in histogram
TypeError: argument 2 must be ImagingCore, not ImagingCore
Here's some code:
#webcam.py
cam = VideoCapture.Device();
def getImage():
return cam.getImage();
...
camshot = grayscale(getImage());
lightCoords = [];
level = camshot.getextrema()[1]-leniency;
for p in camshot.getdata():
if p>=level:
lightCoords.append(255);
else:
lightCoords.append(0);
maskIm = new("L",res);
maskIm.putdata(lightCoords);
...
64 colorcamshot = getImage();
65 camshot = grayscale(colorcamshot);
66 brightness = ImageStat.Stat(camshot,maskIm).sum[0]/divVal;
| [
"Try importing PIL in your main thread before starting any worker threads. It looks like the same class has been imported twice, and type comparisons are acting wacky as a result.\n"
] | [
0
] | [] | [] | [
"py2exe",
"python",
"python_imaging_library"
] | stackoverflow_0003348079_py2exe_python_python_imaging_library.txt |
Q:
python: inserting a whole line into a list
import csv

with open('thefile.csv', 'rb') as f:
    data = list(csv.reader(f))

import collections
counter = collections.defaultdict(int)
for row in data:
    counter[row[10]] += 1

with open('/pythonwork/thefile_subset11.csv', 'w') as outfile:
    writer = csv.writer(outfile)
    sample_cutoff=500
    b[]
    for row in data:
        if counter[row[10]] >= sample_cutoff:
            writer.writerow(row)
In addition to writing the data to a file, I would like to insert it into a list b[].
Can I just do b.insert[row]?
A:
It's b.append(row), but otherwise yes. And instead of b[] you want b = []. Another way to do it would be to make the list first, and then just write each element of the list to the file:
b = [row for row in data if counter[row[10]] >= sample_cutoff]
map(writer.writerow, b)
A:
It's list.insert(index, item) or list.append(item).
Also, b[] is a name error and syntax error. I suppose you mean b = [].
A:
Yes, b = [] would support an insert method. The syntax is as follows.
b.insert(position_int, element)
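A tiny demonstration of both methods on a fresh list:

```python
b = []
b.append('row1')       # add at the end
b.append('row3')
b.insert(1, 'row2')    # add at index 1, shifting later items to the right
print(b)               # ['row1', 'row2', 'row3']
```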
| python: inserting a whole line into a list | import csv
with open('thefile.csv', 'rb') as f:
data = list(csv.reader(f))
import collections
counter = collections.defaultdict(int)
for row in data:
counter[row[10]] += 1
with open('/pythonwork/thefile_subset11.csv', 'w') as outfile:
writer = csv.writer(outfile)
sample_cutoff=500
b[]
for row in data:
if counter[row[10]] >= sample_cutoff:
writer.writerow(row)
in addition to writing the data to a file, i would like to insert it into a list b[]
can i just do b.insert[row] ?
| [
"It's b.append(row), but otherwise yes. And instead of b[] you want b = []. Another way to do it would be to make the list first, and then just write each element of the list to the file:\nb = [row for row in data if counter[row[10]] >= sample_cutoff]\nmap(writer.writerow, b)\n\n",
"It's list.insert(index, item) ... | [
2,
1,
1
] | [] | [] | [
"csv",
"python"
] | stackoverflow_0003348577_csv_python.txt |
Q:
How to write a JIT library?
I've browsed through many JIT libraries. But I'd like to learn how to write one.
Softwire looked nice. Though, what should the emitter interface do? Can I do something better than existing libraries? How do I support inline caching?
A:
I would recommend you join an existing team instead of starting from scratch. The PyPy team's work on this area is very interesting and is currently under development, so may be a good place to start and seek more information, and then perhaps help.
http://codespeak.net/pypy/dist/pypy/doc/jit/overview.html
http://codespeak.net/pypy/dist/pypy/doc/jit/pyjitpl5.html
http://codespeak.net/svn/pypy/extradoc/talk/icooolps2009/bolz-tracing-jit-final.pdf
Other good readings on the PyPy blog:
http://morepypy.blogspot.com/2010/05/efficient-and-elegant-regular.html
http://morepypy.blogspot.com/2010/06/jit-for-regular-expression-matching.html
http://morepypy.blogspot.com/search/label/jit
This may interest you also:
http://indefinitestudies.org/2010/02/08/creating-a-toy-virtual-machine-with-pypy/
| How to write a JIT library? | I've browsed through many JIT libraries. But I'd like to learn how to write one.
Softwire looked like nice. Though what the emitter interface should do? Can I do something better than existing libraries? How do I support inline caching?
| [
"I would recommend you join an existing team instead of starting from scratch. The PyPy team's work on this area is very interesting and is currently under development, so may be a good place to start and seek more information, and then perhaps help.\n\nhttp://codespeak.net/pypy/dist/pypy/doc/jit/overview.html\nhtt... | [
4
] | [] | [] | [
"c",
"compiler_construction",
"jit",
"python"
] | stackoverflow_0003292704_c_compiler_construction_jit_python.txt |
Q:
Cleanest way of choosing between two values in Python
Dicts in Python have a very nice method get:
# m is some dict
m.get(k,v) # if m[k] exists, returns that, otherwise returns v
Is there some way to do this for any value? For example, in Perl I could do this:
return $some_var or "some_var doesn't exist."
A:
The or operator in Python is guaranteed to return one of its operands: if the left expression evaluates to False, the right one is evaluated and returned.
Edit:
After re-reading your question, I noticed that I misunderstood it the first time. By using the locals() built-in function, you can use the get() method for variables, just like you use it for dicts (although it's neither very pretty nor pythonic), to check whether a value exists or not:
>>> locals().get('some_var', "some_var doesn't exist.")
"some_var doesn't exist."
>>> some_var = 0
>>> locals().get('some_var', "some_var doesn't exist.")
0
A:
And if you need to be able to handle "false-ish" values more specifically (e.g., 0 or '' are good values but None and False are not), there's the more verbose "conditional expression" (a.k.a. ternary operator) introduced in Python 2.5:
return x if <cond> else y
where <cond> can be any condition:
return x if isinstance(x, basestring) else y
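A short sketch of the difference: `or` falls through on any falsy value (0, '', empty list), while the conditional expression lets you test exactly the condition you mean:

```python
some_var = 0   # a legitimate value that happens to be falsy

print(some_var or "missing")                             # missing
print(some_var if some_var is not None else "missing")   # 0
```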
A:
Short-circuits work in Python too.
return a or b
A:
Actually there is a difference in Python here:
If a variable named some_var is not defined, you can't access it to find out that it does not exist -
you will get a NameError exception.
This is so by design: undefined variable names do not have an "undefined" or "null" value; they raise an exception, and that is one of the greatest differences between Python and most dynamically typed languages.
So, it is not Pythonic to reach a statement where some_var might or might not be defined. In a good program it would have been defined, even if only to contain "None" or "False".
In that case the "or" idiom would work, but the recommended way to handle it is to use the conditional expression, as in:
return some_var if some_var else "some_var contains a False Value"
If you happen to have code where some_var might not be defined, and cannot for some reason change that, you can check whether the string "some_var" exists as a key in the dictionary of local variables returned by locals(), to know whether it was defined or not:
return some_var if "some_var" in locals() else "some_var was not defined"
That is not the most Pythonic way either - having to deal with a variable that may not be defined is exceptional in Python code. So the best practice would be to use the variable's contents inside a try clause, and catch the NameError exception if the variable does not exist:
try:
    return some_var
except NameError:
    return "some_var not defined"
| Cleanest way of choosing between two values in Python | Dicts in Python have a very nice method get:
# m is some dict
m.get(k,v) # if m[k] exists, returns that, otherwise returns v
Is there some way to do this for any value? For example, in Perl I could do this:
return $some_var or "some_var doesn't exist."
| [
"The or operator in Python is guaranteed to return one of its operands, in case the left expression evaluates to False, the right one is evaluated and returned.\nEdit:\nAfter re-reading your question, I noticed that I misunderstood it the first time. By using the locals() built-in function, you can use the get() me... | [
2,
2,
1,
1
] | [] | [] | [
"coding_style",
"perl",
"python"
] | stackoverflow_0003348594_coding_style_perl_python.txt |
Q:
How to make tkinter respond to events while waiting for socket data?
I'm trying to make the app read data from a socket, but it takes some time and locks the interface, how do I make it respond to tk events while waiting?
A:
That is easy! And you don’t even need threads! But you’ll have to restructure your I/O code a bit. Tk has the equivalent of Xt’s XtAddInput() call, which allows you to register a callback function which will be called from the Tk mainloop when I/O is possible on a file descriptor. Here’s what you need:
from Tkinter import tkinter
tkinter.createfilehandler(file, mask, callback)
The file may be a Python file or socket object (actually, anything with a fileno() method), or an integer file descriptor. The mask is one of the constants tkinter.READABLE or tkinter.WRITABLE. The callback is called as follows:
callback(file, mask)
You must unregister the callback when you’re done, using
tkinter.deletefilehandler(file)
Note: since you don’t know how many bytes are available for reading, you can’t use the Python file object’s read or readline methods, since these will insist on reading a predefined number of bytes. For sockets, the recv() or recvfrom() methods will work fine; for other files, use os.read(file.fileno(), maxbytecount).
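createfilehandler wraps the same OS-level readiness notification that select() exposes; here is a headless sketch of the callback pattern using select and a socket pair (select stands in for Tk only so the snippet runs without a display):

```python
import select
import socket

a, b = socket.socketpair()     # a connected pair standing in for a real socket
b.sendall(b"hello")

def on_readable(conn):
    # Safe to call recv() here: select() reported data waiting, so no blocking.
    return conn.recv(4096)

readable, _, _ = select.select([a], [], [], 1.0)
data = on_readable(readable[0]) if readable else b""
print(data.decode())   # hello
```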
| How to make tkinter repond events while waiting socket data? | I'm trying to make the app read data from a socket, but it takes some time and locks the interface, how do I make it respond to tk events while waiting?
| [
"Thats is easy! And you don’t even need threads! But you’ll have to restructure your I/O code a bit. Tk has the equivalent of Xt’s XtAddInput() call, which allows you to register a callback function which will be called from the Tk mainloop when I/O is possible on a file descriptor. Here’s what you need:\nfrom Tkin... | [
9
] | [] | [] | [
"event_handling",
"python",
"tkinter"
] | stackoverflow_0003348757_event_handling_python_tkinter.txt |
Q:
Wrong ELF class - Python
I'm trying to install this library for LZJB compression. PyLZJB LINK
The library is a binding for a C library, the file is located here PyLZJB.so
Unfortunately, after copying it to the site-packages directory, I get the "Wrong ELF class" error on import.
>>> import PyLZJB
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: ./PyLZJB.so: wrong ELF class: ELFCLASS32
Help would be great. :)
PS: I'm running Ubuntu 10.4 64bit
Edit:
If someone could suggest me an alternative compression algorithm I would be equally happy. :)
The algorithm is for HTML compression, and it needs client side Javascript decompression/compression support too.
I really hope someone can help on this one. Thanks guys!
A:
You are running a 64 bit Python interpreter and trying to load a 32 bit extension and that is not allowed.
You need to have both your Python interpreter and your extension compiled for the same architectures. While you could get a 32 bit Python interpreter, it would probably be better to get a 64 bit extension.
What you should do is get the source for LZJB and build it yourself to get a 64 bit shared object.
A:
If someone could suggest me an alternative compression algorithm I would be equally happy.
There is always good old deflate, a much more common member of the LZ compression family. JavaScript implementation. How to handle raw deflate content with Python's zlib module.
This is a lot of overhead in relatively slow client-side code to be compressing submission data, and it's not trivial to submit the raw bytes you will obtain from it.
do they Gzip GET parameters within a request?
GET form submissions in the query string must by nature be fairly short, or you will overrun browser or server URL length limits. There is no point compressing anything so small. If you have a lot of data, it needs to go in a POST form.
Even in a POST form, the default enctype is application/x-www-form-urlencoded, which means a majority of bytes are going to get encoded as %nn sequences. This will balloon your form submission, probably beyond the original uncompressed size. To submit raw bytes you would have to use a enctype="multipart/form-data" form.
Even then, you're going to have encoding problems. JS strings are Unicode not bytes, and will get encoded using the encoding of the page containing the form. That should normally be UTF-8, but then you can't actually generate an arbitrary sequence of bytes for upload by encoding to it, since many byte sequences are not valid in UTF-8. You could have bytes-in-unicode by encoding each byte as a code unit to UTF-8, but that would bloat your compressed bytes by 50% (since half the code units, those over 0x80, would encode to two UTF-8 bytes).
In theory, if you didn't mind losing proper internationalisation support, you could serve the page as ISO-8859-1 and use the escape/encodeURIComponent idiom to convert between UTF-8 and ISO-8859-1 for output. But that won't work because browsers lie and actually use Windows code page 1252 for encoding/decoding content marked as ISO-8859-1. You could use another encoding that mapped every byte to a character, but that'd be more manual encoding overhead and would further limit characters you could use in the page.
You could avoid encoding problems by using something like base64, but then, again, you've got more manual encoding performance overhead and a 33% bloat.
In summary, all approaches are bad; I don't think you're going to get much useful out of this.
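For reference, producing the raw deflate stream discussed above takes a few lines of zlib; negative wbits suppresses the zlib header and checksum, which is the format a raw-inflate JavaScript routine would expect (the sample payload is illustrative):

```python
import base64
import zlib

payload = b"<p>some repetitive html</p>" * 50

compressor = zlib.compressobj(9, zlib.DEFLATED, -15)   # -15 -> raw deflate
raw = compressor.compress(payload) + compressor.flush()

encoded = base64.b64encode(raw)    # transport-safe, at ~33% size cost
print(len(payload), len(raw), len(encoded))

assert zlib.decompress(raw, -15) == payload   # round-trips losslessly
```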
A:
You can either run a 32-bit Python or compile your own PyLZJB rather than using the prebuilt binary. Or get a 64-bit binary PyLZJB from somewhere.
| Wrong ELF class - Python | I'm trying to install this library for LZJB compression. PyLZJB LINK
The library is a binding for a C library, the file is located here PyLZJB.so
Unfortunately by copying to the site-packages directory when import I get the "Wrong ELF class" error.
>>> import PyLZJB
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: ./PyLZJB.so: wrong ELF class: ELFCLASS32
Help would be great. :)
PS: I'm running Ubuntu 10.4 64bit
Edit:
If someone could suggest me an alternative compression algorithm I would be equally happy. :)
The algorithm is for HTML compression, and it needs client side Javascript decompression/compression support too.
I really hope someone can help on this one. Thanks guys!
| [
"You are running a 64 bit Python interpreter and trying to load a 32 bit extension and that is not allowed.\nYou need to have both your Python interpreter and your extension compiled for the same architectures. While you could get a 32 bit Python interpreter, it would probably be better to get a 64 bit extension.\... | [
7,
4,
0
] | [] | [] | [
"compression",
"javascript",
"libraries",
"lzw",
"python"
] | stackoverflow_0003348538_compression_javascript_libraries_lzw_python.txt |
Q:
Efficient way of setting Logging across a Package Module
I have a package that has several components in it that would benefit greatly from using logging and outputting useful information.
What I do not want to do is to 'set up' proper logging for every single file with something along these lines:
import logging
logging.basicConfig(level=DEBUG)
my_function = logging.getLogger("my_function")
my_class = logging.getLogger("my_class")
I have tried a couple of approaches, one of them being adding the boilerplate code into a class within a utility module and trying to do something like this:
from util import setlogging
set_logging()
But even the above solution doesn't look clean to me and would cause issues because setLogger doesn't have a __call__ method. What I did like was that my "set_logging" class would read from a config file and have some default values, so it wouldn't matter what level or what type of logging format I wanted; it would set it up correctly.
Is there a way to initialize proper logging across the board in my package? Maybe in the __init__.py file?
And just to be as verbose as possible, this is what setlogging (now a function, not a class) looks like:
def setlogging(config=None):
    if config is None:
        config = config_options()  # sets default values
    levels = {
        'debug': DEBUG,
        'info': INFO
    }
    level = levels.get(config['log_level'])
    log_format = config['log_format']
    datefmt = config['log_datefmt']
    basicConfig(
        level=level,
        format=log_format,
        datefmt=datefmt)
A:
If you want all the code in the various modules of your package to use the same logger object, you just need to (make that logger available -- see later -- and) call
mylogger.warning("Attenzione!")
or the like, rather than logging.warning &c. So, the problem reduces to making one mylogger object for the whole package and making it available throughout the modules in the package. (Alternatively, you could used named loggers with names starting with the package's name followed by a dot, but while that's very much a part of the logging package functionality, I've personally never found it a natural way to operate).
So, your util.setlogging function could simply be followed by, say,
mylogger = logging.getLogger(package_name)
and every module that imports util can simply use
util.mylogger.warning('Watch out!')
and the like. This seems to me to be the simplest approach, as long as the concept that all the code in the package should be logging in the same way applies.
A:
The proper way for a module to use logging is
import logging
logger = logging.getLogger('my_module_name')
and
logger.debug('help!')
becomes a no-op until someone calls logging.basicConfig() (or a variant).
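A minimal sketch of that pattern for a package (the package name `mypkg` is illustrative): each module asks for a named logger at import time, and the application entry point calls basicConfig exactly once:

```python
import logging

# In a module of the package, e.g. mypkg/worker.py:
logger = logging.getLogger("mypkg.worker")

# In the application entry point, run once:
logging.basicConfig(
    level=logging.DEBUG,
    format="%(name)s %(levelname)s %(message)s",
)

logger.debug("help!")   # with the config above: mypkg.worker DEBUG help!
```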
| Efficient way of setting Logging across a Package Module | I have a package that has several components in it that would benefit greatly from using logging and outputting useful information.
What I do not want to do is to 'setup' proper logging for every single file with somewhere along these lines:
import logging
logging.basicConfig(level=DEBUG)
my_function = logging.getLogger("my_function")
my_class = logging.getLogger("my_class")
I have tried a couple of approaches, one of them being adding the boilerplate code into a class within a utility module and try and do something like this:
from util import setlogging
set_logging()
But even the above solution doesn't look clean to me and would cause issues because setLogger doesn't have a __call__ method. What I did liked was that my "set_logging" class would read form a config file and have some default values so it wouldn't matter what level or what type of logging format I wanted it would set it up correctly.
Is there a way to initialize proper logging across the board in my package? Maybe in the __init__.py file?
And just to be as verbose as possible, this is what setlogging (now a function, not a class) looks like:
def setlogging(config=None):
if config == None:
config = config_options() # sets default values
levels = {
'debug': DEBUG,
'info': INFO
}
level = levels.get(config['log_level'])
log_format = config['log_format']
datefmt = config['log_datefmt']
basicConfig(
level = level,
format = log_format,
datefmt = datefmt)
| [
"If you want all the code in the various modules of your package to use the same logger object, you just need to (make that logger available -- see later -- and) call\nmylogger.warning(\"Attenzione!\")\n\nor the like, rather than logging.warning &c. So, the problem reduces to making one mylogger object for the who... | [
13,
1
] | [] | [] | [
"logging",
"module",
"package",
"python"
] | stackoverflow_0003348958_logging_module_package_python.txt |
Q:
persistent TCP connection in Django
I have a Django application which sometimes needs to send some data through TCP, and I want this connection to be persistent.
The way I wanted to do it was to create a simple Twisted TCP server (I'm the one who will be waiting for the initial connection) and somehow call it from a Django view whenever I would be needing it.
What should the communication between Twisted and Django look like in this case?
A:
Use the Twisted wsgi container to run Django. This container simply runs the WSGI application in multiple Twisted-threadpool threads, so you can simply call any Twisted API via blockingCallFromThread. There's really not that much to it!
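A stdlib sketch of the pattern blockingCallFromThread implements (Twisted itself is left out so the snippet runs standalone): the request-handling thread parks on a queue while the event-loop thread does the work and ships the result back:

```python
import queue
import threading

tasks = queue.Queue()

def event_loop():                 # stands in for the reactor thread
    while True:
        func, args, out = tasks.get()
        out.put(func(*args))      # run in the loop thread, return the result

def blocking_call(func, *args):   # what a Django view would invoke
    out = queue.Queue()
    tasks.put((func, args, out))
    return out.get()              # caller blocks until the loop answers

threading.Thread(target=event_loop, daemon=True).start()
result = blocking_call(lambda x: x * 2, 21)
print(result)   # 42
```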
| persistant TCP connection in Django | I have a Django application which sometimes needs to send some data through TCP and I want this connection to be persistant.
The way I wanted to do it was to create a simple Twisted TCP server (I'm the one who will be waiting for the initial connection) and somehow call it from a Django view whenever I would be needing it.
How should the communication look like beetwen Twisted and Django in this case?
| [
"Use the Twisted wsgi container to run Django. This container simply runs the WSGI application in multiple Twisted-threadpool threads, so you can simply call any Twisted API via blockingCallFromThread. There's really not that much to it!\n"
] | [
4
] | [] | [] | [
"django",
"python",
"tcp",
"twisted"
] | stackoverflow_0003348663_django_python_tcp_twisted.txt |
Q:
Importing Modules (SQLITE3) from Python Virtual Environment
I am using a Windows machine with python, django, and pinax installed.
I can import modules from any normal location (even if it's not in the actual installation directory). However, I cannot import these same modules when I am in a virtual environment that I built for Pinax.
What are possible causes of this? What are possible solutions?
A:
To diagnose failure to import, try using the -v switch to python:
python -v my_program.py
It will show its attempts to import your modules.
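Alongside python -v, a quick way to compare the two environments is to print which interpreter prefix is active and where a given module would actually load from (run this both inside and outside the virtualenv; sqlite3 is used as the example module):

```python
import importlib.util
import sys

print(sys.prefix)    # which Python environment is running

spec = importlib.util.find_spec("sqlite3")
print(spec.origin if spec else "sqlite3 is not importable in this environment")
```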
A:
As the summary says,
[[virtualenv]] creates an environment
that has its own installation
directories, that doesn't share
libraries with other virtualenv
environments (and optionally doesn't
use the globally installed libraries
either).
Yet you appear surprised that the virtualenv you've built "doesn't share libraries"... why are you surprised, when that not-sharing is the whole point of virtualenv?!-)
Once you've made a python virtualenv.py ENV, to keep quoting from the summary I've already pointed you to, "if you use ENV/bin/easy_install the packages will be installed into the environment".
So, do that to install all packages you need to be available for importing in the virtual environment.
(Assuming you've used the --no-site-packages option to make the virtual environment, you need to do that also for all packages you had installed "site-wide", since the purpose of that option is to exclude them for better control and isolation).
| Importing Modules (SQLITE3) from Python Virtual Environment | I am using a Windows machine with python, django, and pinax installed.
I can import modules from any normal location (even if it's not in the actuall installed directory). However, I cannot import these same modules when I am in a virtual environment that I built for Pinax.
What are possible causes of this? What are possible solutions?
| [
"To diagnose failure to import, try using the -v switch to python:\npython -v my_program.py\n\nIt will show its attempts to import your modules.\n",
"As the summary says,\n\n[[virtualenv]] creates an environment\n that has its own installation\n directories, that doesn't share\n libraries with other virtualenv... | [
2,
1
] | [] | [] | [
"django",
"pinax",
"python"
] | stackoverflow_0003349313_django_pinax_python.txt |
Q:
Python - PHP Shared MySQL server connection info?
I have some MySQL database server information that needs to be shared between a Python backend and a PHP frontend.
What is the best way to go about storing the information in a manner wherein it can be read easily by Python and PHP?
I can always brute force it with a bunch of str.replace() calls in Python and hope it works if nobody has a solution, or I can just maintain two separate files, but it would be a bunch easier if I could do this automatically.
I assume it would be easiest to store the variables in PHP format directly and do conversions in Python, and I know there exist Python modules for serializing and unserializing PHP, but I haven't been able to get it all figured out.
Any help is appreciated!
A:
Store the shared configuration in a plain text file, preferably in a standard format.
You might consider yaml, ini, or json.
I'm pretty sure both PHP and python can very trivially read and parse all three of those formats.
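A sketch of the JSON option with illustrative keys: Python parses the shared file with the stdlib json module, and PHP can read the very same bytes with json_decode($text, true):

```python
import json

# Contents of a shared db.json file (keys and values are illustrative):
config_text = '{"host": "localhost", "port": 3306, "user": "app", "db": "site"}'

config = json.loads(config_text)
print(config["host"], config["port"])   # localhost 3306
```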
| Python - PHP Shared MySQL server connection info? | I have some MySQL database server information that needs to be shared between a Python backend and a PHP frontend.
What is the best way to go about storing the information in a manner wherein it can be read easily by Python and PHP?
I can always brute force it with a bunch of str.replace() calls in Python and hope it works if nobody has a solution, or I can just maintain two separate files, but it would be a bunch easier if I could do this automatically.
I assume it would be easiest to store the variables in PHP format directly and do conversions in Python, and I know there exist Python modules for serializing and unserializing PHP, but I haven't been able to get it all figured out.
Any help is appreciated!
| [
"Store the shared configuration in a plain text file, preferably in a standard format.\nYou might consider yaml, ini, or json. \nI'm pretty sure both PHP and python can very trivially read and parse all three of those formats.\n"
] | [
4
] | [] | [] | [
"mysql",
"php",
"python",
"share",
"variables"
] | stackoverflow_0003349445_mysql_php_python_share_variables.txt |
Q:
Error when trying make cleaned_data()! Django
There must be a simple solution, but I do not see it. Please help me. It looks like 'gorod' is in the request, but when I try cleaned_data() it gives me a KeyError
KeyError at /ticket/
'gorod'
Request Method: POST
Request URL: http://localhost:8000/ticket/
Exception Type: KeyError
Exception Value:
'gorod'
Exception Location: C:\Documents and Settings\POLINOM\web\website\orders\views.py in SaveOrder2, line 93
Python Executable: C:\Python26\python.exe
Python Version: 2.6.5
Traceback:
File "C:\Python26\lib\site-packages\django\core\handlers\base.py" in get_response
92. response = callback(request, *callback_args, **callback_kwargs)
File "C:\Documents and Settings\POLINOM\web\website\orders\views.py" in ticket
136. adr_form=MakeForm2(request)
File "C:\Documents and Settings\POLINOM\web\website\orders\views.py" in MakeForm2
58. SaveOrder2(request,form)
File "C:\Documents and Settings\POLINOM\web\website\orders\views.py" in SaveOrder2
95. gorod=form.cleaned_data['gorod']
Exception Type: KeyError at /ticket/
Exception Value: 'gorod'
My form:
class MumForm(AddressForm):
    CITY_CHOICES = (
        (0,'August,11 - Washington, BlackCat'),
        (1,'August,12 - Philadelphia, WorldLiveCafe'),
        (2,'August,13 - New York, Irving Plaza'),
        (3,'August,14 - Boston, Middle East'),
        (4,'August,15 - Montreal, Cabaret Du Musee Juste Pour Rire'),
        (5,'August,16 - Toronto, Mod club'),
        (6,'August,17 - Cleveland, Beachland Tavern'),
        (7,'August,18 - ------------------------'),
        (9,'August,19 - ------------------------'),
        (10,'August,20 - Atlanta, Masquerade'),
        (11,'August,22 - Chicago, Empty Bottle'),
        (12,'August,23 - Milwaukee, Shank Hall'),
        (13,'August,24 - Minneapolis, Varsity (promoted by 1st Avenue)'),
        (14,'August,25 - Winnipeg, Cowboys'),
        (15,'August,27 - Edmonton, The Starlite Room'),
        (16,'August,28 - Calgary, The Den'),
        (17,'August,29 - Vancouver, Commodore Ballroom'),
        (18,"August,31 - Seattle, Neumo's"),
        (19,'September,01 - Portland, Howthorne Theatre'),
        (20,'September,02 - San Francisco, Bottom Of The Hill'),
        (21,'September,03 - Los Angeles, The Roxy'),
        (22,'September,04 - Salt Lake City, Urban Lounge'),
        (23,'September,05 - Denver, Bluebird Theatre'),
        (24,'September,06 - Kansas City, The Record Bar'),
        (25,'September,07 - Dallas, The Loft'),
        (26,"September,08 - Austin, Emo's"),
    )
    gorod = forms.ChoiceField(choices=CITY_CHOICES, label="Выберите город в котором вы хотите пойти на концерт")
    note = forms.CharField(max_length=1500,label=u'Заметка',widget=forms.Textarea)
This inherits from another form, which is:
class AddressForm(forms.Form):
    address = forms.CharField(label=u"Адрес",max_length=200,required=True,widget=forms.TextInput(attrs={'size':60}))
    city = forms.CharField(label=u"Город",max_length=50,required=True)
    country = forms.ChoiceField(label=u"Страна",required=True,choices=COUNTRIES)
    state = forms.ChoiceField(label=u"Штат/Провинция",required=False, choices=StateProvince().get_all_states(), error_messages={'invalid_choice':u'нет среди допустимых значений. Пожалуйста, выберите правильный вариант'})
    zip = forms.CharField(label=u"Почтовый код",max_length=10,widget=forms.TextInput(attrs={'size':8}),required=True)
    phone_number = forms.CharField(label=u"Номер телефона",required=False)
There are two models which I'm working with:
class Order(models.Model):
    STATUS_CHOICES=[
        (1, 'Send'),
        (0, 'Not_send'),
    ]
    user=models.ForeignKey(User)
    product=models.ForeignKey(Product)
    status = models.CharField(max_length=1,choices=STATUS_CHOICES)
    country = models.CharField(max_length=2)
    state = models.CharField(max_length=50)
    zip = models.CharField(max_length=10)
    city = models.CharField(max_length=50)
    street = models.CharField(max_length=200)
    phone=models.CharField(max_length=20)

    def __unicode__(self):
        return 'Order by ' + self.user

class TicketProp(models.Model):
    order=models.ForeignKey(Order)
    note=models.CharField(max_length=1500)
    gorod=models.CharField(max_length=2)
A:
Maybe your AddressForm class is missing the gorod field? The form populates the .cleaned_data attribute (no () after the latter!-) based on the fields in the form; for example, in the source for the current django.forms, you'll see on line 274
for name, field in self.fields.items():
and it's in the body of the loop that the .cleaned_data dict is populated.
So what's your AddressForm code (and that of the model it presumably works with)...?
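A stdlib mimic of the loop quoted above (Django itself omitted so this runs standalone): cleaned_data only ever gets keys that exist in the form's fields dict, so a field missing from the class surfaces later as a KeyError:

```python
fields = {"address": str, "city": str}   # note: no "gorod" field defined
posted = {"address": "Main St", "city": "NYC", "gorod": "5"}

cleaned_data = {}
for name, field in fields.items():       # mirrors the loop in forms.py
    cleaned_data[name] = field(posted[name])

print(sorted(cleaned_data))              # ['address', 'city'] -- no 'gorod'
print("gorod" in cleaned_data)           # False: cleaned_data['gorod'] raises KeyError
```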
| Error when trying make cleaned_data()! Django | Must be simple solution. But I do not see it. Please help me. Looks like 'gorod' is in request but when i trying cleaned_data() it gives me KeyError
KeyError at /ticket/
'gorod'
Request Method: POST
Request URL: http://localhost:8000/ticket/
Exception Type: KeyError
Exception Value:
'gorod'
Exception Location: C:\Documents and Settings\POLINOM\web\website\orders\views.py in SaveOrder2, line 93
Python Executable: C:\Python26\python.exe
Python Version: 2.6.5
Traceback:
File "C:\Python26\lib\site-packages\django\core\handlers\base.py" in get_response
92. response = callback(request, *callback_args, **callback_kwargs)
File "C:\Documents and Settings\POLINOM\web\website\orders\views.py" in ticket
136. adr_form=MakeForm2(request)
File "C:\Documents and Settings\POLINOM\web\website\orders\views.py" in MakeForm2
58. SaveOrder2(request,form)
File "C:\Documents and Settings\POLINOM\web\website\orders\views.py" in SaveOrder2
95. gorod=form.cleaned_data['gorod']
Exception Type: KeyError at /ticket/
Exception Value: 'gorod'
My form:
class MumForm(AddressForm):
CITY_CHOICES = (
(0,'August,11 - Washington, BlackCat'),
(1,'August,12 - Philadelphia, WorldLiveCafe'),
(2,'August,13 - New York, Irving Plaza'),
(3,'August,14 - Boston, Middle East'),
(4,'August,15 - Montreal, Cabaret Du Musee Juste Pour Rire'),
(5,'August,16 - Toronto, Mod club'),
(6,'August,17 - Cleveland, Beachland Tavern'),
(7,'August,18 - ------------------------'),
(9,'August,19 - ------------------------'),
(10,'August,20 - Atlanta, Masquerade'),
(11,'August,22 - Chicago, Empty Bottle'),
(12,'August,23 - Milwaukee, Shank Hall'),
(13,'August,24 - Minneapolis, Varsity (promoted by 1st Avenue)'),
(14,'August,25 - Winnipeg, Cowboys'),
(15,'August,27 - Edmonton, The Starlite Room'),
(16,'August,28 - Calgary, The Den'),
(17,'August,29 - Vancouver, Commodore Ballroom'),
(18,"August,31 - Seattle, Neumo's"),
(19,'September,01 - Portland, Howthorne Theatre'),
(20,'September,02 - San Francisco, Bottom Of The Hill'),
(21,'September,03 - Los Angeles, The Roxy'),
(22,'September,04 - Salt Lake City, Urban Lounge'),
(23,'September,05 - Denver, Bluebird Theatre'),
(24,'September,06 - Kansas City, The Record Bar'),
(25,'September,07 - Dallas, The Loft'),
(26,"September,08 - Austin, Emo's"),
)
gorod = forms.ChoiceField(choices=CITY_CHOICES, label="Выберите город в котором вы хотите пойти на концерт")
note = forms.CharField(max_length=1500,label=u'Заметка',widget=forms.Textarea)
This inherits from another form, which is:
class AddressForm(forms.Form):
address = forms.CharField(label=u"Адрес",max_length=200,required=True,widget=forms.TextInput(attrs={'size':60}))
city = forms.CharField(label=u"Город",max_length=50,required=True)
country = forms.ChoiceField(label=u"Страна",required=True,choices=COUNTRIES)
state = forms.ChoiceField(label=u"Штат/Провинция",required=False, choices=StateProvince().get_all_states(), error_messages={'invalid_choice':u'нет среди допустимых значений. Пожалуйста, выберите правильный вариант'})
zip = forms.CharField(label=u"Почтовый код",max_length=10,widget=forms.TextInput(attrs={'size':8}),required=True)
phone_number = forms.CharField(label=u"Номер телефона",required=False)
There are two models which I'm working with:
class Order(models.Model):
STATUS_CHOICES=[
(1, 'Send'),
(0, 'Not_send'),
]
user=models.ForeignKey(User)
product=models.ForeignKey(Product)
status = models.CharField(max_length=1,choices=STATUS_CHOICES)
country = models.CharField(max_length=2)
state = models.CharField(max_length=50)
zip = models.CharField(max_length=10)
city = models.CharField(max_length=50)
street = models.CharField(max_length=200)
phone=models.CharField(max_length=20)
def __unicode__(self):
return u'Order by %s' % self.user
class TicketProp(models.Model):
order=models.ForeignKey(Order)
note=models.CharField(max_length=1500)
gorod=models.CharField(max_length=2)
| [
"Maybe your AddressForm class is missing the gorod field? The form populates the .cleaned_data attribute (no () after the latter!-) based on the fields in the form; for example, in the source for the current django.forms, you'll see on line 274\nfor name, field in self.fields.items():\n\nand it's in the body of th... | [
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003347680_django_python.txt |
Q:
How to run GUI2Exe from command line?
I'm using GUI2Exe to CX_freeze my python app, which is working great... if I want to build it manually.
My next step is to automate this build, so I can build in one step
Is there a way to use the exported setup.py to build?
or to call GUI2Exe with some command line parameters to build the project?
Thanks!
Update: So I ran the command manually following the suggestions below: Here's the difference:
library.zip is different, size off by
11 bytes
{app}.zip is different, same size
missing {app}.manifest
Would you be comfortable that they are the same?
A:
As its homepage says, GUI2Exe is just a GUI around different python exe builders, so I guess you should just use your tool of choice directly. As for cx_Freeze, you could find the description of its setup.py options in its manual http://cx-freeze.sourceforge.net/cx_Freeze.html#distutils-setup-script.
A:
GUI2Exe is just a wrapper around various binary builders. In the case of py2exe, there's a menu item where you can actually view the setup.py file that GUI2Exe generates. There you'll see what extra things it does. And no, you cannot run it via the command line unless you mean just running the python file itself (i.e. python GUI2Exe.py). It's not a command line utility.
A:
python setup.py build
should be the only command you need. What's the difference in the results?
| How to run GUI2Exe from command line? | I'm using GUI2Exe to CX_freeze my python app, which is working great... if I want to build it manually.
My next step is to automate this build, so I can build in one step
Is there a way to use the exported setup.py to build?
or to call GUI2Exe with some command line parameters to build the project?
Thanks!
Update: So I ran the command manually following the suggestions below: Here's the difference:
library.zip is different, size off by
11 bytes
{app}.zip is different, same size
missing {app}.manifest
Would you be comfortable that they are the same?
| [
"As its homepage says, GUI2Exe is just a GUI around different python exe builders, so I guess you should just use your tool of choice directly. As for cx_Freeze, you could find the description of its setup.py options in its manual http://cx-freeze.sourceforge.net/cx_Freeze.html#distutils-setup-script.\n",
"GUI2Ex... | [
1,
1,
0
] | [] | [] | [
"command_line",
"gui2exe",
"python",
"wxpython"
] | stackoverflow_0003315820_command_line_gui2exe_python_wxpython.txt |
Q:
Does the TCPServer + BaseRequestHandler in Python's SocketServer close the socket after each call to handle()?
I'm writing a client/server application in Python and I'm finding it necessary to get a new connection to the server for each request from the client. My server is just inheriting from TCPServer and I'm inheriting from BaseRequestHandler to do my processing. I'm not calling self.request.close() anywhere in the handler, but somehow the server seems to be hanging up on my client. What's up?
A:
Okay, I read the code (on my Mac, SocketServer.py is at /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/).
Indeed, TCPServer is closing the connection. In BaseServer.handle_request, process_request is called, which calls close_request. In the TCPServer class, close_request calls self.request.close(), and self.request is just the socket used to handle the request.
So the answer to my question is "Yes".
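This behavior is easy to demonstrate end to end. The sketch below uses Python 3 naming (the module was renamed from SocketServer to socketserver); after handle() returns, the client's next recv() gets b'', meaning the server closed the connection:

```python
import socket
import socketserver
import threading

class EchoOnce(socketserver.BaseRequestHandler):
    def handle(self):
        # echo one message; when handle() returns, TCPServer calls
        # close_request(), which closes this connection
        self.request.sendall(self.request.recv(1024))

server = socketserver.TCPServer(("127.0.0.1", 0), EchoOnce)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as conn:
    conn.sendall(b"hello")
    first = conn.recv(1024)   # the echo: b'hello'
    second = conn.recv(1024)  # b'' -- the remote end closed after handle()
server.shutdown()
print(first, second)
```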
A:
TCPServer really does close the connection each time after handle() returns.
The following code doesn't work when the peer closes the connection:
def handle(self):
while 1:
try:
data = self.request.recv(1024)
print self.client_address, "sent", data
except:
pass
this may be better:
def handle(self):
while(1):
try:
self.data = self.request.recv(1024)
if not self.data:
print "%s close" % self.client_address[0]
break
print "%s wrote:" % self.client_address[0], self.data
except:
print "except"
break
Setting SO_REUSEADDR like this doesn't work, as it is applied too late (after binding):
server.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
A:
According to the docs, neither TCPServer nor BaseRequestHandler close the socket unless prompted to do so. The default implementations of both handle() and finish() do nothing.
A couple of things might be happening:
You are closing the socket itself or the request which wraps it, or calling server_close somewhere.
The socket timeout could have been hit and you have implemented a timeout handler which closes the socket.
The client could be actually closing the socket. Code would really help figuring this out.
However, my testing confirms your results. Once you return from handle on the server, whether connecting through telnet or Python's socket module, your connection shows up as being closed by the remote host. Handling socket activity in a loop inside handle seems to work:
def handle(self):
while 1:
try:
data = self.request.recv(1024)
print self.client_address, "sent", data
except:
pass
A brief Google Code Search confirms that this is a typical way of handling a request: 1 2 3 4. Honestly, there are plenty of other networking libraries available for Python that I might look to if I were facing a disconnect between the abstraction SocketServer provides and my expectations.
Code sample that I used to test:
from SocketServer import BaseRequestHandler, TCPServer
class TestRequestHandler(BaseRequestHandler):
def handle(self):
data = self.request.recv(1024)
print self.client_address, "sent", data
self.request.send(data)
class TestServer(TCPServer):
def __init__(self, server_address, handler_class=TestRequestHandler):
TCPServer.__init__(self, server_address, handler_class)
if __name__ == "__main__":
import socket
address = ('localhost', 7734)
server = TestServer(address)
server.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
A:
Are you sure the client is not hanging up on the server? This is a bit too vague to really tell what is up, but generally a server that is accepting data from a client will quit the connection if the read returns no data.
| Does the TCPServer + BaseRequestHandler in Python's SocketServer close the socket after each call to handle()? | I'm writing a client/server application in Python and I'm finding it necessary to get a new connection to the server for each request from the client. My server is just inheriting from TCPServer and I'm inheriting from BaseRequestHandler to do my processing. I'm not calling self.request.close() anywhere in the handler, but somehow the server seems to be hanging up on my client. What's up?
| [
"Okay, I read the code (on my Mac, SocketServer.py is at /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/).\nIndeed, TCPServer is closing the connection. In BaseServer.handle_request, process_request is called, which calls close_request. In the TCPServer class, close_request calls self.reque... | [
9,
7,
6,
0
] | [] | [] | [
"python",
"sockets"
] | stackoverflow_0002066810_python_sockets.txt |
Q:
Duplicate key-value pairs returned by memcached
We are using a cluster of memcached servers for caching purposes in a Django (Python) production setup, having tried both cmemcache and python-memcache as the API. The problem is that under high concurrency, we started to have duplicate key-value pairs, that is to say we are getting multiple values for a single key. Has anyone had a similar situation, and what is the fix?
Since the memcached servers themselves are not communicating with each to maintain the singularity of the key-value pair, this task is left to the client library, so we are trying to understand further how cmemcache and python-memcache works.
A:
The client is supposed to figure out which memcache server a value lives on based on its key. If you're using two different clients (or two different configurations of the same client) they might be using different algorithms to map a key to a server, thereby sending values for the same key to two different servers.
You might want to switch to pylibmc or python-libmemcached. When I reviewed python memcache clients last year, both of the clients you mentioned were either retired or broken or both.
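The key-to-server mapping can be sketched as below. The two hashing schemes here are hypothetical (not the actual algorithms used by cmemcache or python-memcache); the point is that two clients hashing differently can route the same key to different servers, producing the duplicate-key symptom:

```python
import hashlib
import zlib

# hypothetical server pool and two hypothetical routing schemes
servers = ["cache1:11211", "cache2:11211", "cache3:11211"]

def pick_crc(key):
    # route by CRC32 of the key, modulo the pool size
    return servers[zlib.crc32(key.encode()) % len(servers)]

def pick_md5(key):
    # route by the first byte of the MD5 digest instead
    return servers[hashlib.md5(key.encode()).digest()[0] % len(servers)]

# two clients that hash differently may pick different servers
# for the very same key
print(pick_crc("session:abc"), pick_md5("session:abc"))
```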
A:
Which is a bigger problem: having redundant data, or getting more than one result back for a query?
If the former is the problem, then you've got a nasty concurrency issue to solve on your hands.
If the latter is the problem, however, why not just give each host that's storing values in memcache a unique identifier, and prepend that to any key?
| Duplicate key-value pairs returned by memcached | We are using a cluster of memcached servers for caching purpose, in a Django(Python) production, having tried both cmemcache and python-memcache as the API. The problem is under high concurrency, we started to have duplicate key-value pairs, that is to say we are having multi values for a single key. Is there anyone having had the same similar situation and what is the kill?
Since the memcached servers themselves are not communicating with each to maintain the singularity of the key-value pair, this task is left to the client library, so we are trying to understand further how cmemcache and python-memcache works.
| [
"The client is supposed to figure out which memcache server a value lives on based on its key. If you're using two different clients (or two different configurations of the same client) they might be using different algorithms to map a key to a server, thereby sending values for the same key to two different serve... | [
1,
0
] | [] | [] | [
"concurrency",
"memcached",
"python"
] | stackoverflow_0003349614_concurrency_memcached_python.txt |
Q:
How can I replace a Python 2.6.5 UCS-2 build with one built using UCS-4 without losing everything in my site-packages?
I downloaded the Python 2.6.5 source, built it for OS 10.6.4 64-bit, and installed numerous dependencies. I opened a big project our team has been working on recently, ran the unit tests, and one of the tests failed because I had installed Python built using UCS-2 (I didn't know this was the default of OS X!)
In a nutshell:
I didn't supply flag --enable-unicode=ucs4 when building Python.
(as I discovered was necessary: http://copia.posterous.com/confusion-over-python-storage-form-for-unicod)
Now I want to correct this without losing everything I put in site-packages.
Is this possible? If so, how?
Thank you!
Michaux
A:
You can save and restore /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages (e.g. as a .tar.bz2), but the restored .sos will not work properly if any of their entry points deal with Python Unicode objects -- so, those packages (containing any such .so files) you'll have to rebuild/reinstall once your new Python version is working! Hopefully that's a far cry from "everything" you've put in site-packages (fingers crossed).
 | How can I replace a Python 2.6.5 UCS-2 build with one built using UCS-4 without losing everything in my site-packages? | I downloaded the Python 2.6.5 source, built it for OS 10.6.4 64-bit, and installed numerous dependencies. I opened a big project our team has been working on recently, ran the unit tests, and one of the tests failed because I had installed Python built using UCS-2 (I didn't know this was the default of OS X!)
In a nutshell:
I didn't supply flag --enable-unicode=ucs4 when building Python.
(as I discovered was necessary: http://copia.posterous.com/confusion-over-python-storage-form-for-unicod)
Now I want to correct this without losing everything I put in site-packages.
Is this possible? If so, how?
Thank you!
Michaux
| [
"You can save and restore /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages (e.g. as a .tar.bz2), but the restored .sos will not work properly if any of their entry points deal with Python Unicode objects -- so, those packages (containing any such .so files) you'll have to rebuild/reinst... | [
1
] | [] | [] | [
"macos",
"python",
"python_2.6",
"unicode"
] | stackoverflow_0003349798_macos_python_python_2.6_unicode.txt |
Q:
Python variable assignment order of operations
Is there a way to do a variable assignment inside a function call in python? Something like
curr= []
curr.append(num = num/2)
A:
Nopey. Assignment is a statement. It is not an expression as it is in C derived languages.
A:
I'm pretty certain I remember one of the reasons Python was created was to avoid these abominations, instead preferring readability over supposed cleverness :-)
What, pray tell, is wrong with the following?
curr= []
num = num/2
curr.append(num)
A:
Even if you could, side-effecting expressions are a great way to make your code unreadable, but no... Python interprets that as a keyword argument. The closest you could come to that is:
class Assigner(object):
def __init__(self, value):
self.value = value
def assign(self, newvalue):
self.value = newvalue
return self.value
# ...
num = Assigner(2)
curr = []
curr.append(num.assign(num.value / 2))
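For completeness: these answers predate Python 3.8, which added an assignment expression (the "walrus" operator, :=) that makes exactly what the question asks for possible without any helper class:

```python
# Python 3.8+ assignment *expression*: the walrus operator
num = 2
curr = []
curr.append(num := num / 2)  # assigns num and yields the assigned value
print(curr, num)  # [1.0] 1.0
```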
| Python variable assignment order of operations | Is there a way to do a variable assignment inside a function call in python? Something like
curr= []
curr.append(num = num/2)
| [
"Nopey. Assignment is a statement. It is not an expression as it is in C derived languages.\n",
"I'm pretty certain I remember one of the reasons Python was created was to avoid these abominations, instead preferring readability over supposed cleverness :-)\nWhat, pray tell, is wrong with the following?\ncurr= []... | [
8,
4,
0
] | [] | [] | [
"function",
"python",
"variable_assignment"
] | stackoverflow_0003349908_function_python_variable_assignment.txt |
Q:
Substring search in GAE using python?
I have a model which looks like this:
class Search (db.Model) :
word = db.StringProperty()
an example "word" can look like word = "thisisaword"
I want to search all entities in Search for substrings like "this" "isa" etc.
How can I do this in App Engine using Python?
Update:
The words here will be domain names. So there is some limit, I guess, on the size of the word. e.g. google.com, facebook.com
I want to display "google.com" when someone searches for "gle"
A:
With no word-separation, I don't think that the task you desire is feasible (it would be in many DB engines by implicitly using no index at all and destroying performance and scalability, but App Engine just doesn't implement "features" that inevitably destroy scalability and performance). If you had word separation, you could use the rudimental full-text search explained here, but, as that blog says,
it has no exact phrase match,
substring match, boolean operators,
stemming, or other common full-text
features.
nonrel-search is mentioned here as "alternative" and "similar" to an older, now-discontinued project by the same authors called gae-search (which was available in free and for-pay versions -- maybe the for-pay version could help you, I've never investigated it in depth -- and the last link I gave has contact info for the authors, who might be happy to develop something for you if your budget is sufficient to fund a costly project like this).
Problem is that the number of substrings of each given string grows quadratically with the string's length, so the indices needed for a reasonably fast search of the unbounded kind you want would also grow terribly big extremely fast. You could apply some optimizations if you have bounded lengths for the strings you're storing and those you're searching, but it's still a pretty hard problem to solve with any halfway tolerable efficiency.
Maybe you can explain exactly what you're trying to obtain with this hypothetical "arbitrary substring search" so that the numerical limits on the strings (and substrings being searched for), in both lengths and numbers, can be assessed. The exact problem you want to solve perhaps, if your numerical limits are not tight (as you've currently expressed your problem, there seems to be no limits whatsoever -- but hopefully that's not really the case!-), might not be practically solvable, but maybe some variant / subset of it might... but you'll need to explain the exact problem in question to allow would-be helpers to think about such subsets and variants!
Edit: given a slight clarification in the OP's edit of his Q, the heuristics I would suggest is to pick some sensible max and min "relevant substring length" (say 2 and 5, call them MINRSL and MAXRSL for definiteness). When a string (domain name) is entered, break it up by dots if appropriate (e.g. you don't want to allow searches "across" the dots), possibly discard some parts (you don't want to explicitly record all the .com, .org &c suffixes do you? anyway, this decision is app-specific), and, for each of the other parts that you do want to be searchable, do some indexing for substrings of lengths MINRSL to MAXRSL.
Specifically, with the limits 2 and 5 given above, and assuming www. and .com can be removed (much like one usually removes words like "and", "the", "of", ... in full text search: such "stop-words" are just too common and a search for them -- in return for the enormous cost of indexing them -- would return useless tons of unrelated documents), you would have to consider as indexables:
go oo og gl le
goo oog ogl gle
goog oogl ogle
googl oogle
so, you'd need to create 5 + 4 + 3 + 2 = 14 instances of a model which has the indexable as one field and, as the other field, a reference to the instance where you stored www.google.com. Like all indexing schemes, of course, this makes it onerous to "write" (create new object, or, even worse, alter the indexed parts of existing ones!-) as the price you pay for very fast "read" (searching).
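A sketch of generating those indexables (deduplicated with a set; for "google" all 14 happen to be distinct, matching the 5 + 4 + 3 + 2 count above):

```python
def indexables(word, minrsl=2, maxrsl=5):
    # all substrings of word with length between minrsl and maxrsl
    subs = set()
    for length in range(minrsl, min(maxrsl, len(word)) + 1):
        for start in range(len(word) - length + 1):
            subs.add(word[start:start + length])
    return subs

subs = indexables("google")
print(sorted(subs))
print(len(subs))  # 14
```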
Alternatively, for cheaper writing but costlier reading (searching), you could record just substrings of a certain single length, say 4 -- that would be just (ideal case oversimplified, see later):
goog oogl ogle
i.e. three instances of said auxiliary model, instead of fourteen. But now, for searching, you need to truncate the substring being searched for to four characters, get all the matches which will include some false positives, and use extra code in your application to filter the "possible hits" thus found to eliminate the false positives.
When the user searches for a shorter string, say just "oo", you can locate all the matches that start with "oo" (by using both a >= and a < in your search: >= "oo", but also < "op", the next possible length-two string). However, and this is the oversimplification in the above paragraphs, this doesn't work for shorter-substring searches that don't appear at the start of substrings of length four -- so you have to add the "trailing indexables"
gle le
(for a total of 5, rather than 14 with full indexing) to this more complicated but more balanced scheme.
Note that, in the other complete model, you still need the code to eliminate false positives when needed -- if you've set MAXRSL to 5, and the user looks for a substring of length, say, seven, you either give an error, or truncate it to five and apply the same code I was mentioning above.
Can you afford the simpler, faster-for-searching "complete indexing from MINRSL to MAXRSL" architecture? It depends on your number. If you have, in all, about 2,000 indexed URLs with a total of, say, 4,000 "words" in them to index, all (for simplicity) we'll say of length 8 characters, the MINRSL=2, MAXRSL=5 scheme requires 7+6+5+4 indexables per word, i.e., 22 per word, multiplied by 4,000 is only 88,000 entries, which would be quite affordable. But if you have many more words to index, or substantially-longer words, or need much-vaster ranges of min to max RSL, then the numbers can grow worrisome (and this is the case in which the savings of, say, a factor of three, for the more complicated slower-to-search scheme, may be considered worthwhile). You don't give us any numbers, so of course I can make no guess.
As you see, even this simple idea leads to needing pretty complicated code -- which you're unlikely to find already available as open source, since the requirement is pretty peculiar (very few people care about "arbitrary substrings of DNS names" as you appear to) -- whence my suggestion that, unless you're confident about developing and tuning all this code in house, you consider contacting expert professionals such as those mentioned above to get a quote for developing such code for you (of course, you'll have to give them the numbers you haven't given us, including how big the auxiliary indices are allowed to become, in order to allow them to make a preliminary feasibility assessment before bidding on your requirements).
| Substring search in GAE using python? | I have a model which looks like this:
class Search (db.Model) :
word = db.StringProperty()
an example "word" can look like word = "thisisaword"
I want to search all entities in Search for substrings like "this" "isa" etc.
How can I do this in App Engine using Python?
Update:
The words here will be domain names. So there is some limit, I guess, on the size of the word. e.g. google.com, facebook.com
I want to display "google.com" when someone searches for "gle"
| [
"With no word-separation, I don't think that the task you desire is feasible (it would be in many DB engines by implicitly using no index at all and destroying performance and scalability, but App Engine just doesn't implement \"features\" that inevitably destroy scalability and performance). If you had word separ... | [
6
] | [] | [] | [
"google_app_engine",
"python",
"search",
"string"
] | stackoverflow_0003349868_google_app_engine_python_search_string.txt |
Q:
Python Web Server - Getting it to do other tasks
Using the following example I can get a basic web server running but my problem is that the handle_request() blocks the do_something_else() until a request comes in. Is there any way around this to have the web server do other back ground tasks?
def run_while_true(server_class=BaseHTTPServer.HTTPServer,
handler_class=BaseHTTPServer.BaseHTTPRequestHandler):
server_address = ('', 8000)
httpd = server_class(server_address, handler_class)
while keep_running():
httpd.handle_request()
do_something_else()
A:
You can use multiple threads of execution through the Python threading module. An example is below:
import threading
# ... your code here...
def run_while_true(server_class=BaseHTTPServer.HTTPServer,
handler_class=BaseHTTPServer.BaseHTTPRequestHandler):
server_address = ('', 8000)
httpd = server_class(server_address, handler_class)
while keep_running():
httpd.handle_request()
if __name__ == '__main__':
background_thread = threading.Thread(target=do_something_else)
background_thread.start()
# ... web server start code here...
background_thread.join()
This will cause a thread which executes do_something_else() to start before your web server. When the server shuts down, the join() call ensures do_something_else finishes before the program exits.
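The arrangement can also be inverted: run the server itself in the background thread via serve_forever() and keep the main thread free for the other tasks. The sketch below uses Python 3 naming (BaseHTTPServer became http.server); port 0 asks the OS for any free port:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):
        pass  # silence per-request logging

httpd = HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# the main thread is now free to do_something_else(); here we just
# issue a request against our own server to show both sides running
url = "http://127.0.0.1:%d/" % httpd.server_address[1]
body = urllib.request.urlopen(url).read()
httpd.shutdown()
print(body)
```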
A:
You should have a thread that handles http requests, and a thread that does do_something_else().
| Python Web Server - Getting it to do other tasks | Using the following example I can get a basic web server running but my problem is that the handle_request() blocks the do_something_else() until a request comes in. Is there any way around this to have the web server do other back ground tasks?
def run_while_true(server_class=BaseHTTPServer.HTTPServer,
handler_class=BaseHTTPServer.BaseHTTPRequestHandler):
server_address = ('', 8000)
httpd = server_class(server_address, handler_class)
while keep_running():
httpd.handle_request()
do_something_else()
| [
"You can use multiple threads of execution through the Python threading module. An example is below:\nimport threading\n\n# ... your code here...\n\ndef run_while_true(server_class=BaseHTTPServer.HTTPServer,\n handler_class=BaseHTTPServer.BaseHTTPRequestHandler):\n\n server_address = ('', 8000)\n ... | [
2,
0
] | [] | [] | [
"python"
] | stackoverflow_0003346880_python.txt |
Q:
basic python server using spawn/threads
Got a problem that I'm facing, and it should be pretty simple.
I have an app that places data into a dir "A". The data will be a series of files.
I want to have a continually running server that keeps watching the dir, and on seeing a completed file in the dir, the server spawns/forks/creates a thread (not sure of the exact word/tech in Python) that then performs some work.
Basically, I'm going to be doing an include/execfile("foo") of an external file in the thread, to perform work, based on the file in the dir "A".
I want to be able to have multiple threads running at the same time. So I'm looking to run the entire process as fast as possible, and implementing threads/spawned/forked processes should allow me to have multiple workers running in parallel. There's no communication between the different work processes.
I've seen various examples using twisted, etc.. but I think I'm over thinking this..
Any simple/complete example that I can play with would be great!! (pointers to samples on the 'net would also be cool...
thanks...
A:
In Python, you should consider using the multiprocessing module instead of threads, especially if you have a multicore machine:
multiprocessing is a package that supports spawning processes using an API similar to the threading module. The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads. Due to this, the multiprocessing module allows the programmer to fully leverage multiple processors on a given machine. It runs on both Unix and Windows
Also consult the following for examples and an introduction.
multiprocessing Basics
Communication between processes with multiprocessing
 | basic python server using spawn/threads | Got a problem that I'm facing, and it should be pretty simple.
I have an app that places data into a dir "A". The data will be a series of files.
I want to have a continually running server that keeps watching the dir, and on seeing a completed file in the dir, the server spawns/forks/creates a thread (not sure of the exact word/tech in Python) that then performs some work.
Basically, I'm going to be doing an include/execfile("foo") of an external file in the thread, to perform work, based on the file in the dir "A".
I want to be able to have multiple threads running at the same time. So I'm looking to run the entire process as fast as possible, and implementing threads/spawned/forked processes should allow me to have multiple workers running in parallel. There's no communication between the different work processes.
I've seen various examples using twisted, etc.. but I think I'm over thinking this..
Any simple/complete example that I can play with would be great!! (pointers to samples on the 'net would also be cool...
thanks...
| [
"In Python, you should consider using the multiprocessing module instead of threads, especially if you have a multicore machine:\n\nmultiprocessing is a package that supports spawning processes using an API similar to the threading module. The multiprocessing package offers both local and remote concurrency, effect... | [
1
] | [] | [] | [
"fork",
"multithreading",
"python",
"spawn"
] | stackoverflow_0003350286_fork_multithreading_python_spawn.txt |
Q:
On the google app engine, why do updates not reflect in a transaction?
I store groups of entities in the google app engine Data Store with the same ancestor/parent/entityGroup. This is so that the entities can be updated in one atomic datastore transaction.
The problem is as follows:
I start a db transaction
I update entityX by setting entityX.flag = True
I save entityX
I query for entity where flag == True. BUT, here is the problem. This query does NOT return any results. It should have returned entityX, but it did not.
When I remove the transaction, my code works perfectly, so it must be the transaction that is causing this strange behavior.
Should updates to entities in the entity group not be visible elsewhere in the same transaction?
PS: I am using Python. And GAE tells me I can't use nested transactions :(
A:
App Engine's transactions are designed that way, ie reads within a transaction see a snapshot as of the beginning of the transaction, so they don't see the result of earlier writes within the transaction:
http://code.google.com/appengine/docs/python/datastore/transactions.html#Isolation_and_Consistency
A:
Looks like you are not doing a commit on the transaction before querying
start a db transaction
update entityX by setting entityX.flag = True
save entityX
COMMIT TRANSACTION
query for entity where flag == True. BUT, here is the problem. This query does NOT return any results. It should have returned entityX, but it did not.
In a transaction, entities will not be persisted until the transaction is commited
| On the google app engine, why do updates not reflect in a transaction? | I store groups of entities in the google app engine Data Store with the same ancestor/parent/entityGroup. This is so that the entities can be updated in one atomic datastore transaction.
The problem is as follows:
I start a db transaction
I update entityX by setting entityX.flag = True
I save entityX
I query for entity where flag == True. BUT, here is the problem. This query does NOT return any results. It should have returned entityX, but it did not.
When I remove the transaction, my code works perfectly, so it must be the transaction that is causing this strange behavior.
Should updates to entities in the entity group not be visible elsewhere in the same transaction?
PS: I am using Python. And GAE tells me I can't use nested transactions :(
| [
"App Engine's transactions are designed that way, ie reads within a transaction see a snapshot as of the beginning of the transaction, so they don't see the result of earlier writes within the transaction:\nhttp://code.google.com/appengine/docs/python/datastore/transactions.html#Isolation_and_Consistency\n",
"Loo... | [
4,
0
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0003350068_google_app_engine_python.txt |
Q:
Variables with Subtypes (Struct?) Python
How would one do something like this in python
Mainstring:
Sub1
Sub2
Sub3
then call upon each of those values by defining a Mainstring StringNumberOne
and
StringNumberOne.Sub1 = ""
A:
There is also the named tuple approach:
from collections import namedtuple
Mainstring = namedtuple('Mainstring', 'sub1 sub2 sub3')
example = Mainstring("a", "b", "c")
print example.sub1 # "a"
A:
I'm not sure if I understand your question. You can have a class like this:
class ManySubs(object): # explicit inheritance not needed in 3.x
def __init__(self, *subs):
self._subs = subs
# add sub1..subN fields, but only because you asked for it
        # I think dynamic fields are an especially bad idea
# plus, about everytime you have x1..xN, you actually want an array/list
for i in range(len(subs)):
setattr(self, 'sub'+str(i+1), subs[i])
# wrapping code for sequencemethods (__len__, __getitem__, etc)
def __str__(self):
return ''.join(self._subs)
A:
First you define a class MainString. In the __init__ method (the constructor), you create the instance variables (Sub1, etc):
class MainString(object):
def __init__(self):
self.Sub1 = ""
self.Sub2 = ""
self.Sub3 = ""
Then you create an instance of the class. You can change the value of instance variables for that instance:
StringNumberOne = MainString()
StringNumberOne.Sub1 = "hello"
| Variables with Subtypes (Struct?) Python | How would one do something like this in python
Mainstring:
Sub1
Sub2
Sub3
then call upon each of those values by defining a Mainstring StringNumberOne
and
StringNumberOne.Sub1 = ""
| [
"There is also the named tuple approach:\nfrom collections import namedtuple\n\nMainstring = namedtuple('Mainstring', 'sub1 sub2 sub3')\n\nexample = Mainstring(\"a\", \"b\", \"c\")\nprint example.sub1 # \"a\"\n\n",
"I'm not sure if I understand your question. You can have a class like this:\nclass ManySubs(objec... | [
4,
2,
2
] | [] | [] | [
"python",
"types"
] | stackoverflow_0003350251_python_types.txt |
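A quick check of the namedtuple approach from the first answer. Note that, unlike the class version, the fields are read-only, so "assigning" means building a modified copy with _replace:

```python
from collections import namedtuple

MainString = namedtuple("MainString", ["sub1", "sub2", "sub3"])

s = MainString(sub1="a", sub2="b", sub3="c")
s2 = s._replace(sub1="z")   # returns a new tuple; s itself is unchanged
print(s.sub1, s2.sub1)
```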
Q:
Python string match
If a string contains *SUBJECT123, how do I determine that the string has subject in it in python?
A:
if "subject" in mystring.lower():
# do something
A:
If you want to have subject match SUBJECT, you could use re
import re
if re.search('subject', your_string, re.IGNORECASE)
Or you could transform the string to lower case first and simply use:
if "subject" in your_string.lower()
A:
Just another way
mystring.find("subject")
The above answers will return True if the string contains "subject" and False otherwise, while find will return its position in the string if it exists, or a negative number otherwise.
A:
if "*SUBJECT123" in mystring and "subject" in mystring:
# do something
| Python string match | If a string contains *SUBJECT123, how do I determine that the string has subject in it in python?
| [
"if \"subject\" in mystring.lower():\n # do something\n\n",
"If you want to have subject match SUBJECT, you could use re\nimport re\nif re.search('subject', your_string, re.IGNORECASE)\n\nOr you could transform the string to lower case first and simply use:\nif \"subject\" in your_string.lower()\n\n",
"Just an... | [
34,
12,
7,
1
] | [] | [] | [
"python",
"regex",
"string"
] | stackoverflow_0003351218_python_regex_string.txt |
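The two answers above boil down to the same check; a minimal sketch of both forms:

```python
import re

def contains_subject(s):
    # lower-case the string once, then use plain membership
    return "subject" in s.lower()

def contains_subject_re(s):
    # or let the regex engine ignore case
    return re.search("subject", s, re.IGNORECASE) is not None

print(contains_subject("*SUBJECT123"), contains_subject_re("*SUBJECT123"))
```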
Q:
Iteration in python dictionary
I populate a python dictionary based on a few conditions.
My question is:
can we retrieve the dictionary in the same order as it is populated?
questions_dict={}
data = str(header_arr[opt]) + str(row)
questions_dict.update({data : xl_data})
valid_xl_format = 7
if (type.lower() == "ma" or type.lower() == "mc"):
data = str(header_arr[opt]) + str(row)
questions_dict.update({data : xl_data})
valid_xl_format = 7
After populating if i iterate it is not in the order it is populated
for k in questions_dict:
logging.debug("%s:%s" %(k,questions_dict[k]))
A:
To keep track of the order in which a dictionary is populated, you need a type different than dict (commonly known as "ordered dict"), such as those from the third-party odict module, or, if you can upgrade to Python 2.7, collections.OrderedDict.
A:
Dictionaries aren't ordered collections. You have to have some other data to keep track of the ordering.
A:
The reason dictionaries are efficient is that the keys are mapped to values called hashes, and the dictionary internally stores your values associated with the keys' hashes. So when you loop over your entries, the order is related to the order of the hashes, not the original insertion order.
| Iteration in python dictionary | I populate a python dictionary based on a few conditions.
My question is:
can we retrieve the dictionary in the same order as it is populated?
questions_dict={}
data = str(header_arr[opt]) + str(row)
questions_dict.update({data : xl_data})
valid_xl_format = 7
if (type.lower() == "ma" or type.lower() == "mc"):
data = str(header_arr[opt]) + str(row)
questions_dict.update({data : xl_data})
valid_xl_format = 7
After populating if i iterate it is not in the order it is populated
for k in questions_dict:
logging.debug("%s:%s" %(k,questions_dict[k]))
| [
"To keep track of the order in which a dictionary is populated, you need a type different than dict (commonly known as \"ordered dict\"), such as those from the third-party odict module, or, if you can upgrade to Python 2.7, collections.OrderedDict.\n",
"Dictionaries aren't ordered collections. You have to have s... | [
8,
2,
0
] | [] | [] | [
"dictionary",
"python"
] | stackoverflow_0003350091_dictionary_python.txt |
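With Python 2.7 or later, the accepted answer's collections.OrderedDict keeps the population order through iteration:

```python
from collections import OrderedDict

questions = OrderedDict()
questions["B12"] = "first populated"
questions["A3"] = "second"
questions["C7"] = "third"

order = list(questions)   # iteration follows insertion order, not key order
print(order)
```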
Q:
how to create a dynamic sql statement w/ python and mysqldb
I have the following code:
def sql_exec(self, sql_stmt, args = tuple()):
"""
Executes an SQL statement and returns a cursor.
An SQL exception might be raised on error
@return: SQL cursor object
"""
cursor = self.conn.cursor()
if self.__debug_sql:
try:
print "sql_exec: %s" % (sql_stmt % args)
except:
print "sql_exec: %s" % sql_stmt
cursor.execute(sql_stmt, args)
return cursor
def test(self, limit = 0):
result = sql_exec("""
SELECT
*
FROM
table
""" + ("LIMIT %s" if limit else ""), (limit, ))
while True:
row = result.fetchone()
if not row:
break
print row
result.close()
How can I nicely write test() so it works with or without 'limit' without having to write two queries?
A:
First, don't.
Do not build SQL "on the fly". It's a security nightmare. It will cause more problems than it appears to solve.
Second, read the MySQL page on LIMIT. They suggest using a large number.
SELECT * FROM tbl LIMIT 18446744073709551615;
Switch your default from 0 to 18446744073709551615.
If you don't like that, then use an if statement and write two versions of the SELECT. It's better in the long run to have two similar SELECT statements with no security hole.
Third, don't test this way.
Use unittest. http://docs.python.org/library/unittest.html
| how to create a dynamic sql statement w/ python and mysqldb | I have the following code:
def sql_exec(self, sql_stmt, args = tuple()):
"""
Executes an SQL statement and returns a cursor.
An SQL exception might be raised on error
@return: SQL cursor object
"""
cursor = self.conn.cursor()
if self.__debug_sql:
try:
print "sql_exec: %s" % (sql_stmt % args)
except:
print "sql_exec: %s" % sql_stmt
cursor.execute(sql_stmt, args)
return cursor
def test(self, limit = 0):
result = sql_exec("""
SELECT
*
FROM
table
""" + ("LIMIT %s" if limit else ""), (limit, ))
while True:
row = result.fetchone()
if not row:
break
print row
result.close()
How can I nicely write test() so it works with or without 'limit' without having to write two queries?
| [
"First, don't.\nDo not build SQL \"on the fly\". It's a security nightmare. It will cause more problems than it appears to solve.\nSecond, read the MySQL page on LIMIT. They suggest using a large number.\nSELECT * FROM tbl LIMIT 18446744073709551615;\n\nSwitch your default from 0 to 18446744073709551615.\nIf you... | [
0
] | [] | [] | [
"mysql",
"python"
] | stackoverflow_0003351897_mysql_python.txt |
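One way to follow the answer's advice while still supporting an optional limit is to choose the clause from a fixed set and always bind the value as a parameter. This sketch uses the stdlib sqlite3 module so it is self-contained; MySQLdb works the same way, except that its paramstyle is %s rather than ?:

```python
import sqlite3

def fetch_rows(conn, limit=None):
    # the SQL text never contains user data; only the bound value varies
    sql = "SELECT x FROM t"
    args = ()
    if limit is not None:
        sql += " LIMIT ?"   # with MySQLdb this would be "LIMIT %s"
        args = (limit,)
    return conn.execute(sql, args).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])
print(len(fetch_rows(conn)), len(fetch_rows(conn, 2)))
```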
Q:
How to select rows from an SQL model for a QListView connected to it
I am trying the following in PyQt4, using SQLAlchemy as the backend for a model for a QListView.
My first version looked like this:
class Model(QAbstractListModel):
def __init__(self, parent=None, *args):
super(Model, self).__init__(parent, *args)
def data(self, index, role):
if not index.isValid():
return None
if role == QtCore.Qt.DisplayRole:
d = sqlmodel.q.get(index.row()+1)
if d:
return d.data
return None
This had the problem that as soon as I start deleting rows, the ids are not consecutive anymore.
So my current solution looks like this:
class Model(QAbstractListModel):
def __init__(self, parent=None, *args):
super(Model, self).__init__(parent, *args)
def data(self, index, role):
if not index.isValid():
return None
if role == QtCore.Qt.DisplayRole:
dives = Dive.q.all()
if index.row() >= len(dives) or index.row() < 0:
return None
return dives[index.row()].location
But I guess this way, I might run into trouble selecting the correct entry from the database later on.
Is there some elegant way to do this?
My first idea would be to return the maximum id from the db as the row_count, and then fill non-existing rows with bogus data and hide them in the view. As the application will, at most, have to handle something around 10k, and that is already very unlikely, I think this might be feasible.
A:
Store the row IDs in a list in the model and use that as an index to retrieve the database rows. If you want to implement sorting within the model-view system just sort the list as required.
If you delete a row from the database directly, the model won't know and so it won't update the views. They will display stale data and possibly also break things when users try to edit rows that no longer exist in the underlying database, which could get really bad. You can get round this by calling reset() on the model whenever you do this to refresh all the views.
class Model(QAbstractListModel):
def __init__(self, parent=None, *args):
super(Model, self).__init__(parent, *args)
self.id_list = []
def data(self, index, role):
if not index.isValid():
return None
row_id = self.id_list[index.row()]
if role == QtCore.Qt.DisplayRole:
# query database to retrieve the row with the given row_id
| How to select rows from an SQL model for a QListView connected to it | I am trying the following in PyQt4, using SQLAlchemy as the backend for a model for a QListView.
My first version looked like this:
class Model(QAbstractListModel):
def __init__(self, parent=None, *args):
super(Model, self).__init__(parent, *args)
def data(self, index, role):
if not index.isValid():
return None
if role == QtCore.Qt.DisplayRole:
d = sqlmodel.q.get(index.row()+1)
if d:
return d.data
return None
This had the problem that as soon as I start deleting rows, the ids are not consecutive anymore.
So my current solution looks like this:
class Model(QAbstractListModel):
def __init__(self, parent=None, *args):
super(Model, self).__init__(parent, *args)
def data(self, index, role):
if not index.isValid():
return None
if role == QtCore.Qt.DisplayRole:
dives = Dive.q.all()
if index.row() >= len(dives) or index.row() < 0:
return None
return dives[index.row()].location
But I guess this way, I might run into trouble selecting the correct entry from the database later on.
Is there some elegant way to do this?
My first idea would be to return the maximum id from the db as the row_count, and then fill non-existing rows with bogus data and hide them in the view. As the application will, at most, have to handle something around 10k, and that is already very unlikely, I think this might be feasible.
| [
"Store the row IDs in a list in the model and use that as an index to retrieve the database rows. If you want to implement sorting within the model-view system just sort the list as required.\nIf you delete a row from the database directly, the model won't know and so it won't update the views. They will display sta... | [
1
] | [] | [] | [
"pyqt4",
"python",
"qt",
"sqlalchemy"
] | stackoverflow_0003190705_pyqt4_python_qt_sqlalchemy.txt |
Q:
python win32api in cygwin-1.75
When I run fabric-0.9.1 in cygwin, it gives the following error:
$ fab test.py
Traceback (most recent call last):
File "/usr/bin/fab", line 8, in <module>
load_entry_point('Fabric==0.9.1', 'console_scripts', 'fab')()
File "/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg/pkg_resources.py", line 318, in load_entry_point
File "/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg/pkg_resources.py", line 2221, in load_entry_point
File "/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg/pkg_resources.py", line 1954, in load
File "build/bdist.cygwin-1.7.5-i686/egg/fabric/main.py", line 17, in <module>
File "build/bdist.cygwin-1.7.5-i686/egg/fabric/api.py", line 9, in <module>
File "build/bdist.cygwin-1.7.5-i686/egg/fabric/context_managers.py", line 12, in <module>
File "build/bdist.cygwin-1.7.5-i686/egg/fabric/state.py", line 125, in <module>
File "build/bdist.cygwin-1.7.5-i686/egg/fabric/state.py", line 74, in _get_system_username
ImportError: No module named win32api
My environment is windows xp+cygwin1.75+Python 2.6.5+fabric-0.9.1.
Should I install the python win32 package for cygwin?
Thanks in advance.
A:
I found the answer; it is a small bug in fabric. I solved it according to this article:
http://atbrox.com/tag/fabric/
| python win32api in cygwin-1.75 | When I run fabric-0.9.1 in cygwin, it gives the following error:
$ fab test.py
Traceback (most recent call last):
File "/usr/bin/fab", line 8, in <module>
load_entry_point('Fabric==0.9.1', 'console_scripts', 'fab')()
File "/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg/pkg_resources.py", line 318, in load_entry_point
File "/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg/pkg_resources.py", line 2221, in load_entry_point
File "/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg/pkg_resources.py", line 1954, in load
File "build/bdist.cygwin-1.7.5-i686/egg/fabric/main.py", line 17, in <module>
File "build/bdist.cygwin-1.7.5-i686/egg/fabric/api.py", line 9, in <module>
File "build/bdist.cygwin-1.7.5-i686/egg/fabric/context_managers.py", line 12, in <module>
File "build/bdist.cygwin-1.7.5-i686/egg/fabric/state.py", line 125, in <module>
File "build/bdist.cygwin-1.7.5-i686/egg/fabric/state.py", line 74, in _get_system_username
ImportError: No module named win32api
My environment is windows xp+cygwin1.75+Python 2.6.5+fabric-0.9.1.
Should I install the python win32 package for cygwin?
Thanks in advance.
| [
"I found the answer; it is a small bug in fabric. I solved it according to this article:\nhttp://atbrox.com/tag/fabric/\n"
] | [
1
] | [] | [] | [
"cygwin",
"fabric",
"python",
"winapi"
] | stackoverflow_0003350853_cygwin_fabric_python_winapi.txt |
Q:
MySQL - inform program that a duplicate INSERT was attempted
Is there an easy way to return something to your code if a duplicate insert is attempted?
I want to do something like this (Obviously doesn't work because (ON DUPLICATE KEY INDEX UPDATE)-
query = "INSERT INTO quotes(symbol, date, open, high, low, close, volume, adj)"
query += "VALUES ('" + symbol + "', '" + Date + "','" + Open + "','" + High + "','" + Low + "','" + Close + "','" + Volume + "','" + Adj +"') ON DUPLICATE KEY INDEX SELECT (2)"
Maybe there is something with INSERT IGNORE INTO?
Basically I need to populate a database with millions of entries but it usually gets cut off in the middle, and this would speed it up as I could determine how far it made it last time in the code
Thanks for any help you can provide
A:
Catch the exception raised by the DB-API adapter.
| MySQL - inform program that a duplicate INSERT was attempted | Is there an easy way to return something to your code if a duplicate insert is attempted?
I want to do something like this (Obviously doesn't work because (ON DUPLICATE KEY INDEX UPDATE)-
query = "INSERT INTO quotes(symbol, date, open, high, low, close, volume, adj)"
query += "VALUES ('" + symbol + "', '" + Date + "','" + Open + "','" + High + "','" + Low + "','" + Close + "','" + Volume + "','" + Adj +"') ON DUPLICATE KEY INDEX SELECT (2)"
Maybe there is something with INSERT IGNORE INTO?
Basically I need to populate a database with millions of entries but it usually gets cut off in the middle, and this would speed it up as I could determine how far it made it last time in the code
Thanks for any help you can provide
| [
"Catch the exception raised by the DB-API adapter.\n"
] | [
0
] | [] | [] | [
"mysql",
"python",
"sql"
] | stackoverflow_0003352330_mysql_python_sql.txt |
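Catching the DB-API exception looks like this; the sketch uses the stdlib sqlite3 module so it runs standalone, but MySQLdb raises an analogous MySQLdb.IntegrityError for duplicate keys:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE quotes (symbol TEXT, date TEXT, PRIMARY KEY (symbol, date))"
)

def insert_quote(symbol, date):
    # returns False instead of blowing up when the row already exists
    try:
        conn.execute("INSERT INTO quotes VALUES (?, ?)", (symbol, date))
        return True
    except sqlite3.IntegrityError:
        return False

first = insert_quote("GOOG", "2010-07-28")
second = insert_quote("GOOG", "2010-07-28")   # duplicate primary key
print(first, second)
```

Checking the return value tells the caller how far the bulk load made it last time without aborting the whole run.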
Q:
Creating read only text files with python
Is it possible to create read-only files in Python which cannot be changed later, and whose read-only attribute users cannot change back to a normal file?
Please suggest.
Thanks in advance.
A:
This is not python specific.
If the files are made by a different user than the one viewing them, the script can make them read-only. As the file is owned by the python user, the viewing user cannot just change the attributes.
So it's very much an OS question, and not a Python question.
Oh, and there is no way to prevent an administrator changing the file, or for the file to be readable but not copyable.
A:
This is just impossible.
Any user with administrative rights can remove readonly restrictions of any kind.
Another option might be "Write a python program to kill all users over the worls so that they would not be able to change file attributes or security settings" :-)
A:
Take a look at the os.chmod() function and execute it with appropriate parameters (filename, stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH) for your just-created file.
On linux, users other than you will not be able to change the file or change its attributes to writable.
A root user, or someone logged into your account, will still be able to change it though.
| Creating read only text files with python | Is it possible to create read-only files in Python which cannot be changed later, and whose read-only attribute users cannot change back to a normal file?
Please suggest.
Thanks in advance.
| [
"This is not python specific.\nIf the files are made by a different user than the one viewing them, the script can make them read-only. As the file is owned by the python user, the viewing user cannot just change the attributes.\nSo it's very much an OS question, and not a Python question.\nOh, and there is no way to pr... | [
3,
1,
0
] | [] | [] | [
"file",
"python"
] | stackoverflow_0003351484_file_python.txt |
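A sketch of the os.chmod approach on a POSIX system (using S_IRUSR rather than the answer's S_IWUSR, since a fully read-only file should not keep the owner-write bit). As the other answers stress, the owner or root can always chmod it back:

```python
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# read-only for owner, group and others (0o444)
os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))

os.remove(path)   # removing needs directory write permission, not file write
```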
Q:
Fastest way to produce UDP packets
We're building a test harness to push binary messages out on a UDP multicast.
The prototype is using the Twisted reactor loop to push out messages, which is achieving just about the level of traffic we require - about 120000 messages per second.
We have 16 cores on our test machine, and obviously I'd like to spread this over those cores to really make the harness fly.
Does anyone have any ideas about how we might architect the application (either using an event loop approach or a CSP style approach) to up this output.
Also most of the time in the prototype is spent writing to UDP - as IO I shouldn't be surprised, but am I missing anything?
Any ideas welcome.
A:
Multiple NICs, the hardware or the kernel interface is the limit. I can only reach 69,000 packets per second with a Broadcom Corporation NetXtreme BCM5704S Gigabit Ethernet adapter. Try a quad Intel Gigabit Server Adapter with all four NICs on the same subnet.
A:
The obvious answer when the question of exploiting multiple cores in a Python application comes up is to use multiple processes. With Twisted, you can use reactor.spawnProcess to launch a child process. You could also just start 16 instances of your application some other way (like a shell script). This requires that your application can operate sensibly with multiple instances running at once, of course. Exactly how you might divide the work so that each process can take on some of it depends on the nature of the work.
I would expect a single GigE link to be saturated long before you have all 16 cores running full tilt though. Make sure you're focusing on the bottleneck in the system. As Steve-o said, you may want multiple NICs in the machine as well.
| Fastest way to produce UDP packets | We're building a test harness to push binary messages out on a UDP multicast.
The prototype is using the Twisted reactor loop to push out messages, which is achieving just about the level of traffic we require - about 120000 messages per second.
We have 16 cores on our test machine, and obviously I'd like to spread this over those cores to really make the harness fly.
Does anyone have any ideas about how we might architect the application (either using an event loop approach or a CSP style approach) to up this output.
Also most of the time in the prototype is spent writing to UDP - as IO I shouldn't be surprised, but am I missing anything?
Any ideas welcome.
| [
"Multiple NICs, the hardware or the kernel interface is the limit. I can only reach 69,000 packets per second with a Broadcom Corporation NetXtreme BCM5704S Gigabit Ethernet adapter. Try a quad Intel Gigabit Server Adapter with all four NICs on the same subnet.\n",
"The obvious answer when the question of explo... | [
1,
1
] | [] | [] | [
"python",
"stackless",
"twisted"
] | stackoverflow_0003350282_python_stackless_twisted.txt |
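For reference, the per-process send loop being scaled here is just a plain UDP socket. This minimal sketch sends and receives over unicast loopback; real multicast additionally needs a multicast group address plus TTL/membership socket options:

```python
import socket

# receiver on an ephemeral loopback port
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(5)
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    send_sock.sendto(b"msg-%d" % i, addr)   # one datagram per message

received = sorted(recv_sock.recvfrom(1500)[0] for _ in range(3))
print(received)
```

Each worker process would own its own socket like this, so there is no shared state between them.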
Q:
Getting BeautifulSoup to catch tags in a non-case-sensitive way
I want to catch some tags with BeautifulSoup: Some <p> tags, the <title> tag, some <meta> tags. But I want to catch them regardless of their case; I know that some sites do meta like this: <META> and I want to be able to catch that.
I noticed that BeautifulSoup is case-sensitive by default. How do I catch these tags in a non-case-sensitive way?
A:
BeautifulSoup standardises the parse tree on input. It converts tags to lower-case. You don't have anything to worry about IMO.
A:
You can use soup.findAll which should match case-insensitively:
import BeautifulSoup
html = '''<html>
<head>
<meta name="description" content="Free Web tutorials on HTML, CSS, XML" />
<META name="keywords" content="HTML, CSS, XML" />
<title>Test</title>
</head>
<body>
</body>
</html>'''
soup = BeautifulSoup.BeautifulSoup(html)
for x in soup.findAll('meta'):
print x
Result:
<meta name="description" content="Free Web tutorials on HTML, CSS, XML" />
<meta name="keywords" content="HTML, CSS, XML" />
| Getting BeautifulSoup to catch tags in a non-case-sensitive way | I want to catch some tags with BeautifulSoup: Some <p> tags, the <title> tag, some <meta> tags. But I want to catch them regardless of their case; I know that some sites do meta like this: <META> and I want to be able to catch that.
I noticed that BeautifulSoup is case-sensitive by default. How do I catch these tags in a non-case-sensitive way?
| [
"BeautifulSoup standardises the parse tree on input. It converts tags to lower-case. You don't have anything to worry about IMO.\n",
"You can use soup.findAll which should match case-insensitively:\nimport BeautifulSoup\n\nhtml = '''<html>\n<head>\n<meta name=\"description\" content=\"Free Web tutorials on HTML, ... | [
2,
0
] | [] | [] | [
"beautifulsoup",
"case_insensitive",
"html",
"parsing",
"python"
] | stackoverflow_0003352563_beautifulsoup_case_insensitive_html_parsing_python.txt |
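The same lower-casing behaviour can be seen with the stdlib HTML parser, shown here instead of BeautifulSoup so the snippet is dependency-free: tag names reach the handler already normalised to lower case, which is why a "meta" search matches &lt;META&gt;:

```python
try:
    from html.parser import HTMLParser   # Python 3
except ImportError:
    from HTMLParser import HTMLParser    # Python 2

class TagCollector(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)   # already lower-cased by the parser

p = TagCollector()
p.feed('<META name="keywords"><Title>Test</Title><P>x</P>')
print(p.tags)
```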
Q:
Python fork-exec problem, child process output goes to same place as parent process
Try the code below with: python fork.py and with: python fork.py 1 to see what it does.
#!/usr/bin/env python2
import os
import sys
child_exit_status = 0
if len(sys.argv) > 1:
child_exit_status = int(sys.argv[1])
pid = os.fork()
if pid == 0:
print "This is the child"
if child_exit_status == 0:
os.execl('/usr/bin/whoami', 'whoami')
else:
os._exit(child_exit_status)
else:
print "This is the parent"
(child_pid, child_status) = os.wait()
print "Our child %s exited with status %s" % (child_pid, child_status)
Question: How come the child process can do 'print' and it still gets outputted to the same place as the parent process?
(Am using Python 2.6 on Ubuntu 10.04)
A:
Under linux, the child process inherits (almost) everything from the parent, including file descriptors. In your case, file descriptor 1 (stdout) and file descriptor 2 (stderr) are open to the same file as the parent.
See the man page for fork().
If you want the output of the child to go someplace else, you can open a new file(s) in the child.
A:
Because you haven't changed the destination of file descriptor 1, standard output, for the child.
| Python fork-exec problem, child process output goes to same place as parent process | Try the code below with: python fork.py and with: python fork.py 1 to see what it does.
#!/usr/bin/env python2
import os
import sys
child_exit_status = 0
if len(sys.argv) > 1:
child_exit_status = int(sys.argv[1])
pid = os.fork()
if pid == 0:
print "This is the child"
if child_exit_status == 0:
os.execl('/usr/bin/whoami', 'whoami')
else:
os._exit(child_exit_status)
else:
print "This is the parent"
(child_pid, child_status) = os.wait()
print "Our child %s exited with status %s" % (child_pid, child_status)
Question: How come the child process can do 'print' and it still gets outputted to the same place as the parent process?
(Am using Python 2.6 on Ubuntu 10.04)
| [
"Under linux, the child process inherits (almost) everything from the parent, including file descriptors. In your case, file descriptor 1 (stdout) and file descriptor 2 (stderr) are open to the same file as the parent.\nSee the man page for fork().\nIf you want the output of the child to go someplace else, you can ... | [
3,
0
] | [] | [] | [
"exec",
"fork",
"python"
] | stackoverflow_0003352185_exec_fork_python.txt |
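To give the child its own destination, replace its file descriptor 1 before writing. A minimal POSIX-only sketch (os.fork is unavailable on Windows) that sends the child's output down a pipe to the parent:

```python
import os

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # child: point stdout (fd 1) at the pipe's write end
    os.close(r)
    os.dup2(w, 1)
    os.write(1, b"hello from child\n")
    os._exit(0)

# parent: read what the child "printed"
os.close(w)
data = os.read(r, 1024)
os.wait()
print(data)
```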
Q:
Python nose framework: How to stop execution upon first failure
It seems that if a testcase fails, nose will attempt to execute the next testcases. How can I make nose abort all execution upon the first error in any testcase? I tried sys.exit(), but it gave me some ugly and lengthy messages about it.
A:
There is an option for nose:
-x, --stop
Stop running tests after the first error or failure
Is this what you need?
Following link can help you with all the options available for nosetests.
http://nose.readthedocs.org/en/latest/usage.html
| Python nose framework: How to stop execution upon first failure | It seems that if a testcase fails, nose will attempt to execute the next testcases. How can I make nose abort all execution upon the first error in any testcase? I tried sys.exit(), but it gave me some ugly and lengthy messages about it.
| [
"There is an option for nose:\n-x, --stop\nStop running tests after the first error or failure\n\nIs this what you need?\nFollowing link can help you with all the options available for nosetests.\n http://nose.readthedocs.org/en/latest/usage.html\n"
] | [
87
] | [] | [] | [
"python",
"unit_testing"
] | stackoverflow_0003352862_python_unit_testing.txt |
Q:
Mixin class to trace attribute requests - __attribute__ recursion
I'm trying to create a class which must be a superclass of others, tracing their attribute requests. I thought of using __getattribute__, which receives all attribute requests, but it generates recursion:
class Mixin(object):
def __getattribute__ (self, attr):
print self, "getting", attr
return self.__dict__[attr]
I know why I get recursion: it's the self.__dict__ access that calls __getattribute__ recursively. I've tried to change the last line to "return object.__getattribute__(self, attr)" as suggested in other posts, but the recursion still occurs.
A:
Try this:
class Mixin(object):
def __getattribute__ (self, attr):
print self, "getting", attr
return object.__getattribute__(self, attr)
If you are still getting recursion problems, it is caused by code you haven't shown us
>>> class Mixin(object):
... def __getattribute__ (self, attr):
... print self, "getting", attr
... return object.__getattribute__(self, attr)
...
>>> Mixin().__str__
<__main__.Mixin object at 0x00B47870> getting __str__
<method-wrapper '__str__' of Mixin object at 0x00B47870>
>>> Mixin().foobar
<__main__.Mixin object at 0x00B47670> getting foobar
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in __getattribute__
AttributeError: 'Mixin' object has no attribute 'foobar'
>>>
And here is the result when combined with Bob's Mylist
>>> class Mylist(Mixin):
... def __init__ (self, lista):
... if not type (lista) == type (""):
... self.value = lista[:]
... def __add__ (self,some):
... return self.value + some
... def __getitem__ (self,item):
... return self.value[item]
... def __getslice__ (self, beg, end):
... return self.value[beg:end]
...
>>> a=Mylist([1,2])
>>> a.value
<__main__.Mylist object at 0x00B47A90> getting value
[1, 2]
A:
This is the code:
from Es123 import Mixin
class Mylist(Mixin):
def __init__ (self, lista):
if not type (lista) == type (""):
self.value = lista[:]
def __add__ (self,some):
return self.value + some
def __getitem__ (self,item):
return self.value[item]
def __getslice__ (self, beg, end):
return self.value[beg:end]
a = Mylist ([1,2])
a.value
Then python returns "RuntimeError: maximum recursion depth exceeded"
| Mixin class to trace attribute requests - __attribute__ recursion | I'm trying to create a class which must be a superclass of others, tracing their attribute requests. I thought of using __getattribute__, which receives all attribute requests, but it generates recursion:
class Mixin(object):
def __getattribute__ (self, attr):
print self, "getting", attr
return self.__dict__[attr]
I know why I get recursion: it's the self.__dict__ access that calls __getattribute__ recursively. I've tried to change the last line to "return object.__getattribute__(self, attr)" as suggested in other posts, but the recursion still occurs.
| [
"Try this:\nclass Mixin(object):\n def __getattribute__ (self, attr):\n print self, \"getting\", attr\n return object.__getattribute__(self, attr)\n\nIf you are still getting recursion problems, it is caused by code you haven't shown us\n>>> class Mixin(object):\n... def __getattribute__ (self,... | [
5,
0
] | [] | [] | [
"python",
"recursion"
] | stackoverflow_0003352325_python_recursion.txt |
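Putting the accepted answer together with the OP's Mylist: the key point is that every lookup inside __getattribute__ must go through object.__getattribute__. Here the trace is kept in a module-level list so the hook itself never touches an instance attribute:

```python
TRACE = []

class Mixin(object):
    def __getattribute__(self, attr):
        TRACE.append(attr)
        # delegate to the default implementation; no self.<anything> in here
        return object.__getattribute__(self, attr)

class Mylist(Mixin):
    def __init__(self, lista):
        self.value = lista[:]   # setattr, so it does not recurse either

a = Mylist([1, 2])
got = a.value
print(got, TRACE)
```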
Q:
fuse utimensat problem
I am developing a FUSE fs in Python (with fuse-python bindings). What method do I need to implement so that touch works correctly? At present I get the following output:
$ touch m/My\ files/d3elete1.me
touch: setting times of `m/My files/d3elete1.me': Invalid argument
File exists "d3elete1.me":
$ ls -l m/My\ files/d3elete1.me
-rw-rw-rw- 1 root root 0 Jul 28 15:28 m/My files/d3elete1.me
Also I was trying to trace system calls:
$ strace touch m/My\ files/d3elete1.me
...
open("m/My files/d3elete1.me", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK|O_LARGEFILE, 0666) = 3
dup2(3, 0) = 0
close(3) = 0
utimensat(0, NULL, NULL, 0) = -1 EINVAL (Invalid argument)
close(0) = 0
...
As you can see, utimensat failed. I have tried implementing empty utimens and utime methods, but they are not even called.
A:
Try launching fuse with the -f option. Fuse will stay in foreground and you can see errors in the console.
A:
You must implement utimens and getattr. Not all the system calls necessarily map directly to the C calls you might be expecting. Many of them are used internally by FUSE to check and navigate your filesystem, depending on which FUSE options are set.
I believe in your case FUSE is mapping utimensat onto utimens, preceding it with a getattr check to verify that the requested file is present and has the expected attributes.
Update0
This is a great coincidence. There is a comment below suggesting that the issue lies with the fact that FUSE does not support utimensat. This is not the case. I had the exact same traceback you've provided while using fuse-python on Ubuntu 10.04. I poked around a little; it would appear that the fuse-python 0.2 bindings are for FUSE 2.6, and it may be that a slight change has introduced this error (FUSE is now at version 2.8). My solution was to stop using fuse-python (the code is an ugly mess), and I found an alternate binding, fusepy. I've not looked back, and had no trouble since.
I highly recommend you take a look: your initialization code will be cleaner, and minimal changes are required to adapt to the new binding. Best of all, it's only one module, and an easy read.
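For reference, with fusepy the method to supply is utimens(self, path, times=None), and on a passthrough filesystem its core behavior is just os.utime. A stdlib-only sketch of that handler (the scratch file below exists only for the demo; the function name matches the fusepy hook, everything else is illustrative):

```python
import os
import tempfile

def utimens(path, times=None):
    """What a FUSE utimens handler typically does: `times` is an
    (atime, mtime) tuple in seconds, or None meaning 'set to now'
    (the case touch triggers via utimensat(fd, NULL, NULL, 0))."""
    os.utime(path, times)

# demo on a scratch file
fd, scratch = tempfile.mkstemp()
os.close(fd)
utimens(scratch, (1000000000, 1000000000))
st = os.stat(scratch)
os.remove(scratch)
```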
| fuse utimensat problem | I am developing a FUSE filesystem in Python (with the fuse-python bindings). Which method do I need to implement so that touch works correctly? At present I get the following output:
$ touch m/My\ files/d3elete1.me
touch: setting times of `m/My files/d3elete1.me': Invalid argument
File exists "d3elete1.me":
$ ls -l m/My\ files/d3elete1.me
-rw-rw-rw- 1 root root 0 Jul 28 15:28 m/My files/d3elete1.me
Also I was trying to trace system calls:
$ strace touch m/My\ files/d3elete1.me
...
open("m/My files/d3elete1.me", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK|O_LARGEFILE, 0666) = 3
dup2(3, 0) = 0
close(3) = 0
utimensat(0, NULL, NULL, 0) = -1 EINVAL (Invalid argument)
close(0) = 0
...
As you can see, utimensat failed. I have tried implementing empty utimens and utime methods, but they are not even called.
| [
"Try launching fuse with the -f option. Fuse will stay in foreground and you can see errors in the console.\n",
"You must implement utimens and getattr. Not all the system calls necessarily map directly to the C calls you might be expecting. Many of them are used internally by FUSE to check and navigate your file... | [
2,
1
] | [] | [] | [
"filesystems",
"fuse",
"linux",
"python"
] | stackoverflow_0003352872_filesystems_fuse_linux_python.txt |
Q:
Good framework for live charting in Python?
I am working on a Python application that involves running regression analysis on live data, and charting both. That is, the application gets fed with live data, and the regression model recalculates as the data updates. Please note that I want to plot both the input (the data) and the output (the regression analysis) in the same chart.
I have previously done some work with Matplotlib. Is that the best framework for this? It seems to be fairly static, I can't find any good examples similar to mine above. It also seems pretty bloated to me. Performance is key, so if there is any fast python charting framework out there with a small footprint, I'm all ears...
A:
I've done quite a bit of animated graphing with matplotlib - it always took me some wrangling to get it to work.
Here's a nice example though:
http://matplotlib.sourceforge.net/examples/animation/simple_anim_gtk.html
A:
I haven't worked with Matplotlib, but I've always found gnuplot to be adequate for all my charting needs.
You have the option of calling gnuplot from python or using gnuplot.py
(gnuplot-py.sourceforge.net) to interface to gnuplot.
A:
You can use Open Flash Chart, which will give you very nice output.
You don't have to have Flash (it works on Flex), and there is a Python library for writing the charts in a nice Pythonic manner:
def test_radar_charts_3():
chart = open_flash_chart()
chart.title = title(text='Radar Chart')
val1 = [30,50,60,70,80,90,100,115,130,115,100,90,80,70,60,50]
spokes = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p']
val2 = []
for i in val1:
txt = "#val#<br>Spoke: %s" % i
tmp = dot_value(value=i, colour='#D41E47', tip=txt)
val2.append(tmp)
line = line_hollow()
line.values = val2
line.halo_size = 0
line.width = 2
line.dot_size = 6
line.colour = '#FBB829'
line.text = 'Hearts'
line.font_size = 10
line.loop = True
chart.add_element(line)
r = radar_axis(max=150)
r.step = 10
r.colour = '#DAD5E0'
r.grid_colour = '#EFEFEF'
chart.radar_axis = r
tip = tooltip(proximity=1)
chart.tooltip = tip
chart.bg_colour = '#FFFFFF'
return chart
| Good framework for live charting in Python? | I am working on a Python application that involves running regression analysis on live data, and charting both. That is, the application gets fed with live data, and the regression model recalculates as the data updates. Please note that I want to plot both the input (the data) and the output (the regression analysis) in the same chart.
I have previously done some work with Matplotlib. Is that the best framework for this? It seems to be fairly static, I can't find any good examples similar to mine above. It also seems pretty bloated to me. Performance is key, so if there is any fast python charting framework out there with a small footprint, I'm all ears...
| [
"I've done quite a bit of animated graphing with matplotlib - it always took me some wrangling to get it to work.\nHere's a nice example though:\nhttp://matplotlib.sourceforge.net/examples/animation/simple_anim_gtk.html\n",
"I havent worked with Matplotlib but I've always found gnuplot to be adequate for all my c... | [
4,
1,
1
] | [] | [] | [
"charts",
"live",
"python"
] | stackoverflow_0003351963_charts_live_python.txt |
Q:
Dynamic Class Instantiation in Python
I have a bunch of classes in a module. Let's say:
'''players.py'''
class Player1:
def __init__(self, name='Homer'):
self.name = name
class Player2:
def __init__(self, name='Barney'):
self.name = name
class Player3:
def __init__(self, name='Moe'):
self.name = name
...
Now, in another module, I want to dynamically load all classes in players.py and instantiate them. I use the python inspect module for this.
'''main.py'''
import inspect
import players
ai_players = inspect.getmembers(players, inspect.isclass)
# ai_players is now a list like: [('Player1',<class instance>),(...]
ai_players = [x[1] for x in ai_players]
# ai_players is now a list like: [<class instance>, <class...]
# now let's try to create a list of initialized objects
ai_players = [x() for x in ai_players]
I would expect ai_players to be a list of object instances like so (let's suppose __str__ returns the name): ['Homer', 'Barney...]
However, I get the following error
TypeError: __init__() takes exactly 2 arguments (1 given)
The funny thing is that when I don't instantiate the class objects in a list comprehension or a for loop everything works fine.
# this does not throw an error
ai_player1 = ai_players[0]()
print ai_player1
# >> 'Homer'
So why doesn't Python allow me to instantiate the classes in a list comprehension/for loop (I tried it in a for loop, too)?
Or better: How would you go about dynamically loading all classes in a module and instantiating them in a list?
* Note, I'm using Python 2.6
EDIT
Turns out I oversimplified my problem and the above code is just fine.
However if I change players.py to
'''players.py'''
class Player():
"""This class is intended to be subclassed"""
def __init__(self, name):
self.name = name
class Player1(Player):
def __init__(self, name='Homer'):
Player.__init__(self, name)
class Player2(Player):
def __init__(self, name='Barney'):
Player.__init__(self, name)
class Player3(Player):
def __init__(self, name='Moe'):
Player.__init__(self, name)
and change the 5th line in main.py to
ai_players = inspect.getmembers(players,
lambda x: inspect.isclass(x) and issubclass(x, players.Player))
I encounter the described problem. Also that doesn't work either (it worked when I tested it on the repl)
ai_player1 = ai_players[0]()
It seems to have something to do with inheritance and default arguments. If I change the base class Player to
class Player():
"""This class is intended to be subclassed"""
def __init__(self, name='Maggie'):
self.name = name
Then I don't get the error but the players' names are always 'Maggie'.
EDIT2:
I played around a bit and it turns out the getmembers function somehow "eats" the default parameters. Have a look at this log from the repl:
>>> import players
>>> import inspect
>>> ai_players = inspect.getmembers(players,
... lambda x: inspect.isclass(x) and issubclass(x, players.Player))
>>> ai_players = [x[1] for x in ai_players]
>>> ai_players[0].__init__.func_defaults
>>> print ai_players[0].__init__.func_defaults
None
>>> players.Player1.__init__.func_defaults
('Homer',)
Do you know of any alternative to getmembers from the inspect module?
EDIT3:
Note that issubclass() returns True if given the SAME class twice. (Tryptich)
That's been the evildoer
A:
I ran your test code and it worked fine for me.
That error is indicating that you are not actually setting the default "name" parameter on ALL of your classes.
I'd double check.
Edit:
Note that issubclass() returns True if given the SAME class twice.
>>> class Foo: pass
>>> issubclass(Foo, Foo)
True
So your code tries to instantiate Player with no name argument and dies.
Seems like you're over-complicating things a bit. I'd suggest explaining your program in a bit more detail, perhaps in a different question, to get some feedback on whether this approach is even necessary (probably not).
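For reference, a self-contained sketch of the filtering fix implied by this answer — exclude the base class itself, since issubclass(Player, Player) is True. The players module is simulated inline here so the example runs on its own; the class names come from the question:

```python
import inspect
import types

# Stand-in for the asker's players module, built inline so the
# example is self-contained.
players = types.ModuleType('players')
exec("""
class Player(object):
    def __init__(self, name):
        self.name = name

class Player1(Player):
    def __init__(self, name='Homer'):
        Player.__init__(self, name)

class Player2(Player):
    def __init__(self, name='Barney'):
        Player.__init__(self, name)
""", players.__dict__)

# issubclass(Player, Player) is True, so exclude the base class itself:
classes = [cls for _, cls in inspect.getmembers(players, inspect.isclass)
           if issubclass(cls, players.Player) and cls is not players.Player]
names = sorted(cls().name for cls in classes)  # subclasses only, defaults intact
```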
A:
ai_players = inspect.getmembers(players, inspect.isclass())
This line tries to call inspect.isclass with no arguments and passes the result of that call to getmembers as the second argument. You want to pass the function inspect.isclass itself, so just do that (edit: to be clearer, you want inspect.getmembers(players, inspect.isclass)). What do you think these parentheses do?
A:
The following is a better way of doing what you want.
player.py:
class Player:
def __init__(self, name):
self.name = name
main.py:
import player
names = ['Homer', 'Barney', 'Moe']
players = [player.Player(name) for name in names]
Note that you want to define a single class to hold the details of a player, then you can create as many instances of this class as required.
| Dynamic Class Instantiation in Python | I have a bunch of classes in a module. Let's say:
'''players.py'''
class Player1:
def __init__(self, name='Homer'):
self.name = name
class Player2:
def __init__(self, name='Barney'):
self.name = name
class Player3:
def __init__(self, name='Moe'):
self.name = name
...
Now, in another module, I want to dynamically load all classes in players.py and instantiate them. I use the python inspect module for this.
'''main.py'''
import inspect
import players
ai_players = inspect.getmembers(players, inspect.isclass)
# ai_players is now a list like: [('Player1',<class instance>),(...]
ai_players = [x[1] for x in ai_players]
# ai_players is now a list like: [<class instance>, <class...]
# now let's try to create a list of initialized objects
ai_players = [x() for x in ai_players]
I would expect ai_players to be a list of object instances like so (let's suppose __str__ returns the name): ['Homer', 'Barney...]
However, I get the following error
TypeError: __init__() takes exactly 2 arguments (1 given)
The funny thing is that when I don't instantiate the class objects in a list comprehension or a for loop everything works fine.
# this does not throw an error
ai_player1 = ai_players[0]()
print ai_player1
# >> 'Homer'
So why doesn't Python allow me to instantiate the classes in a list comprehension/for loop (I tried it in a for loop, too)?
Or better: How would you go about dynamically loading all classes in a module and instantiating them in a list?
* Note, I'm using Python 2.6
EDIT
Turns out I oversimplified my problem and the above code is just fine.
However if I change players.py to
'''players.py'''
class Player():
"""This class is intended to be subclassed"""
def __init__(self, name):
self.name = name
class Player1(Player):
def __init__(self, name='Homer'):
Player.__init__(self, name)
class Player2(Player):
def __init__(self, name='Barney'):
Player.__init__(self, name)
class Player3(Player):
def __init__(self, name='Moe'):
Player.__init__(self, name)
and change the 5th line in main.py to
ai_players = inspect.getmembers(players,
lambda x: inspect.isclass(x) and issubclass(x, players.Player))
I encounter the described problem. Also that doesn't work either (it worked when I tested it on the repl)
ai_player1 = ai_players[0]()
It seems to have something to do with inheritance and default arguments. If I change the base class Player to
class Player():
"""This class is intended to be subclassed"""
def __init__(self, name='Maggie'):
self.name = name
Then I don't get the error but the players' names are always 'Maggie'.
EDIT2:
I played around a bit and it turns out the getmembers function somehow "eats" the default parameters. Have a look at this log from the repl:
>>> import players
>>> import inspect
>>> ai_players = inspect.getmembers(players,
... lambda x: inspect.isclass(x) and issubclass(x, players.Player))
>>> ai_players = [x[1] for x in ai_players]
>>> ai_players[0].__init__.func_defaults
>>> print ai_players[0].__init__.func_defaults
None
>>> players.Player1.__init__.func_defaults
('Homer',)
Do you know of any alternative to getmembers from the inspect module?
EDIT3:
Note that issubclass() returns True if given the SAME class twice. (Tryptich)
That's been the evildoer
| [
"I ran your test code and it worked fine for me.\nThat error is indicating that you are not actually setting the default \"name\" parameter on ALL of your classes.\nI'd double check.\nEdit:\nNote that issubclass() returns True if given the SAME class twice.\n>>> class Foo: pass\n>>> issubclass(Foo, Foo)\nTrue\n\nSo... | [
2,
1,
0
] | [] | [] | [
"introspection",
"python"
] | stackoverflow_0003352258_introspection_python.txt |
Q:
Accessing methods of nested widgets
I'm working on optimizing my design in terms of MVC, intent on simplifying the API of the view, which is quite nested even though I've built composite widgets (with their own events and/or pubsub messages) in an attempt to simplify things.
For example, I have a main top-level GUI class, a wx.Frame, which has a number of widgets including a notebook; the notebook contains a number of tabs, some of which are notebooks that contain composite widgets. So to call a method of one of these composite widgets from the controller I would have
self.gui.nb.sub_nb.composite_widget.method()
To create a suitable abstraction for the view I have created references to these widgets (whose methods need to be called in the controller) in the view like so
self.composite_widget = self.nb.sub_nb.composite_widget()
so that in the controller the call is now simplified to
self.gui.composite_widget.method()
Is this an acceptable way to create an abstraction layer for the gui?
A:
Well that's definitely one way to handle the issue. I tend to use pubsub to call methods the old fashioned way though. Some people like pyDispatcher better than pubsub. The main problem with using multi-dot method calling is that it's hard to debug if you have to change a method name.
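To illustrate the pubsub approach this answer mentions, here is a tiny stand-in for a pubsub library (a real application would use a library such as pypubsub or pydispatcher; the topic name and class here are made up). Widgets subscribe handlers to topic strings, so the controller publishes to a topic instead of reaching through nested attributes:

```python
# Minimal publish/subscribe registry, illustrating the pattern only.
_subscribers = {}

def subscribe(topic, handler):
    _subscribers.setdefault(topic, []).append(handler)

def publish(topic, **kwargs):
    for handler in _subscribers.get(topic, []):
        handler(**kwargs)

class CompositeWidget:
    """Deeply nested widget that registers its own handlers."""
    def __init__(self):
        self.refreshed = False
        subscribe('widget.refresh', self.refresh)

    def refresh(self):
        self.refreshed = True

widget = CompositeWidget()
# The controller publishes to a topic instead of calling
# gui.nb.sub_nb.composite_widget.method() directly:
publish('widget.refresh')
```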
| Accessing methods of nested widgets | I'm working on optimizing my design in terms of MVC, intent on simplifying the API of the view, which is quite nested even though I've built composite widgets (with their own events and/or pubsub messages) in an attempt to simplify things.
For example, I have a main top-level GUI class, a wx.Frame, which has a number of widgets including a notebook; the notebook contains a number of tabs, some of which are notebooks that contain composite widgets. So to call a method of one of these composite widgets from the controller I would have
self.gui.nb.sub_nb.composite_widget.method()
To create a suitable abstraction for the view I have created references to these widgets (whose methods need to be called in the controller) in the view like so
self.composite_widget = self.nb.sub_nb.composite_widget()
so that in the controller the call is now simplified to
self.gui.composite_widget.method()
Is this an acceptable way to create an abstraction layer for the gui?
| [
"Well that's definitely one way to handle the issue. I tend to use pubsub to call methods the old fashioned way though. Some people like pyDispatcher better than pubsub. The main problem with using multi-dot method calling is that it's hard to debug if you have to change a method name.\n"
] | [
1
] | [] | [] | [
"design_patterns",
"model_view_controller",
"oop",
"python",
"user_interface"
] | stackoverflow_0003352521_design_patterns_model_view_controller_oop_python_user_interface.txt |
Q:
pydev importerror: no module named thread, debugging no longer works after pydev upgrade
My Eclipse 3.6/PyDev setup just did a PyDev upgrade to 1.6.0.2010071813, and debugging no longer works. My default Python interpreter is 3.1, although I doubt that matters. Until the Eclipse upgrade of PyDev, it was working very nicely.
A:
This is already fixed in the current nightly (1.6.1). See: http://pydev.org/download.html for details on getting it.
Note that you can just change that "import thread" locally (in org.python.pydev.debug/pysrc/pydevd.py) for:
try:
import thread
except ImportError:
import _thread as thread #Py3K changed it.
Cheers,
Fabio
A:
Downgrade to 1.5.9. The Eclipse update manager has the option to show all versions, but by default it shows only the latest version. Turn off that setting and install 1.5.9. It works with Python 3.1.
A:
Same problem here; I am on Mac OS X 10.6. I tried to reinitialize the configured interpreters, but it did not fix the problem. I switched between the built-in Python 2.6 and the newer 2.6.5 provided by MacPorts; this also did not fix it. Looks like it needs another update?
Update: I just tried the same upgrade on Linux (this time with a backup of the Eclipse setup :-) ), and experienced the same problem. It is not a platform issue on Mac.
| pydev importerror: no module named thread, debugging no longer works after pydev upgrade | My Eclipse 3.6/PyDev setup just did a PyDev upgrade to 1.6.0.2010071813, and debugging no longer works. My default Python interpreter is 3.1, although I doubt that matters. Until the Eclipse upgrade of PyDev, it was working very nicely.
| [
"This is already fixed in the current nightly (1.6.1). See: http://pydev.org/download.html for details on getting it.\nNote that you can just change that \"import thread\" locally (in org.python.pydev.debug/pysrc/pydevd.py) for:\ntry: \n import thread \nexcept ImportError:\n import _thread as thread #Py3K ... | [
8,
1,
0
] | [] | [] | [
"eclipse",
"pydev",
"python"
] | stackoverflow_0003326740_eclipse_pydev_python.txt |
Q:
Using Python Functions From the Clips Expert System
Using PyClips, I'm trying to build rules in Clips that dynamically retrieve data from the Python interpreter. To do this, I register an external function as outlined in the manual.
The code below is a toy example of the problem. I'm doing this because I have an application with a large corpus of data, in the form of a SQL database, which I want to reason with using Clips. However, I don't want to waste time converting all this data into Clips assertions, if I can simply "plug" Clips directly into Python's namespace.
However, when I try to create the rule, I get an error. What am I doing wrong?
import clips
#user = True
#def py_getvar(k):
# return globals().get(k)
def py_getvar(k):
return True if globals.get(k) else clips.Symbol('FALSE')
clips.RegisterPythonFunction(py_getvar)
print clips.Eval("(python-call py_getvar user)") # Outputs "nil"
# If globals().get('user') is not None: assert something
clips.BuildRule("user-rule", "(neq (python-call py_getvar user) nil)", "(assert (user-present))", "the user rule")
#clips.BuildRule("user-rule", "(python-call py_getvar user)", "(assert (user-present))", "the user rule")
clips.Run()
clips.PrintFacts()
A:
I received some help on the PyClips support group. The solution is to ensure your Python function returns a clips.Symbol object and use (test ...) to evaluate functions in the LHS of rules. The use of Reset() also appears to be necessary to activate certain rules.
import clips
clips.Reset()
user = True
def py_getvar(k):
return (clips.Symbol('TRUE') if globals().get(k) else clips.Symbol('FALSE'))
clips.RegisterPythonFunction(py_getvar)
# if globals().get('user') is not None: assert something
clips.BuildRule("user-rule", "(test (eq (python-call py_getvar user) TRUE))",
'(assert (user-present))',
"the user rule")
clips.Run()
clips.PrintFacts()
A:
Your problem has something to do with the (neq (python-call py_getvar user) 'None'). Apparently clips doesn't like the nested statement. It appears that trying to wrap a function call in an equality statement does bad things. However you'll never assert the value anyway as your function returns either Nil or the value. Instead what you'll want to do is this:
def py_getvar(k):
    return clips.Symbol('TRUE') if globals().get(k) else clips.Symbol('FALSE')
then just change "(neq (python-call py_getvar user) 'None')" to "(python-call py_getvar user)"
And that should work. Haven't used pyclips before messing with it just now, but that should do what you want.
HTH!
>>> import clips
>>> def py_getvar(k):
... return clips.Symbol('TRUE') if globals().get(k) else clips.Symbol('FALSE')
...
>>> clips.RegisterPythonFunction(py_getvar)
>>> clips.BuildRule("user-rule", "(python-call py_getvar user)", "(assert (user-
present))", "the user rule")
<Rule 'user-rule': defrule object at 0x00A691D0>
>>> clips.Run()
0
>>> clips.PrintFacts()
>>>
| Using Python Functions From the Clips Expert System | Using PyClips, I'm trying to build rules in Clips that dynamically retrieve data from the Python interpreter. To do this, I register an external function as outlined in the manual.
The code below is a toy example of the problem. I'm doing this because I have an application with a large corpus of data, in the form of a SQL database, which I want to reason with using Clips. However, I don't want to waste time converting all this data into Clips assertions, if I can simply "plug" Clips directly into Python's namespace.
However, when I try to create the rule, I get an error. What am I doing wrong?
import clips
#user = True
#def py_getvar(k):
# return globals().get(k)
def py_getvar(k):
return True if globals.get(k) else clips.Symbol('FALSE')
clips.RegisterPythonFunction(py_getvar)
print clips.Eval("(python-call py_getvar user)") # Outputs "nil"
# If globals().get('user') is not None: assert something
clips.BuildRule("user-rule", "(neq (python-call py_getvar user) nil)", "(assert (user-present))", "the user rule")
#clips.BuildRule("user-rule", "(python-call py_getvar user)", "(assert (user-present))", "the user rule")
clips.Run()
clips.PrintFacts()
| [
"I received some help on the PyClips support group. The solution is to ensure your Python function returns a clips.Symbol object and use (test ...) to evaluate functions in the LHS of rules. The use of Reset() also appears to be necessary to activate certain rules.\nimport clips\nclips.Reset()\n\nuser = True\n\ndef... | [
3,
1
] | [] | [] | [
"clips",
"expert_system",
"machine_learning",
"python"
] | stackoverflow_0003247952_clips_expert_system_machine_learning_python.txt |
Q:
python: proper usage of global variable
Here's the code:
import csv
def do_work():
global data
global b
get_file()
samples_subset1()
return
def get_file():
start_file='thefile.csv'
with open(start_file, 'rb') as f:
data = list(csv.reader(f))
import collections
counter = collections.defaultdict(int)
for row in data:
counter[row[10]] += 1
return
def samples_subset1():
with open('/pythonwork/samples_subset1.csv', 'wb') as outfile:
writer = csv.writer(outfile)
sample_cutoff=5000
b_counter=0
global b
b=[]
for row in data:
if counter[row[10]] >= sample_cutoff:
global b
b.append(row)
writer.writerow(row)
#print b[b_counter]
b_counter+=1
return
I am a beginner at Python. The way my code runs is: I call do_work, and do_work calls the other functions. Here are my questions:
If I need data to be seen by only 2 functions, should I make it global? If not, then how should I call samples_subset1? Should I call it from get_file or from do_work?
The code works, but can you please point out other good/bad things about the way it is written?
I am processing a CSV file and there are multiple steps. I am breaking the steps down into different functions like get_file and samples_subset1, and there are more that I will add. Should I continue the way I am doing it right now, where I call each individual function from do_work?
Here is the new code, according to one of the answers below:
import csv
import collections
def do_work():
global b
(data,counter)=get_file('thefile.csv')
samples_subset1(data, counter,'/pythonwork/samples_subset1.csv')
return
def get_file(start_file):
with open(start_file, 'rb') as f:
global data
data = list(csv.reader(f))
counter = collections.defaultdict(int)
for row in data:
counter[row[10]] += 1
return (data,counter)
def samples_subset1(data,counter,output_file):
with open(output_file, 'wb') as outfile:
writer = csv.writer(outfile)
sample_cutoff=5000
b_counter=0
global b
b=[]
for row in data:
if counter[row[10]] >= sample_cutoff:
global b
b.append(row)
writer.writerow(row)
#print b[b_counter]
b_counter+=1
return
A:
As a rule of thumb, avoid global variables.
Here, it's easy:
let get_file return data
then you can say
data = get_file()
samples_subset1(data)
Also, I'd do all the imports on the top of the file
A:
if you must use a global (and sometimes we must) you can define it in a Pythonic way and give only certain modules access to it without the nasty global keyword at the top of all of your functions/classes.
Create a new module containing only global data (in your case let's say csvGlobals.py):
# create an instance of some data you want to share across modules
data=[]
and then each file you want to have access to this data can do so in this fashion:
import csvGlobals
csvGlobals.data = [1,2,3,4]
for i in csvGlobals.data:
print i
A:
If you want to share data between two or more functions then it is generally better to use a class and turn the functions into methods and the global variable into attributes on the class instance.
BTW, you do not need the return statement at the end of every function. You only need to explicitly return if you want to either return a value or to return in the middle of the function.
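For illustration, a modern-Python sketch of the class-based approach applied to the CSV code from the question — the former globals (data, counter, and the result) become attributes of one instance. The class name, column index, and demo data are made up for the example:

```python
import csv
import collections
import io

class SampleFilter:
    """The functions become methods; the shared state becomes attributes."""
    def __init__(self, rows, key_column=0):
        self.key_column = key_column
        self.data = list(rows)
        self.counter = collections.defaultdict(int)
        for row in self.data:
            self.counter[row[key_column]] += 1

    def subset(self, cutoff):
        # Rows whose key value occurs at least `cutoff` times.
        return [row for row in self.data
                if self.counter[row[self.key_column]] >= cutoff]

rows = csv.reader(io.StringIO("a,1\na,2\nb,3\n"))
filt = SampleFilter(rows)
common = filt.subset(2)  # only the 'a' rows occur twice
```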
| python: proper usage of global variable | Here's the code:
import csv
def do_work():
global data
global b
get_file()
samples_subset1()
return
def get_file():
start_file='thefile.csv'
with open(start_file, 'rb') as f:
data = list(csv.reader(f))
import collections
counter = collections.defaultdict(int)
for row in data:
counter[row[10]] += 1
return
def samples_subset1():
with open('/pythonwork/samples_subset1.csv', 'wb') as outfile:
writer = csv.writer(outfile)
sample_cutoff=5000
b_counter=0
global b
b=[]
for row in data:
if counter[row[10]] >= sample_cutoff:
global b
b.append(row)
writer.writerow(row)
#print b[b_counter]
b_counter+=1
return
I am a beginner at Python. The way my code runs is: I call do_work, and do_work calls the other functions. Here are my questions:
If I need data to be seen by only 2 functions, should I make it global? If not, then how should I call samples_subset1? Should I call it from get_file or from do_work?
The code works, but can you please point out other good/bad things about the way it is written?
I am processing a CSV file and there are multiple steps. I am breaking the steps down into different functions like get_file and samples_subset1, and there are more that I will add. Should I continue the way I am doing it right now, where I call each individual function from do_work?
Here is the new code, according to one of the answers below:
import csv
import collections
def do_work():
global b
(data,counter)=get_file('thefile.csv')
samples_subset1(data, counter,'/pythonwork/samples_subset1.csv')
return
def get_file(start_file):
with open(start_file, 'rb') as f:
global data
data = list(csv.reader(f))
counter = collections.defaultdict(int)
for row in data:
counter[row[10]] += 1
return (data,counter)
def samples_subset1(data,counter,output_file):
with open(output_file, 'wb') as outfile:
writer = csv.writer(outfile)
sample_cutoff=5000
b_counter=0
global b
b=[]
for row in data:
if counter[row[10]] >= sample_cutoff:
global b
b.append(row)
writer.writerow(row)
#print b[b_counter]
b_counter+=1
return
| [
"As a rule of thumb, avoid global variables.\nHere, it's easy: \nlet get_file return data\nthen you can say\ndata = get_file()\nsamples_subset1(data)\n\nAlso, I'd do all the imports on the top of the file\n",
"if you must use a global (and sometimes we must) you can define it in a Pythonic way and give only certa... | [
5,
3,
2
] | [] | [] | [
"csv",
"python"
] | stackoverflow_0003354218_csv_python.txt |
Q:
python newbie question: converting code to classes
I have this code:
import csv
import collections
def do_work():
(data,counter)=get_file('thefile.csv')
b=samples_subset1(data, counter,'/pythonwork/samples_subset3.csv',500)
return
def get_file(start_file):
with open(start_file, 'rb') as f:
data = list(csv.reader(f))
counter = collections.defaultdict(int)
for row in data:
counter[row[10]] += 1
return (data,counter)
def samples_subset1(data,counter,output_file,sample_cutoff):
with open(output_file, 'wb') as outfile:
writer = csv.writer(outfile)
b_counter=0
b=[]
for row in data:
if counter[row[10]] >= sample_cutoff:
b.append(row)
writer.writerow(row)
b_counter+=1
return (b)
I recently started learning Python, and would like to start off with good habits. Therefore, I was wondering if you can help me get started on turning this code into classes. I don't know where to start.
A:
Per my comment on the original post, I don't think a class is necessary here. Still, if other Python programmers will ever read this, I'd suggest getting it inline with PEP8, the Python style guide. Here's a quick rewrite:
import csv
import collections
def do_work():
data, counter = get_file('thefile.csv')
b = samples_subset1(data, counter, '/pythonwork/samples_subset3.csv', 500)
def get_file(start_file):
with open(start_file, 'rb') as f:
counter = collections.defaultdict(int)
data = list(csv.reader(f))
for row in data:
counter[row[10]] += 1
return (data, counter)
def samples_subset1(data, counter, output_file, sample_cutoff):
with open(output_file, 'wb') as outfile:
writer = csv.writer(outfile)
b = []
for row in data:
if counter[row[10]] >= sample_cutoff:
b.append(row)
writer.writerow(row)
return b
Notes:
- No one uses more than 4 spaces to indent, ever. Use 2 - 4. And all your levels of indentation should match.
- Use a single space after the commas between arguments to functions ("F(a, b, c)" not "F(a,b,c)").
- Naked return statements at the end of a function are meaningless. Functions without return statements implicitly return None.
- Single space around all operators (a = 1, not a=1).
- Do not wrap single values in parentheses. It looks like a tuple, but it isn't.
- b_counter wasn't used at all, so I removed it.
- csv.reader returns an iterator, which you are casting to a list. That's usually a bad idea because it forces Python to load the entire file into memory at once, whereas the iterator will just return each line as needed. Understanding iterators is absolutely essential to writing efficient Python code. I've left data in for now, but you could rewrite to use an iterator everywhere you're using data, which is a list.
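To show what consuming the reader lazily looks like, here is a small sketch of the counting pass that streams the file row by row instead of materializing list(csv.reader(f)). The demo file, its contents, and the column index are made up for the example:

```python
import csv
import collections
import os
import tempfile

def count_column(path, col):
    """Build the counter by iterating the reader directly."""
    counter = collections.defaultdict(int)
    with open(path) as f:
        for row in csv.reader(f):  # the reader yields one row at a time
            counter[row[col]] += 1
    return counter

# tiny demo file
fd, path = tempfile.mkstemp(suffix='.csv')
with os.fdopen(fd, 'w') as f:
    f.write("x,a\ny,a\nz,b\n")
counts = count_column(path, 1)
os.remove(path)
```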
A:
Well, I'm not sure what you want to turn into a class. Do you know what a class is? You want to make a class to represent some type of thing. If I understand your code correctly, you want to filter a CSV to show only those rows whose row[10] is shared by at least sample_cutoff other rows. Surely you could do that with an Excel filter much more easily than by reading through the file in Python?
What the guy in the other thread suggested is true, but not really applicable to your situation. You used a lot of global variables unnecessarily: if they'd been necessary to the code you should have put everything into a class and made them attributes, but as you didn't need them in the first place, there's no point in making a class.
Some tips on your code:
Don't cast the file to a list. That makes Python read the whole thing into memory at once, which is bad if you have a big file. Instead, simply iterate through the file itself: for row in csv.reader(f): Then, when you want to go through the file a second time, just do f.seek(0) to return to the top and start again.
Don't put return at the end of every function; that's just unnecessary. You don't need parentheses, either: return spam is fine.
Rewrite
import csv
import collections

def do_work():
    with open('thefile.csv', 'rb') as f:
        # Open the file and count the rows.
        data, counter = get_file(f)
        # Go back to the start of the file.
        f.seek(0)
        # Filter to only common rows.
        b = samples_subset1(data, counter,
                            '/pythonwork/samples_subset3.csv', 500)
    return b

def get_file(f):
    counter = collections.defaultdict(int)
    data = csv.reader(f)
    for row in data:
        counter[row[10]] += 1
    return data, counter

def samples_subset1(data, counter, output_file, sample_cutoff):
    with open(output_file, 'wb') as outfile:
        writer = csv.writer(outfile)
        b = []
        for row in data:
            if counter[row[10]] >= sample_cutoff:
                b.append(row)
                writer.writerow(row)
    return b
| python newbie question: converting code to classes | i have this code:
import csv
import collections
def do_work():
(data,counter)=get_file('thefile.csv')
b=samples_subset1(data, counter,'/pythonwork/samples_subset3.csv',500)
return
def get_file(start_file):
with open(start_file, 'rb') as f:
data = list(csv.reader(f))
counter = collections.defaultdict(int)
for row in data:
counter[row[10]] += 1
return (data,counter)
def samples_subset1(data,counter,output_file,sample_cutoff):
with open(output_file, 'wb') as outfile:
writer = csv.writer(outfile)
b_counter=0
b=[]
for row in data:
if counter[row[10]] >= sample_cutoff:
b.append(row)
writer.writerow(row)
b_counter+=1
return (b)
I recently started learning Python, and would like to start off with good habits. Therefore, I was wondering if you can help me get started turning this code into classes. I don't know where to start.
| [
"Per my comment on the original post, I don't think a class is necessary here. Still, if other Python programmers will ever read this, I'd suggest getting it inline with PEP8, the Python style guide. Here's a quick rewrite:\nimport csv\nimport collections\n\ndef do_work():\n data, counter = get_file('thefile.csv... | [
4,
2
] | [] | [] | [
"class",
"csv",
"python"
] | stackoverflow_0003354593_class_csv_python.txt |
Q:
In Jinja2 what's the easiest way to set all the keys to be the values of a dictionary?
I've got a dashboard that namespaces the context for each dashboard item. Is there a quick way I can set all the values of a dictionary to the keys in a template?
I want to reuse templates and not always namespace my variables.
My context can be simplified to look something like this:
{
"business": {"businesses": [], "new_promotions": []},
"messages": {"one_to_one": [], "announcements": []}
}
So in a with statement I want to set all the business items to be local for the included template. To do this currently I have to set each variable individually:
{% with %}
{% set businesses = business["businesses"] %}
{% set new_promotions = business["new_promotions"] %}
{% include "businesses.html" %}
{% endwith %}
I tried:
{% with %}
{% for key, value in my_dict %}
{% set key = value %}
{% endfor %}
{% include "businesses.html" %}
{% endwith %}
But they only have scope in the for loop, so they aren't available in the include...
A:
Long story short: you can't set arbitrary variables in the context. The {% set key = value %} is just setting the variable named key to the given value.
The reason is because Jinja2 compiles templates down to Python code. (If you want to see the code your template generates, download the script at http://ryshcate.leafstorm.us/paste/71c95831ca0f1d5 and pass it your template's filename.) In order to make processing faster, it creates local variables for every variable you use in the template (only looking up the variable in the context the first time it's encountered), as opposed to Django, which uses the context for all variable lookups.
In order to generate this code properly, it needs to be able to track which local or global variables exist at any given time, so it knows when to look up in the context. And setting random variables would prevent this from working, which is why contextfunctions are not allowed to modify the context, just view it.
What I would do, though, instead of having your business-displaying code be an included template, is have it be a macro in another template. For example, in businesses.html:
{% macro show_businesses(businesses, new_promotions) %}
{# whatever you're displaying... #}
{% endmacro %}
And then in your main template:
{% from "businesses.html" import show_businesses %}
{{ show_businesses(**businesses) }}
Or, better yet, separate them into two separate macros - one for businesses, and one for new promotions. You can see a lot of cool template tricks at http://bitbucket.org/plurk/solace/src/tip/solace/templates/, and of course check the Jinja2 documentation at http://jinja.pocoo.org/2/documentation/templates.
A:
I've found a workaround: by creating a context function I can render the template and directly set the context or merge the contexts (not sure that's good practice though).
@jinja2.contextfunction
def render(context, template_name, extra_context, merge=False):
    template = jinja_env.get_template(template_name)
    # Merge or standalone context?
    if merge:
        ctx = dict(context.items())
        ctx.update(extra_context)
    else:
        ctx = extra_context
    return jinja2.Markup(template.render(ctx))
So my templates look like:
{{ render("businesses.html", business) }}
| In Jinja2 what's the easiest way to set all the keys to be the values of a dictionary? | I've got a dashboard that namespaces the context for each dashboard item. Is there a quick way I can set all the values of a dictionary to the keys in a template?
I want to reuse templates and not always namespace my variables.
My context can be simplified to look something like this:
{
"business": {"businesses": [], "new_promotions": []},
"messages": {"one_to_one": [], "announcements": []}
}
So in a with statement I want to set all the business items to be local for the included template. To do this currently I have to set each variable individually:
{% with %}
{% set businesses = business["businesses"] %}
{% set new_promotions = business["new_promotions"] %}
{% include "businesses.html" %}
{% endwith %}
I tried:
{% with %}
{% for key, value in my_dict %}
{% set key = value %}
{% endfor %}
{% include "businesses.html" %}
{% endwith %}
But they only have scope in the for loop, so they aren't available in the include...
| [
"Long story short: you can't set arbitrary variables in the context. The {% set key = value %} is just setting the variable named key to the given value.\nThe reason is because Jinja2 compiles templates down to Python code. (If you want to see the code your template generates, download the script at http://ryshcate... | [
4,
1
] | [] | [] | [
"jinja2",
"python",
"templates"
] | stackoverflow_0003352724_jinja2_python_templates.txt |
Q:
html form submission
I'm looking at the html form of an external website (not maintained by myself) in the following format :
<form onsubmit="return check(this)" method=post action=address.aspx target=ttPOST>
....
</form>
I wish to post data to this external website without having to go through the form and pressing submit.
Can I simply go to the url 'address.aspx' and append it with the required input parameters ?
My goal is to automate the periodic posting of information, chosen from a list of frequently changing values on the website. (Using Python and AutoIt)
A:
I should note that I'm unclear whether you were wanting to automate the posting of data from outside a web browser or not. Others have answered doing it with script and suchlike from the web page, so I thought I'd cover how it works when you are doing it from a standalone program.
For most languages you can get things that will help you simulate web requests. Most of these would probably allow you to make a POST request and supply POST data. I don't know Python and AutoIt, but in the general sense you'd just need to get the name/value pairs by looking at the HTML of the form (or, probably better, by examining a request being made to the server) and then make a POST request.
As others have said, if you want to just append the values to the URL then the server will need to be happy to accept a GET request instead of a POST. Sometimes you will find that servers do this happily (they don't care how form submission is done, as long as they get data); other times the server will be specifically looking for a POST and will ignore parameters passed as part of the query string.
A:
You can use JQuery.post()
<form action="#" class="myForm" method="post">
    <input type="text" id="field1" name="field1" value="" runat="server" />
</form>
// Submit
<div onclick="submit();return false;">Submit</div>
Then the submit() function looks like
function submit() {
    $.post("address.aspx", $("form.myForm").serialize(), function(data, textStatus) {
        // Handle success or error
    });
}
In codebehind, you can access the post variables
Request.Form["field1"]
A:
Change the method attribute from POST to GET.
Read about the differences here.
You can get away with changing the URL to go to the external site using the same syntax of GET parameter (?param1=val1¶m2=val2...), but to achieve this you will need to write this logic yourself in javascript.
A:
Why do you want to get around the form submit? Submit the form and use the values in HTTP_POST. You can use HTTP_GET by changing the method to GET in the above HTML, but the form will still submit.
One way to submit the params to address.aspx would be to use JavaScript: build a string of all the params and submit it to address.aspx when the user clicks on the submit button. You'll need to disable the submit button's default behaviour.
A:
You could make AJAX GET requests (appending the data to the URL) and execute them with a certain time interval.
I would recommend jQuery, but it may not be ideal in a .NET environment.
jQuery.ajax() - http://api.jquery.com/jQuery.ajax/
jQuery.get() - http://api.jquery.com/jQuery.get/
NOTE:
I wish to post data to this external website without having to go through the form and pressing submit.
I took this as meaning you didn't want to actually submit the form; or did you want to submit the form, just not by pressing the submit button? If you actually want the form to submit, then you should indeed just change the form's method attribute to get instead of post.
If, however, you do wish to stay on the form page and need to post the data to some other resource with the data in the URL string (there are reasons for and against doing this that you should look into -- scroll down to 9.3 GET), then just make an AJAX GET request; otherwise just POST the data using AJAX. Either way, you'll have to make some sort of asynchronous call.
A:
POST is not the same as GET -- if you append the data that you want to the URL, the page will not see your variables. That is to say GETTING (a.k.a going to) http://www.the-other-site.com/address.aspx?variable1=a&variable2=b will not work.
You need to build POST request and submit it to address.aspx.
If you want to do it yourself then you'll need Python's urllib2 module (More specifically, the Request object). If you want to use a pre-built solution, you'll need something like mechanize to POST your variables.
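As a minimal sketch of that do-it-yourself route: this builds (but does not send) such a POST request with the standard library. Note this uses Python 3, where urllib2 became urllib.request, and the field names field1/field2 are placeholders — inspect the real form's input names before using this.

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen  # this was urllib2 in Python 2

# Hypothetical form fields -- inspect the real form's <input name=...> values.
fields = {"field1": "some value", "field2": "another"}
data = urlencode(fields).encode("ascii")

# Supplying a data payload makes urllib issue a POST instead of a GET.
req = Request("http://example.com/address.aspx", data=data)
print(req.get_method())  # POST

# To actually send it (commented out -- needs network access):
# response = urlopen(req)
# body = response.read()
```

Running this periodically from a scheduler (cron, Task Scheduler, etc.) would automate the posting without a browser.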
| html form submission | I'm looking at the html form of an external website (not maintained by myself) in the following format :
<form onsubmit="return check(this)" method=post action=address.aspx target=ttPOST>
....
</form>
I wish to post data to this external website without having to go through the form and pressing submit.
Can I simply go to the url 'address.aspx' and append it with the required input parameters ?
My goal is to automate the periodic posting of information, chosen from a list of frequently changing values on the website. (Using Python and AutoIt)
| [
"I should note that I'm unclear if you were wanting to automate the posting of data from outside a web browser or not. Others have answered doing it with script and such like from the web page so I thought I'd cover how it works when you are doing it from a standalone program.\nFor most languages you can get things... | [
1,
1,
0,
0,
0,
0
] | [] | [] | [
"automation",
"html",
"mechanize",
"python"
] | stackoverflow_0003354620_automation_html_mechanize_python.txt |
Q:
Can I use Google App Engine for processing data?
I would like to execute some long running JRuby scripts [ nothing to do with web requests and url fetch ] on Google App Engine. There is a 30 seconds limit on URL Fetch requests. Does the same apply for plain JRuby/Python scripts? If yes, is there a workaround?
A:
The 30-second limit applies to everything that happens on App Engine. It's really not an ideal platform for hosting long-running processes. There are some techniques that you can use to simulate what you want. Task Queues can be set up to perform work in the background, for example.
Still, you might want to look into one of the vast variety of hosting alternatives that will let you simply launch a process and let it keep running.
| Can I use Google App Engine for processing data? | I would like to execute some long running JRuby scripts [ nothing to do with web requests and url fetch ] on Google App Engine. There is a 30 seconds limit on URL Fetch requests. Does the same apply for plain JRuby/Python scripts? If yes, is there a workaround?
| [
"The 30-second applies to everything that happens on AppEngine. It's really not an ideal platform for hosting long-running processes. There are some techniques that you can use to simulate what you want. Task Queues can be set up to perform work in the background, for example.\nStill, you might want to look into on... | [
4
] | [] | [] | [
"google_app_engine",
"jruby",
"python"
] | stackoverflow_0003355197_google_app_engine_jruby_python.txt |
Q:
issue running a program (R) in Python to perform an operation (execute a script)
I'm trying to execute an R script from Python, ideally displaying and saving the results. Using rpy2 has been a bit of a struggle, so I thought I'd just call R directly. I have a feeling that I'll need to use something like "os.system" or "subprocess.call," but I am having difficulty deciphering the module guides.
Here's the R script "MantelScript", which uses a particular stat test to compare two distance matrices at a time (distmatA1 and distmatB1). This works in R, though I haven't yet put in the iterating bits in order to read through and compare a bunch of files in a pairwise fashion (I really need some assistance with this, too btw!):
library(ade4)
M1<-read.table("C:\\pythonscripts\\distmatA1.csv", header = FALSE, sep = ",")
M2<-read.table("C:\\pythonscripts\\distmatB1.csv", header = FALSE, sep = ",")
mantel.rtest(dist(matrix(M1, 14, 14)), dist(matrix(M2, 14, 14)), nrepet = 999)
Here's the relevant bit of my python script, which reads through some previously formulated lists and pulls out matrices in order to compare them via this Mantel Test (it should pull the first matrix from identityA and sequentially compare it to every matrix in identityB, then repeat with the second matrix from identityB etc). I want to save these files and then call on the R program to compare them:
# windownA and windownB are lists containing ascending sequences of integers
# identityA and identityB are lists where each field is a distance matrix.
z = 0
v = 0
import subprocess
import os
for i in windownA:
    M1 = identityA[i]
    z += 1
    filename = "C:/pythonscripts/distmatA"+str(z)+".csv"
    file = csv.writer(open(filename, 'w'))
    file.writerow(M1)
    for j in windownB:
        M2 = identityB[j]
        v += 1
        filename2 = "C:/pythonscripts/distmatB"+str(v)+".csv"
        file = csv.writer(open(filename2, 'w'))
        file.writerow(M2)
        ## result = os.system('R CMD BATCH C:/R/library/MantelScript.R') - maybe something like this??
        ## result = subprocess.call(['C:/R/library/MantelScript.txt']) - or maybe this??
        print result
        print ' '
A:
If your R script only has side effects, that's fine, but if you want to process the results further with Python, you'll still be better off using rpy2.
import rpy2.robjects
f = file("C:/R/library/MantelScript.R")
code = ''.join(f.readlines())
result = rpy2.robjects.r(code)
# assume that MantelScript creates a variable "X" in the R GlobalEnv workspace
X = rpy2.robjects.globalenv['X']
A:
Stick with this.
process = subprocess.Popen(['R', 'CMD', 'BATCH', 'C:/R/library/MantelScript.R'])
process.wait()
When the wait() function returns a value, the .R file is finished.
Note that you should write your .R script to produce a file that your Python program can read.
with open('the_output_from_mantelscript', 'r') as result:
    for line in result:
        print(line)
Don't waste a lot of time trying to hook up a pipeline.
Invest time in getting a basic "Python spawns R" process working.
You can add to this later.
A:
In case you're interested in generally invoking an R subprocess from Python.
#!/usr/bin/env python3
from io import StringIO
from subprocess import PIPE, Popen
def rnorm(n):
    rscript = Popen(["Rscript", "-"], stdin=PIPE, stdout=PIPE, stderr=PIPE)
    with StringIO() as s:
        s.write("x <- rnorm({})\n".format(n))
        s.write("cat(x, \"\\n\")\n")
        return rscript.communicate(s.getvalue().encode())

if __name__ == '__main__':
    output, errmsg = rnorm(5)
    print("stdout:")
    print(output.decode('utf-8').strip())
    print("stderr:")
    print(errmsg.decode('utf-8').strip())
Better to do it through Rscript.
A:
Given what you're trying to do, a pure R solution might be neater:
file.pairs <- combn(dir(pattern="*.csv"), 2) # get every pair of csv files in the current dir
The pairs are columns in a 2xN matrix:
file.pairs[,1]
[1] "distmatrix1.csv" "distmatrix2.csv"
You can run a function on these columns by using apply (with option '2', meaning 'act over columns'):
my.func <- function(v) paste(v[1], v[2], sep="::")
apply(file.pairs, 2, my.func)
In this example my.func just glues the two file names together; you could replace this with a function that does the Mantel Test, something like (untested):
my.func <- function(v){
    M1 <- read.table(v[1], header = FALSE, sep = ",")
    M2 <- read.table(v[2], header = FALSE, sep = ",")
    mantel.rtest(dist(matrix(M1, 14, 14)), dist(matrix(M2, 14, 14)), nrepet = 999)
}
| issue running a program (R) in Python to perform an operation (execute a script) | I'm trying to execute an R script from Python, ideally displaying and saving the results. Using rpy2 has been a bit of a struggle, so I thought I'd just call R directly. I have a feeling that I'll need to use something like "os.system" or "subprocess.call," but I am having difficulty deciphering the module guides.
Here's the R script "MantelScript", which uses a particular stat test to compare two distance matrices at a time (distmatA1 and distmatB1). This works in R, though I haven't yet put in the iterating bits in order to read through and compare a bunch of files in a pairwise fashion (I really need some assistance with this, too btw!):
library(ade4)
M1<-read.table("C:\\pythonscripts\\distmatA1.csv", header = FALSE, sep = ",")
M2<-read.table("C:\\pythonscripts\\distmatB1.csv", header = FALSE, sep = ",")
mantel.rtest(dist(matrix(M1, 14, 14)), dist(matrix(M2, 14, 14)), nrepet = 999)
Here's the relevant bit of my python script, which reads through some previously formulated lists and pulls out matrices in order to compare them via this Mantel Test (it should pull the first matrix from identityA and sequentially compare it to every matrix in identityB, then repeat with the second matrix from identityB etc). I want to save these files and then call on the R program to compare them:
# windownA and windownB are lists containing ascending sequences of integers
# identityA and identityB are lists where each field is a distance matrix.
z = 0
v = 0
import subprocess
import os
for i in windownA:
    M1 = identityA[i]
    z += 1
    filename = "C:/pythonscripts/distmatA"+str(z)+".csv"
    file = csv.writer(open(filename, 'w'))
    file.writerow(M1)
    for j in windownB:
        M2 = identityB[j]
        v += 1
        filename2 = "C:/pythonscripts/distmatB"+str(v)+".csv"
        file = csv.writer(open(filename2, 'w'))
        file.writerow(M2)
        ## result = os.system('R CMD BATCH C:/R/library/MantelScript.R') - maybe something like this??
        ## result = subprocess.call(['C:/R/library/MantelScript.txt']) - or maybe this??
        print result
        print ' '
| [
"If your R script only has side effects that's fine, but if you want to process further the results with Python, you'll still be better of using rpy2.\nimport rpy2.robjects\nf = file(\"C:/R/library/MantelScript.R\")\ncode = ''.join(f.readlines())\nresult = rpy2.robjects.r(code)\n# assume that MantelScript creates a... | [
5,
2,
2,
0
] | [] | [] | [
"os.system",
"python",
"r",
"rpy2",
"subprocess"
] | stackoverflow_0003339147_os.system_python_r_rpy2_subprocess.txt |
Q:
Provide Global Access Point to an Instance of an Object
Imagine a system (Python) where the different parts constantly interact with one instance of a given object. What is the best way to provide a global access point to this instance?
So far I can only think of building the (Singleton) instance in __init__.py and import the module as needed:
# __init__.py
class Thing(object, Singleton):
    pass
TheThing = Thing()
__all__ = ['TheThing']
Is there a better way to provide a global access point to TheThing?
Thanks,
J.
A:
Don't use singletons in Python. Python modules are great singletons (they are initialized only once and are available everywhere), and you can have a global variable in one if you need it.
Here is explanation: Is there a simple, elegant way to define singletons?
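A tiny illustration of the module-as-singleton idea. The module is created programmatically here only so the example is self-contained; in a real project, "shared.py" would simply be a file on disk containing the module-level instance.

```python
import sys
import types

# Stand-in for a real file "shared.py" containing:  the_thing = {"count": 0}
shared = types.ModuleType("shared")
shared.the_thing = {"count": 0}   # the module-level instance: the "singleton"
sys.modules["shared"] = shared

# Every part of the program that imports the module gets the same object back.
import shared as a
import shared as b
a.the_thing["count"] += 1
print(b.the_thing is a.the_thing, b.the_thing["count"])  # True 1
```

Because Python caches modules in sys.modules after the first import, all importers share one instance with no Singleton class machinery.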
| Provide Global Access Point to an Instance of an Object | Imagine a system (Python) where the different parts constantly interact with one instance of a given object. What is the best way to provide a global access point to this instance?
So far I can only think of building the (Singleton) instance in __init__.py and import the module as needed:
# __init__.py
class Thing(object, Singleton):
    pass
TheThing = Thing()
__all__ = ['TheThing']
Is there a better way to provide a global access point to TheThing?
Thanks,
J.
| [
"Don't use singletons in python. Python modules are great singletons (they are initialized only once and are available everywhere) and you can have a global variable in one, if you need it.\nHere is explanation: Is there a simple, elegant way to define singletons?\n"
] | [
6
] | [] | [] | [
"global_variables",
"python"
] | stackoverflow_0003355458_global_variables_python.txt |
Q:
Python MySQLdb cursor truncating values
I ran into an interesting problem. I have a MySQL database that contains some doubles with very precise decimal values (for example, 0.00895406607247756, 17 decimal places). This is scientific data so this high level of precision is very important.
I'm using MySQLdb in Python to select data from the database:
cursor.execute("SELECT * FROM " ...etc...)
for n in range(cursor.rowcount):
    row = cursor.fetchone()
    print row
For some reason, when it gets to my very precise decimals, they've been truncated to a maximum of 14 decimal places. (i.e. the previous decimal becomes 0.00895406607248)
Is there any way to get the data from MySQLdb in its original form without truncating?
A:
What MySQL datatype are you using to store the data? Is it DECIMAL(18,17)? DECIMALs have up to 65 digits of precision.
If you set the MySQL data type to use DECIMAL(...), then MySQLdb will convert the data to a Python decimal.Decimal object, which should preserve the precision.
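The precision-preserving behaviour of decimal.Decimal is easy to check with the standard library alone (no database involved here; the value is the one from the question):

```python
from decimal import Decimal

# MySQLdb hands DECIMAL column values back as decimal.Decimal objects like this.
value = Decimal("0.00895406607247756")

# Every one of the 17 decimal places survives exactly.
print(value)  # 0.00895406607247756
print(len(str(value).split(".")[1]))  # 17
```

A DOUBLE column, by contrast, goes through a binary float on the way out, which is where rounding like 0.00895406607248 can creep in.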
| Python MySQLdb cursor truncating values | I ran into an interesting problem. I have a MySQL database that contains some doubles with very precise decimal values (for example, 0.00895406607247756, 17 decimal places). This is scientific data so this high level of precision is very important.
I'm using MySQLdb in Python to select data from the database:
cursor.execute("SELECT * FROM " ...etc...)
for n in range(cursor.rowcount):
    row = cursor.fetchone()
    print row
For some reason, when it gets to my very precise decimals, they've been truncated to a maximum of 14 decimal places. (i.e. the previous decimal becomes 0.00895406607248)
Is there any way to get the data from MySQLdb in its original form without truncating?
| [
"What MySQL datatype are you using to store the data? Is it DECIMAL(18,17)? DECIMALs have up to 65 digits of precision.\nIf you set the MySQL data type to use DECIMAL(...), then MySQLdb will convert the data to a Python decimal.Decimal object, which should preserve the precision.\n"
] | [
3
] | [] | [] | [
"decimal",
"floating_point",
"mysql",
"python"
] | stackoverflow_0003355353_decimal_floating_point_mysql_python.txt |
Q:
Creating a C wrapper with Cython - Python
I've been trying to figure out how to wrap the C functions in the following files: compress.c, compress.h.
I tried following the tutorials, but after creating the .pxd file I don't know what to do :|
From what I have understood this is the pxd file that I should have
cdef extern from "compress.h":
    size_t compress(void *s_start, void *d_start, size_t s_len)
    size_t decompress(void *s_start, void *d_start, size_t s_len, size_t d_len)
    uint32_t checksum32(void *cp_arg, size_t length)
After this I have no clue what to do :|
Help would be great guys! =)
Edit:
Getting this error
~/essentials/pylzjb# gcc -I /usr/include/python2.6 -shared -o pylzjb.so pylzjb.c compress.c
/usr/bin/ld: /tmp/ccQLZTaG.o: relocation R_X86_64_32S against `_Py_NoneStruct'
can not be used when making a shared object; recompile with -fPIC
/tmp/ccQLZTaG.o: could not read symbols: Bad value
collect2: ld returned 1 exit status
thanks anyway =)
A:
write a .pyx file that implements a wrapper calling the C functions?
I think the toughest part might be buffer handling...
pylzjb.pyx could look as follows (note that your .pxd is inlined):
cdef extern from "compress.h":
    size_t compress(void *s_start, void *d_start, size_t s_len)

from stdlib cimport *

def cmpr(bytes s):
    cdef size_t n = len(s)
    cdef unsigned char *dst = <unsigned char *> malloc(n * sizeof(unsigned char))
    try:
        m = compress(<unsigned char *> s, dst, n)
        ret = [dst[i] for i from 0 <= i < m]
    finally:
        free(dst)
    return ret
compile with:
cython -I. pylzjb.pyx
gcc -fPIC -I /usr/include/python2.5 -shared -o pylzjb.so pylzjb.c compress.c
and try to use with
import pylzjb
pylzjb.cmpr('asfd')
A:
the Google code project pylzjb seems to implement a Python interface for compress.c|h already?
| Creating a C wrapper with Cython - Python | I've been trying to figure out how to wrap the C functions in the following files: compress.c, compress.h.
I tried following the tutorials, but after creating the .pxd file I don't know what to do :|
From what I have understood this is the pxd file that I should have
cdef extern from "compress.h":
    size_t compress(void *s_start, void *d_start, size_t s_len)
    size_t decompress(void *s_start, void *d_start, size_t s_len, size_t d_len)
    uint32_t checksum32(void *cp_arg, size_t length)
After this I have no clue what to do :|
Help would be great guys! =)
Edit:
Getting this error
~/essentials/pylzjb# gcc -I /usr/include/python2.6 -shared -o pylzjb.so pylzjb.c compress.c
/usr/bin/ld: /tmp/ccQLZTaG.o: relocation R_X86_64_32S against `_Py_NoneStruct'
can not be used when making a shared object; recompile with -fPIC
/tmp/ccQLZTaG.o: could not read symbols: Bad value
collect2: ld returned 1 exit status
thanks anyway =)
| [
"write a .pyx file that implements a wrapper calling the C functions?\nI think the toughest part might be buffer handling...\npylzjb.pyx could look as follows (note that your .pxd is inlined): \ncdef extern from \"compress.h\":\n size_t compress(void *s_start, void *d_start, size_t s_len)\n\nfrom stdlib cimport ... | [
3,
1
] | [] | [] | [
"boost_python",
"c",
"cython",
"python",
"ubuntu"
] | stackoverflow_0003352285_boost_python_c_cython_python_ubuntu.txt |
Q:
FTP upload file works manually, but fails using Python ftplib
I installed vsFTP on a Debian box. When I manually upload a file using the ftp command, it's OK; i.e., the following session works:
john@myhost:~$ ftp xxx.xxx.xxx.xxx 5111
Connected to xxx.xxx.xxx.xxx.
220 Hello,Welcom to my FTP server.
Name (xxx.xxx.xxx.xxx:john): ftpuser
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> put st.zip
local: st.zip remote: st.zip
200 PORT command successful. Consider using PASV.
150 Ok to send data.
226 File receive OK.
12773 bytes sent in 0.00 secs (277191.8 kB/s)
ftp> 221 Goodbye.
(Please note that, as above, I configured the vsFTP server to use a non-default port, e.g. 5111, for some reason)
Now when I write a script in python to upload file programmatically, it failed. the error says 'time out', as the following session shows:
john@myhost:~$ ipython
Python 2.5.2 (r252:60911, Jan 24 2010, 14:53:14)
Type "copyright", "credits" or "license" for more information.
IPython 0.8.4 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object'. ?object also works, ?? prints more.
In [1]: import ftplib
In [2]: ftp=ftplib.FTP()
In [3]: ftp.connect('xxx.xxx.xxx.xxx','5111')
Out[3]: "220 Hello,Welcom to my FTP server."
In [4]: ftp.login('ftpuser','ftpuser')
Out[4]: '230 Login successful.'
In [5]: f=open('st.zip','rb')
In [6]: ftp.storbinary('STOR %s' % 'my_ftp_file.zip', f)
---------------------------------------------------------------------------
error Traceback (most recent call last)
...
/usr/lib/python2.5/ftplib.pyc in ntransfercmd(self, cmd, rest)
322 af, socktype, proto, canon, sa = socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM)[0]
323 conn = socket.socket(af, socktype, proto)
--> 324 conn.connect(sa)
325 if rest is not None:
326 self.sendcmd("REST %s" % rest)
/usr/lib/python2.5/socket.pyc in connect(self, *args)
error: (110, 'Connection timed out')
I guess there is some wrong config in my vsFTP server, but I cannot figure it out. Can anyone help?
my vsFTP config is:
listen=YES
connect_from_port_20=YES
listen_port=5111
ftp_data_port=5110
# Passive FTP mode allowed
pasv_enable=YES
pasv_min_port=5300
pasv_max_port=5400
max_per_ip=2
A:
The timeout doesn't happen until you try to send the data, so you were able to connect to the server successfully. The only difference I see is that ftplib uses passive mode by default, whereas your command-line client does not appear to. Try doing
ftp.set_pasv(False)
before initiating the transfer and see what happens.
Note that non-passive mode is essentially obsolete because it cannot be used across NAT firewalls, so you should probably configure vsFTP to allow passive mode.
| FTP upload file works manually, but fails using Python ftplib | I installed vsFTP on a Debian box. When I manually upload a file using the ftp command, it's OK; i.e., the following session works:
john@myhost:~$ ftp xxx.xxx.xxx.xxx 5111
Connected to xxx.xxx.xxx.xxx.
220 Hello,Welcom to my FTP server.
Name (xxx.xxx.xxx.xxx:john): ftpuser
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> put st.zip
local: st.zip remote: st.zip
200 PORT command successful. Consider using PASV.
150 Ok to send data.
226 File receive OK.
12773 bytes sent in 0.00 secs (277191.8 kB/s)
ftp> 221 Goodbye.
(Please note that, as shown above, I configured the vsFTP server to use a non-default port, e.g. 5111, for a reason)
Now when I write a script in Python to upload the file programmatically, it fails. The error says 'timed out', as the following session shows:
john@myhost:~$ ipython
Python 2.5.2 (r252:60911, Jan 24 2010, 14:53:14)
Type "copyright", "credits" or "license" for more information.
IPython 0.8.4 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object'. ?object also works, ?? prints more.
In [1]: import ftplib
In [2]: ftp=ftplib.FTP()
In [3]: ftp.connect('xxx.xxx.xxx.xxx','5111')
Out[3]: "220 Hello,Welcom to my FTP server."
In [4]: ftp.login('ftpuser','ftpuser')
Out[4]: '230 Login successful.'
In [5]: f=open('st.zip','rb')
In [6]: ftp.storbinary('STOR %s' % 'my_ftp_file.zip', f)
---------------------------------------------------------------------------
error Traceback (most recent call last)
...
/usr/lib/python2.5/ftplib.pyc in ntransfercmd(self, cmd, rest)
322 af, socktype, proto, canon, sa = socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM)[0]
323 conn = socket.socket(af, socktype, proto)
--> 324 conn.connect(sa)
325 if rest is not None:
326 self.sendcmd("REST %s" % rest)
/usr/lib/python2.5/socket.pyc in connect(self, *args)
error: (110, 'Connection timed out')
I guess there is some wrong configuration in my vsFTP server, but I cannot figure it out. Can anyone help?
my vsFTP config is:
listen=YES
connect_from_port_20=YES
listen_port=5111
ftp_data_port=5110
# Passive FTP mode allowed
pasv_enable=YES
pasv_min_port=5300
pasv_max_port=5400
max_per_ip=2
| [
"The timeout doesn't happen until you try to send the data, so you were able to connect to the server successfully. The only difference I see is that ftplib uses passive mode by default, whereas your command-line client does not appear to. Try doing \nftp.set_pasv(False)\n\nbefore initiating the transfer and see ... | [
7
] | [] | [] | [
"ftp",
"python"
] | stackoverflow_0003349722_ftp_python.txt |
Q:
How to extend pretty print module to tables?
I have a pretty print module, which I wrote because I was not happy that the pprint module produced a zillion lines for a list of numbers containing one list of lists. Here is an example use of my module.
>>> a=range(10)
>>> a.insert(5,[range(i) for i in range(10)])
>>> a
[0, 1, 2, 3, 4, [[], [0], [0, 1], [0, 1, 2], [0, 1, 2, 3], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5, 6], [0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7, 8]], 5, 6, 7, 8, 9]
>>> import pretty
>>> pretty.ppr(a,indent=6)
[0, 1, 2, 3, 4,
[
[],
[0],
[0, 1],
[0, 1, 2],
[0, 1, 2, 3],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4, 5],
[0, 1, 2, 3, 4, 5, 6],
[0, 1, 2, 3, 4, 5, 6, 7],
[0, 1, 2, 3, 4, 5, 6, 7, 8]], 5, 6, 7, 8, 9]
Code is like this:
""" pretty.py prettyprint module version alpha 0.2
mypr: pretty string function
ppr: print of the pretty string
ONLY list and tuple prettying implemented!
"""
def mypr(w, i = 0, indent = 2, nl = '\n') :
""" w = datastructure, i = indent level, indent = step size for indention """
startend = {list : '[]', tuple : '()'}
if type(w) in (list, tuple) :
start, end = startend[type(w)]
pr = [mypr(j, i + indent, indent, nl) for j in w]
return nl + ' ' * i + start + ', '.join(pr) + end
else : return repr(w)
def ppr(w, i = 0, indent = 2, nl = '\n') :
""" see mypr, this is only print of mypr with same parameters """
print mypr(w, i, indent, nl)
Here is one fixed-width text example of table printing for my pretty print module:
## let's do it "manually"
width = len(str(10+10))
widthformat = '%'+str(width)+'i'
for i in range(10):
for j in range(10):
print widthformat % (i+j),
print
Do you have a better alternative for this code, generalized enough for the pretty printing module?
What I found for this kind of regular case after posting the question is the prettytable module: a simple Python library for easily displaying tabular data in a visually appealing ASCII table format.
A:
If you're looking for nice formatting for matrices, numpy's output looks great right out of the box:
from numpy import *
print array([[i + j for i in range(10)] for j in range(10)])
Output:
[[ 0 1 2 3 4 5 6 7 8 9]
[ 1 2 3 4 5 6 7 8 9 10]
[ 2 3 4 5 6 7 8 9 10 11]
[ 3 4 5 6 7 8 9 10 11 12]
[ 4 5 6 7 8 9 10 11 12 13]
[ 5 6 7 8 9 10 11 12 13 14]
[ 6 7 8 9 10 11 12 13 14 15]
[ 7 8 9 10 11 12 13 14 15 16]
[ 8 9 10 11 12 13 14 15 16 17]
[ 9 10 11 12 13 14 15 16 17 18]]
A:
You can write:
'\n'.join( # join the lines with '\n'
' '.join( # join one line with ' '
"%2d" % (i + j) # format each item
for i in range(10))
for j in range(10))
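The join idea above can be generalized into a small helper that computes the cell width from the data itself. This is a sketch, not part of the original module:

```python
def table(rows, sep=' '):
    # Right-justify every cell to the width of the widest cell.
    cells = [[str(item) for item in row] for row in rows]
    width = max(len(c) for row in cells for c in row)
    return '\n'.join(sep.join(c.rjust(width) for c in row) for row in cells)

print(table([[i + j for i in range(10)] for j in range(10)]))
```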
A:
Using George Sakkis' table indention recipe:
print(indent(((i + j for i in range(10)) for j in range(10)),
delim=' ', justify='right'))
yields:
0 1 2 3 4 5 6 7 8 9
1 2 3 4 5 6 7 8 9 10
2 3 4 5 6 7 8 9 10 11
3 4 5 6 7 8 9 10 11 12
4 5 6 7 8 9 10 11 12 13
5 6 7 8 9 10 11 12 13 14
6 7 8 9 10 11 12 13 14 15
7 8 9 10 11 12 13 14 15 16
8 9 10 11 12 13 14 15 16 17
9 10 11 12 13 14 15 16 17 18
PS. To get the above to work, I made one minor change to the recipe. I changed wrapfunc(item) to wrapfunc(str(item)):
def rowWrapper(row):
newRows = [wrapfunc(str(item)).split('\n') for item in row]
A:
My answer for this kind of regular case would be to use this module:
prettytable
A simple Python library for easily displaying tabular data in a visually appealing ASCII table format
| How to extend pretty print module to tables? | I have a pretty print module, which I wrote because I was not happy that the pprint module produced a zillion lines for a list of numbers containing one list of lists. Here is an example use of my module.
>>> a=range(10)
>>> a.insert(5,[range(i) for i in range(10)])
>>> a
[0, 1, 2, 3, 4, [[], [0], [0, 1], [0, 1, 2], [0, 1, 2, 3], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5, 6], [0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7, 8]], 5, 6, 7, 8, 9]
>>> import pretty
>>> pretty.ppr(a,indent=6)
[0, 1, 2, 3, 4,
[
[],
[0],
[0, 1],
[0, 1, 2],
[0, 1, 2, 3],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4, 5],
[0, 1, 2, 3, 4, 5, 6],
[0, 1, 2, 3, 4, 5, 6, 7],
[0, 1, 2, 3, 4, 5, 6, 7, 8]], 5, 6, 7, 8, 9]
Code is like this:
""" pretty.py prettyprint module version alpha 0.2
mypr: pretty string function
ppr: print of the pretty string
ONLY list and tuple prettying implemented!
"""
def mypr(w, i = 0, indent = 2, nl = '\n') :
""" w = datastructure, i = indent level, indent = step size for indention """
startend = {list : '[]', tuple : '()'}
if type(w) in (list, tuple) :
start, end = startend[type(w)]
pr = [mypr(j, i + indent, indent, nl) for j in w]
return nl + ' ' * i + start + ', '.join(pr) + end
else : return repr(w)
def ppr(w, i = 0, indent = 2, nl = '\n') :
""" see mypr, this is only print of mypr with same parameters """
print mypr(w, i, indent, nl)
Here is one fixed-width text example of table printing for my pretty print module:
## let's do it "manually"
width = len(str(10+10))
widthformat = '%'+str(width)+'i'
for i in range(10):
for j in range(10):
print widthformat % (i+j),
print
Do you have a better alternative for this code, generalized enough for the pretty printing module?
What I found for this kind of regular case after posting the question is the prettytable module: a simple Python library for easily displaying tabular data in a visually appealing ASCII table format.
| [
"If you're looking for nice formatting for matrices, numpy's output looks great right out of the box:\nfrom numpy import *\nprint array([[i + j for i in range(10)] for j in range(10)])\n\nOutput:\n[[ 0 1 2 3 4 5 6 7 8 9]\n [ 1 2 3 4 5 6 7 8 9 10]\n [ 2 3 4 5 6 7 8 9 10 11]\n [ 3 4 5 6 7 ... | [
7,
3,
1,
1
] | [] | [] | [
"formatting",
"generator",
"python"
] | stackoverflow_0003319540_formatting_generator_python.txt |
Q:
python: getting element in a list
def do_work():
medications_subset2(b,['HYDROCODONE','MORPHINE','OXYCODONE'])
def medications_subset2(b,drugs_needed):
MORPHINE=['ASTRAMORPH','AVINZA','CONTIN','DURAMORPH','INFUMORPH',
'KADIAN','MS CONTIN','MSER','MSIR','ORAMORPH',
'ORAMORPH SR','ROXANOL','ROXANOL 100']
print drugs_needed[1][0]
How do I print ASTRAMORPH (the first element in MORPHINE)?
I NEED to make use of drugs_needed, since it is being passed in from do_work.
A:
Can you define MORPHINE this way?
drugs = {
'MORPHINE': ['ASTRAMORPH',...],
'HYDROCODONE': [...],
...
}
then you can refer to it by
print ( drugs[drugs_needed[1]][0] )
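A runnable sketch of that lookup, with the brand-name lists shortened for illustration:

```python
drugs = {
    'MORPHINE': ['ASTRAMORPH', 'AVINZA', 'CONTIN'],
    'HYDROCODONE': ['ANEXSIA', 'VICODIN'],
    'OXYCODONE': ['OXYCONTIN', 'PERCOCET'],
}
drugs_needed = ['HYDROCODONE', 'MORPHINE', 'OXYCODONE']
# drugs_needed[1] is 'MORPHINE'; index [0] picks its first brand name.
print(drugs[drugs_needed[1]][0])  # ASTRAMORPH
```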
| python: getting element in a list | def do_work():
medications_subset2(b,['HYDROCODONE','MORPHINE','OXYCODONE'])
def medications_subset2(b,drugs_needed):
MORPHINE=['ASTRAMORPH','AVINZA','CONTIN','DURAMORPH','INFUMORPH',
'KADIAN','MS CONTIN','MSER','MSIR','ORAMORPH',
'ORAMORPH SR','ROXANOL','ROXANOL 100']
print drugs_needed[1][0]
How do I print ASTRAMORPH (the first element in MORPHINE)?
I NEED to make use of drugs_needed, since it is being passed in from do_work.
| [
"Can you define MORPHINE this way?\ndrugs = {\n 'MORPHINE': ['ASTRAMORPH',...],\n 'HYDROCODONE': [...],\n ...\n}\n\nthen you can refer to it by\nprint ( drugs[drugs_needed[1]][0] )\n\n"
] | [
4
] | [] | [] | [
"python"
] | stackoverflow_0003355745_python.txt |
Q:
Automatically passing extra attributes to Widget
I have a custom model field that can take a 'chain' argument.
from django.db import models
class ChainField(object):
def __init__(self, *args, **kwargs):
chain = kwargs.get('chain', False)
if chain:
self.chain = chain
del kwargs['chain']
super(self.__class__.__mro__[2], self).__init__(*args, **kwargs)
class DateTimeField(ChainField, models.DateTimeField):
pass
And now the question: how can I automatically pass the 'chain' argument of the model field to the widget class when initializing the ModelForm? I need it to become a 'class="chainxxx"' attribute of the form field in the HTML.
A:
Override __init__ of the ModelForm like this:
class MyClass(ModelForm):
def __init__(self, *args, **kwargs):
super(MyClass, self).__init__(*args, **kwargs)
chain_value = self.fields['name_of_the_field'].chain
self.fields['name_of_the_field'].widget = CustomWidget(chain=chain_value)
| Automatically passing extra attributes to Widget | I have a custom model field that can take a 'chain' argument.
from django.db import models
class ChainField(object):
def __init__(self, *args, **kwargs):
chain = kwargs.get('chain', False)
if chain:
self.chain = chain
del kwargs['chain']
super(self.__class__.__mro__[2], self).__init__(*args, **kwargs)
class DateTimeField(ChainField, models.DateTimeField):
pass
And now the question: how can I automatically pass the 'chain' argument of the model field to the widget class when initializing the ModelForm? I need it to become a 'class="chainxxx"' attribute of the form field in the HTML.
| [
"Override __init__ of the ModelForm like this:\nclass MyClass(ModelForm):\n def __init__(self, *args, **kwargs):\n super(MyClass, self).__init__(*args, **kwargs)\n\n chain_value = self.fields['name_of_the_field'].chain\n self.fields['name_of_the_field'].widget = CustomWidget(chain=chain_valu... | [
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003350958_django_python.txt |
Q:
Module "mymodule" does not define a "MyBackend" authentication backend
I'm trying to use a custom authentication backend for a Django project I'm working on. My backend is based on the LDAPBackend found in the article LDAP Authentication in Django with Backends.
I'm getting the following error when I attempt to log in:
ImproperlyConfigured at /admin/
Module "challenge.backends" does not define a "LDAPBackend" authentication backend
My project is called "challenge". There is a subdirectory, "backends", which contains __init__.py and LDAPBackend.py.
My settings.py is configured to use this backend thusly:
AUTHENTICATION_BACKENDS = (
'challenge.backends.LDAPBackend',
'django.contrib.auth.backends.ModelBackend',
)
I am able to import the module myself using python manage.py shell and then from challenge.backends import LDAPBackend.
I'm not sure what to check now since everything appears to be in the right place.
A:
You are importing it in the wrong way: you are importing a module rather than a class. That's why the shell allows you to import it, but Django complains.
You should use challenge.backends.LDAPBackend.LDAPBackend.
Also, it's a good idea to stick with PEP 8 when naming modules; that way you won't be confused like this again. Modules should be named all in lowercase and without spaces, underscores, etc.
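If the class stays in a module of the same name, the corrected setting would look like the sketch below (alternatively, rename the module, e.g. to backends/ldap.py, and reference challenge.backends.ldap.LDAPBackend):

```python
# Each entry is a dotted path of package.module.ClassName, not just the module.
AUTHENTICATION_BACKENDS = (
    'challenge.backends.LDAPBackend.LDAPBackend',
    'django.contrib.auth.backends.ModelBackend',
)
```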
| Module "mymodule" does not define a "MyBackend" authentication backend | I'm trying to use a custom authentication backend for a Django project I'm working on. My backend is based on the LDAPBackend found in the article LDAP Authentication in Django with Backends.
I'm getting the following error when I attempt to log in:
ImproperlyConfigured at /admin/
Module "challenge.backends" does not define a "LDAPBackend" authentication backend
My project is called "challenge". There is a subdirectory, "backends", which contains __init__.py and LDAPBackend.py.
My settings.py is configured to use this backend thusly:
AUTHENTICATION_BACKENDS = (
'challenge.backends.LDAPBackend',
'django.contrib.auth.backends.ModelBackend',
)
I am able to import the module myself using python manage.py shell and then from challenge.backends import LDAPBackend.
I'm not sure what to check now since everything appears to be in the right place.
| [
"You are importing it in wrong way. You are importing a module, rather than a class. That's why shell allows you to import it, but django complains.\nYou should use challenge.backends.LDAPBackend.LDAPBackend.\nAlso, it's a good idea to stick with PEP8 when naming modules, this way you won't be confused that way aga... | [
6
] | [] | [] | [
"authentication",
"django",
"python"
] | stackoverflow_0003355492_authentication_django_python.txt |
Q:
Fork and exit in Python
This code is supposed to try and start a server process and return.
If the port was taken, it should say "couldn't bind to that port" and return. If the server started, it should print "Bound to port 51231" and return. But it does not return.
import socket
import sys
from multiprocessing import Process
def serverMainLoop(s,t):
s.listen(5)
while 1:
pass # server goes here
host = ''
port = 51231
so = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
so.bind((host,port))
print "Bound to port %i"%port
    serverProcess = Process(target=serverMainLoop, args=(so,60))
serverProcess.start()
sys.exit()
except socket.error, (value,message):
if value==98:
print "couldn't bind to that port"
sys.exit()
Is there some switch I can throw that will make multiprocessing let me do this?
A:
Check this page; it describes how to use os.fork() and os._exit(1) to build a daemon that forks to the background.
A prototype of what you perhaps want would be:
pid = os.fork()
if (pid == 0): # The first child.
os.chdir("/")
os.setsid()
os.umask(0)
pid2 = os.fork()
if (pid2 == 0): # Second child
# YOUR CODE HERE
else:
        sys.exit() # First child exits
else: # Parent code
    sys.exit() # Parent exits
For the reason why to fork twice, see this question (short version: it's needed to orphan the child and make it a child of the init process).
A:
To do what you describe, I wouldn't use multiprocessing; I'd look into writing a daemon.
A:
As an alternative to writing a daemon, just write your program as a console process (testing it as such), and use a service management application like supervisord to run it as a service. Supervisord also does much more than just run your program as a service: it will monitor, restart, log, report status and status changes, and even provide a minimal web interface to manage your process.
| Fork and exit in Python | This code is supposed to try and start a server process and return.
If the port was taken, it should say "couldn't bind to that port" and return. If the server started, it should print "Bound to port 51231" and return. But it does not return.
import socket
import sys
from multiprocessing import Process
def serverMainLoop(s,t):
s.listen(5)
while 1:
pass # server goes here
host = ''
port = 51231
so = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
so.bind((host,port))
print "Bound to port %i"%port
    serverProcess = Process(target=serverMainLoop, args=(so,60))
serverProcess.start()
sys.exit()
except socket.error, (value,message):
if value==98:
print "couldn't bind to that port"
sys.exit()
Is there some switch I can throw that will make multiprocessing let me do this?
| [
"Check this page, it describes how to use os.fork() and os._exit(1) to build a daemon which forks to background.\nA prototype of what you perhaps want would be:\npid = os.fork()\nif (pid == 0): # The first child.\n os.chdir(\"/\")\n os.setsid()\n os.umask(0) \n pid2 = os.fork() \n if (pid2 == 0): # Secon... | [
4,
1,
0
] | [] | [] | [
"fork",
"python"
] | stackoverflow_0003355995_fork_python.txt |
Q:
Positionally matching substrings in Python
How would you parse the ['i386', 'x86_64'] out of a string like '-foo 23 -bar -arch ppc -arch i386 -isysroot / -fno-strict-aliasing -fPIC'?
>>> my_arch_parse_function('-foo 23 -bar -arch i386 -arch x86_64 -isysroot / -fno-strict-aliasing -fPIC')
>>> ['i386', 'x86_64']
Can this be done using regex, or only using modules like PyParsing, or manually splitting and iterating over the splits?
Assumption: -arch VAL are grouped together.
A:
Why not use the argument parsing modules? optparse in Python 2.6 (and 3.1) and argparse in Python 2.7 (and 3.2).
EDIT: On second thought, that's not as simple as it sounds, because you may have to define all the arguments you are likely to see (not sure if these modules have a catchall mechanism). I'll leave the answer here because might work, but take it with a grain of salt.
A:
Regex: (?<=-arch )[^ ]+
>>> re.findall( r"(?<=-arch )([^ ]+)", r"'-foo 23 -bar -arch ppc -arch i386 -isysroot -fno-strict-aliasing -fPIC'" )
['ppc', 'i386']
Arbitrary whitespace
>>> foo = re.compile( r"(?<=-arch)\s+[^\s]+" )
>>> [ str.strip() for str in re.findall( foo, r"'-foo 23 -bar -arch ppc -arch i386 -isysroot -fno-strict-aliasing -fPIC'" ) ]
['ppc', 'i386']
P.S. There's no x86_64 in that string, and are you trying to differentiate between -arch ppc and -arch i386?
A:
Would you consider a non-regex solution? Simpler:
>>> def my_arch_parse_function(s):
... args = s.split()
... idxs = (i+1 for i,v in enumerate(args) if v == '-arch')
... return [args[i] for i in idxs]
...
...
>>> s='-foo 23 -bar -arch ppc -arch i386 -isysroot / -fno-strict-aliasing -fPIC'
>>> my_arch_parse_function(s)
['ppc', 'i386']
A:
Answering my own question, I found a regex via this tool:
>>> regex = re.compile("(?P<key>\-arch\s?)(?P<value>[^\s]+?)\s|$")
>>> r = regex.search(string)
>>> r
<_sre.SRE_Match object at 0x8aa59232ae397b10>
>>> regex.match(string)
None
# List the groups found
>>> r.groups()
(u'-arch ', u'ppc')
# List the named dictionary objects found
>>> r.groupdict()
{u'key': u'-arch ', u'value': u'ppc'}
# Run findall
>>> regex.findall(string)
[(u'-arch ', u'ppc'), (u'-arch ', u'i386'), (u'', u'')]
A:
Try this if you want regex:
arch_regex = re.compile('\s+('+'|'.join(arch_list)+')\s+',re.I)
results = arch_regex.findall(arg_string)
A little too much regex for my taste, but it works. For future reference, it is better to use optparse for command line option parsing.
A:
Hand-made with Python2.6
I am sure that you or a library can do a better job.
inp = '-foo 23 -bar -arch ppc -arch i386 -isysroot / -fno-strict-aliasing -fPIC'.split()
dct = {}
noneSet = set([None])
flagName = None
values = []
for param in inp:
if param.startswith('-'):
flagName = param
if flagName not in dct:
dct[flagName] = set()
dct[flagName].add(None)
continue
# Else found a value
dct[flagName].add(param)
print(dct)
result = sorted(dct['-arch'] - noneSet)
print(result)
>>> ================================ RESTART ================================
>>>
{'-arch': set(['ppc', 'i386', None]), '-isysroot': set([None, '/']), '-fno-strict-aliasing': set([None]), '-fPIC': set([None]), '-foo': set([None, '23']), '-bar': set([None])}
['i386', 'ppc']
>>>
| Positionally matching substrings in Python | How would you parse the ['i386', 'x86_64'] out of a string like '-foo 23 -bar -arch ppc -arch i386 -isysroot / -fno-strict-aliasing -fPIC'?
>>> my_arch_parse_function('-foo 23 -bar -arch i386 -arch x86_64 -isysroot / -fno-strict-aliasing -fPIC')
>>> ['i386', 'x86_64']
Can this be done using regex, or only using modules like PyParsing, or manually splitting and iterating over the splits?
Assumption: -arch VAL are grouped together.
| [
"Why not use the argument parsing modules? optparse in Python 2.6 (and 3.1) and argparse in Python 2.7 (and 3.2).\nEDIT: On second thought, that's not as simple as it sounds, because you may have to define all the arguments you are likely to see (not sure if these modules have a catchall mechanism). I'll leave the ... | [
4,
3,
2,
0,
0,
0
] | [] | [] | [
"parsing",
"python",
"regex"
] | stackoverflow_0003356038_parsing_python_regex.txt |
Q:
python: complicated loop through list
import csv
import collections
def do_work():
(data,counter)=get_file('thefile.csv')
b=samples_subset1(data,counter,'/pythonwork/samples_subset4.csv',500)
medications_subset2(b,['HYDROCODONE','MORPHINE','OXYCODONE'])
def get_file(start_file):
with open(start_file,'rb') as f:
data=list(csv.reader(f))
counter=collections.defaultdict(int)
for row in data:
counter[row[10]]+=1
return (data,counter)
def samples_subset1(data,counter,output_file,sample_cutoff):
with open(output_file,'wb') as outfile:
writer=csv.writer(outfile)
b_counter=0
b=[]
for row in data:
if counter[row[10]]>=sample_cutoff:
b.append(row)
writer.writerow(row)
b_counter+=1
return b
def medications_subset2(b,drug_input):
brand_names={'MORPHINE':['ASTRAMORPH','AVINZA','CONTIN','DURAMORPH','INFUMORPH',
'KADIAN','MS CONTIN','MSER','MSIR','ORAMORPH',
'ORAMORPH SR','ROXANOL','ROXANOL 100'],
'OXYCODONE':['COMBUNOX','DIHYDRONE','DINARCON','ENDOCET','ENDODAN',
'EUBINE','EUCODAL','EUKODAL','EUTAGEN','OXYCODONE WITH ACETAMINOPHEN CAPSULES',
'OXYCODONE WITH ASPIRIN,','OXYCONTIN','OXYDOSE','OXYFAST','OXYIR',
'PANCODINE','PERCOCET','PERCODAN','PROLADONE','ROXICET',
'ROXICODONE','ROXIPRIM','ROXIPRIN','TECODIN','TEKODIN',
'THECODIN','THEKOKIN','TYLOX'],
'OXYMORPHONE':['NUMORPHAN','OPANA','OPANA ER'],
'METHADONE':['ALGIDON','ALGOLYSIN','AMIDON','DEPRIDOL','DOLOPHINE','FENADONE',
'METHADOSE','MIADONE','PHENADONE'],
'BUPRENORPHINE':['BUPRENEX','LEPTAN','SUBOXONE','SUBUTEX','TEMGESIC'],
'HYDROMORPHONE':['DILAUDID','HYDAL','HYDROMORFAN','HYDROMORPHAN','HYDROSTAT',
'HYMORPHAN','LAUDICON','NOVOLAUDON','OPIDOL','PALLADONE',
'PALLADONE IR','PALLADONE SR'],
'CODEINE':['ACETAMINOPHEN WITH CODEINE','ASPIRIN WITH CODEINE','EMPIRIN WITH CODEINE',
                     'FLORINAL WITH CODEINE','TYLENOL 3','TYLENOL 4','TYLENOL 5'],
'HYDROCODONE':['ANEXSIA','BEKADID','CO-GESIC','CODAL-DH','CODICLEAR-DH',
'CODIMAL-DH','CODINOVO','CONATUSSIN-DC','CYNDAL-HD','CYTUSS-HC',
'DETUSSIN','DICODID','DUODIN','DURATUSS-HD','ENDAL-HC','ENTUSS',
'ENTUSS-D','G-TUSS','HISTINEX-D','HISTINEX-HC','HISTUSSIN-D','HISTUSSIN-HC',
'HYCET','HYCODAN','HYCOMINE','HYDROCODONE/APAP','HYDROKON',
'HYDROMET','HYDROVO','KOLIKODOL','LORCET','LORTAB',
'MERCODINONE','NOROCO','NORGAN','NOVAHISTEX','ORTHOXYCOL',
'POLYGESIC','STAGESIC','SYMTAN','SYNKONIN','TUSSIONEX','VICODIN',
'VICOPROFEN','XODOL','ZYDONE']}
...
...
Let's say drug_input = 'METHADONE'.
I need to be able to go through the b array and delete every row that DOES NOT have ANY ONE of these:
['ALGIDON','ALGOLYSIN','AMIDON','DEPRIDOL','DOLOPHINE','FENADONE',
'METHADOSE','MIADONE','PHENADONE']
For example, if b[1] = "yes,no,yes,amidon,blah" then do nothing,
but
if b[2] = "yes,yes,yes,vicodin,yes" then DELETE this record.
A:
I didn't really read your code paragraph, but from the problem you described afterwards it sounds like you want:
needed = set(['ALGIDON','ALGOLYSIN','AMIDON','DEPRIDOL','DOLOPHINE','FENADONE', 'METHADOSE','MIADONE','PHENADONE'])
b = filter(lambda s: len(set(s.upper().split(',')) & needed) > 0, b)
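A small runnable check of the same idea, using two rows in the comma-joined string format from the question (written as a list comprehension, which behaves the same in Python 2 and 3):

```python
# Keep only rows whose comma-separated fields intersect the needed set.
needed = set(['ALGIDON', 'AMIDON', 'METHADOSE'])
rows = ['yes,no,yes,amidon,blah', 'yes,yes,yes,vicodin,yes']
kept = [s for s in rows if set(s.upper().split(',')) & needed]
print(kept)  # ['yes,no,yes,amidon,blah']
```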
| python: complicated loop through list | import csv
import collections
def do_work():
(data,counter)=get_file('thefile.csv')
b=samples_subset1(data,counter,'/pythonwork/samples_subset4.csv',500)
medications_subset2(b,['HYDROCODONE','MORPHINE','OXYCODONE'])
def get_file(start_file):
with open(start_file,'rb') as f:
data=list(csv.reader(f))
counter=collections.defaultdict(int)
for row in data:
counter[row[10]]+=1
return (data,counter)
def samples_subset1(data,counter,output_file,sample_cutoff):
with open(output_file,'wb') as outfile:
writer=csv.writer(outfile)
b_counter=0
b=[]
for row in data:
if counter[row[10]]>=sample_cutoff:
b.append(row)
writer.writerow(row)
b_counter+=1
return b
def medications_subset2(b,drug_input):
brand_names={'MORPHINE':['ASTRAMORPH','AVINZA','CONTIN','DURAMORPH','INFUMORPH',
'KADIAN','MS CONTIN','MSER','MSIR','ORAMORPH',
'ORAMORPH SR','ROXANOL','ROXANOL 100'],
'OXYCODONE':['COMBUNOX','DIHYDRONE','DINARCON','ENDOCET','ENDODAN',
'EUBINE','EUCODAL','EUKODAL','EUTAGEN','OXYCODONE WITH ACETAMINOPHEN CAPSULES',
'OXYCODONE WITH ASPIRIN,','OXYCONTIN','OXYDOSE','OXYFAST','OXYIR',
'PANCODINE','PERCOCET','PERCODAN','PROLADONE','ROXICET',
'ROXICODONE','ROXIPRIM','ROXIPRIN','TECODIN','TEKODIN',
'THECODIN','THEKOKIN','TYLOX'],
'OXYMORPHONE':['NUMORPHAN','OPANA','OPANA ER'],
'METHADONE':['ALGIDON','ALGOLYSIN','AMIDON','DEPRIDOL','DOLOPHINE','FENADONE',
'METHADOSE','MIADONE','PHENADONE'],
'BUPRENORPHINE':['BUPRENEX','LEPTAN','SUBOXONE','SUBUTEX','TEMGESIC'],
'HYDROMORPHONE':['DILAUDID','HYDAL','HYDROMORFAN','HYDROMORPHAN','HYDROSTAT',
'HYMORPHAN','LAUDICON','NOVOLAUDON','OPIDOL','PALLADONE',
'PALLADONE IR','PALLADONE SR'],
'CODEINE':['ACETAMINOPHEN WITH CODEINE','ASPIRIN WITH CODEINE','EMPIRIN WITH CODEINE',
                     'FLORINAL WITH CODEINE','TYLENOL 3','TYLENOL 4','TYLENOL 5'],
'HYDROCODONE':['ANEXSIA','BEKADID','CO-GESIC','CODAL-DH','CODICLEAR-DH',
'CODIMAL-DH','CODINOVO','CONATUSSIN-DC','CYNDAL-HD','CYTUSS-HC',
'DETUSSIN','DICODID','DUODIN','DURATUSS-HD','ENDAL-HC','ENTUSS',
'ENTUSS-D','G-TUSS','HISTINEX-D','HISTINEX-HC','HISTUSSIN-D','HISTUSSIN-HC',
'HYCET','HYCODAN','HYCOMINE','HYDROCODONE/APAP','HYDROKON',
'HYDROMET','HYDROVO','KOLIKODOL','LORCET','LORTAB',
'MERCODINONE','NOROCO','NORGAN','NOVAHISTEX','ORTHOXYCOL',
'POLYGESIC','STAGESIC','SYMTAN','SYNKONIN','TUSSIONEX','VICODIN',
'VICOPROFEN','XODOL','ZYDONE']}
...
...
Let's say drug_input = 'METHADONE'.
I need to be able to go through the b array and delete every row that DOES NOT have ANY ONE of these:
['ALGIDON','ALGOLYSIN','AMIDON','DEPRIDOL','DOLOPHINE','FENADONE',
'METHADOSE','MIADONE','PHENADONE']
For example, if b[1] = "yes,no,yes,amidon,blah" then do nothing,
but
if b[2] = "yes,yes,yes,vicodin,yes" then DELETE this record.
| [
"I didn't really read your code paragraph, but from the problem you described afterwards it sounds like you want:\nneeded = set(['ALGIDON','ALGOLYSIN','AMIDON','DEPRIDOL','DOLOPHINE','FENADONE', 'METHADOSE','MIADONE','PHENADONE'])\nb = filter(lambda s: len(set(s.upper().split(',')) & needed) > 0, b)\n\n"
] | [
1
] | [] | [] | [
"csv",
"list",
"python"
] | stackoverflow_0003356209_csv_list_python.txt |
Q:
Python: Regex to extract part of URL found between parentheses
I have this weirdly formatted URL. I have to extract the contents in '()'.
Sample URL : http://sampleurl.com/(K(ThinkCode))/profile/view.aspx
If I can extract ThinkCode out of it, I will be a happy man! I am having a tough time with regexing special chars like '(' and '/'.
A:
>>> foo = re.compile( r"(?<=\(K\()[^\)]*" )
>>> foo.findall( r"http://sampleurl.com/(K(ThinkCode))/profile/view.aspx" )
['ThinkCode']
Explanation
In regex-world, a lookbehind is a way of saying "I want to match ham, but only if it's preceded by spam." We write this as (?<=spam)ham. So in this case, we want to match [^\)]*, but only if it's preceded by \(K\(.
Now \(K\( is a nice, easy regex, because it's plain text! It means, match exactly the string (K(. Notice that we have to escape the brackets (by putting \ in front of them), since otherwise the regex parser would think they were part of the regex instead of a character to match!
Finally, when you put something in square brackets in regex-world, it means "any of the characters in here is OK". If you put something inside square brackets where the first character is ^, it means "any character not in here is OK". So [^\)] means "any character that isn't a right-bracket", and [^\)]* means "as many characters as possible that aren't right-brackets".
Putting it all together, (?<=\(K\()[^\)]* means "match as many characters as you can that aren't right-brackets, preceded by the string (K(.
Oh, one last thing. Because \ means something inside strings in Python as well as inside regexes, we use raw strings -- r"spam" instead of just "spam". That tells Python to ignore the \'s.
Another way
If lookbehind is a bit complicated for you, you can also use capturing groups. The idea behind those is that the regex matches patterns, but can also remember subpatterns. That means that you don't have to worry about lookaround, because you can match the entire pattern and then just extract the subpattern inside it!
To capture a group, simply put it inside brackets: (foo) will capture foo as the first group. Then, use .groups() to spit out all the groups that you matched! This is the way the other answer works.
A:
It's not too hard, especially since / isn't actually a special character in Python regular expressions. You just backslash the literal parens you want. How about this:
s = "http://sampleurl.com/(K(ThinkCode))/profile/view.aspx"
mo = re.match(r"http://sampleurl\.com/\(K\(([^)]+)\)\)/profile/view\.aspx", s)
print mo.group(1)
Note the use of r"" raw strings to preserve the backslashes in the regular expression pattern string.
A:
If you want to have special characters in a regex, you need to escape them, such as \(, \/, \\.
Matching things inside of nested parenthesis is quite a bit of a pain in regex. if that format is always the same, you could use this:
\(.*?\((.*?)\).*?\)
Basically: find a open paren, match characters until you find another open paren, group characters until I see a close paren, then make sure there are two more close paren somewhere in there.
A:
mystr = "http://sampleurl.com/(K(ThinkCode))/profile/view.aspx"
import re
re.sub(r'^.*\((\w+)\).*',r'\1',mystr)
| Python: Regex to extract part of URL found between parentheses | I have this weirdly formatted URL. I have to extract the contents in '()'.
Sample URL : http://sampleurl.com/(K(ThinkCode))/profile/view.aspx
If I can extract ThinkCode out of it, I will be a happy man! I am having a tough time with regexing special chars like '(' and '/'.
| [
">>> foo = re.compile( r\"(?<=\\(K\\()[^\\)]*\" )\n>>> foo.findall( r\"http://sampleurl.com/(K(ThinkCode))/profile/view.aspx\" )\n['ThinkCode']\n\nExplanation\nIn regex-world, a lookbehind is a way of saying \"I want to match ham, but only if it's preceded by spam. We write this as (?<=spam)ham. So in this case, we... | [
4,
1,
0,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0003356300_python_regex.txt |
Q:
Solving jumbled word puzzles with python?
I have an interesting programming puzzle for you:
You will be given two things:
A word containing a list of English words put together, e.g:
word = "iamtiredareyou"
Possible subsets:
subsets = [
'i', 'a', 'am', 'amt', 'm', 't', 'ti', 'tire', 'tired', 'i',
'ire', 'r', 're', 'red', 'redare', 'e', 'd', 'da', 'dar', 'dare',
'a', 'ar', 'are', 'r', 're', 'e', 'ey', 'y', 'yo', 'you', 'o', 'u'
]
Challenges:
Level-1: I need to programmatically find the members of subsets which, concatenated in order, make "iamtiredareyou", i.e. ['i', 'am', 'tired', 'are', 'you']
Level-2: The original string may consist of some extra characters in sequence which are not present in the subset. e.g. "iamtired12aareyou". The subset given is same as above, the solution should automatically include this subset in the right place in the result array. i.e. ['i', 'am', 'tired', '12a', 'are', 'you']
How can I do this?
A:
Generally, a recursive algorithm would do.
Start with checking all subsets against start of a given word, if found — add (append) to found values and recurse with remaining part of the word and current found values.
Or if it's an end of the string — print found values.
something like that:
all = []
def frec(word, values=[]):
    global all
    if word == "":  # got result.
        all += [values]
    for s in subsets:
        if word.startswith(s):
            frec(word[len(s):], values + [s])

frec(word)
note that there are lots of possible solutions since subsets include many one-character strings. You might want to find some shortest of results. (13146 solutions... use “all.sort(cmp=lambda x, y: cmp(len(x), len(y)))” to get shortest)
For level 2, you need another loop: if no subset matches, add more and more symbols to the next value (and recurse on that) until a match is found.
all = []
def frec(word, values=[]):
    global all
    if word == "":  # got result.
        all += [values]
        return True
    match = False
    for s in subsets:
        if word.startswith(s):
            match = True
            frec(word[len(s):], values + [s])
    if not match:
        return frec(word[1:], values + [word[0]])

frec(word)
This does not try to combine non-subset values into one string, though.
A:
i think you should do your own programming exercises....
A:
For the Level 1 challenge you could do it recursively. Probably not the most efficient solution, but the easiest:
word = "iamtiredareyou"
subsets = ['i', 'a', 'am', 'amt', 'm', 't', 'ti', 'tire', 'tired', 'i', 'ire', 'r', 're', 'red', 'redare', 'e', 'd', 'da', 'dar', 'dare', 'a', 'ar', 'are', 'r', 're', 'e', 'ey', 'y', 'yo', 'you', 'o', 'u']
def findsubset():
    global word
    for subset in subsets:
        if word.startswith(subset):
            setlist.append(subset)
            word = word[len(subset):]
            if word == "":
                print setlist
            else:
                findsubset()
            word = subset + word
            setlist.pop()

# Remove duplicate entries by making a set
subsets = set(subsets)
setlist = []
findsubset()
Your list of subsets has duplicates in it - e.g. 'a' appears twice - so my code makes it a set to remove the duplicates before searching for results.
A:
Sorry about the lack of programming snippet, but I'd like to suggest dynamic programming. Attack level 1 and level 2 at the same time by giving each word a cost, and adding all the single characters not present as single character high cost words. The problem is then to find the way of splitting the sequence up into words that gives the least total cost.
Work from left to right along the sequence, at each point working out and saving the least cost solution up to and including the current point, and the length of the word that ends that solution. To work out the answer for the next point in the sequence, consider all of the known words that are suffixes of the sequence. For each such word, work out the best total cost by adding the cost of that word to the (already worked out) cost of the best solution ending just before that word starts. Note the smallest total cost and the length of the word that produces it.
Once you have the best cost for the entire sequence, use the length of the last word in that sequence to work out what the last word is, and then step back that number of characters to inspect the answer worked out at that point and get the word just preceding the last word, and so on.
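A minimal sketch of that dynamic-programming scheme, assuming illustrative costs of 1 per known word and 10 per fallback single character (the function name, the costs, and the sample word set below are mine, not from the answer):

```python
def split_min_cost(word, words, char_cost=10, word_cost=1):
    n = len(word)
    # best[i] = (cost, length of last token) for the prefix word[:i]
    best = [None] * (n + 1)
    best[0] = (0, 0)
    for i in range(1, n + 1):
        # Fallback: treat the single character word[i - 1] as its own token.
        cost, length = best[i - 1][0] + char_cost, 1
        # Try every known word that is a suffix of word[:i].
        for w in words:
            j = i - len(w)
            if j >= 0 and word[j:i] == w and best[j][0] + word_cost < cost:
                cost, length = best[j][0] + word_cost, len(w)
        best[i] = (cost, length)
    # Step back through the stored last-token lengths to recover the split.
    tokens, i = [], n
    while i > 0:
        tokens.append(word[i - best[i][1]:i])
        i -= best[i][1]
    return tokens[::-1]

words = {'i', 'a', 'am', 'tired', 'are', 'you', 'dare', 're'}
print(split_min_cost("iamtiredareyou", words))
# → ['i', 'am', 'tired', 'are', 'you']
```

Consecutive unknown characters come out one token each here, so "iamtired12aareyou" yields '1' and '2' separately; merging runs of fallback characters (to get '12a' as a single piece) would be a small post-processing pass.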
A:
Isn't it just the same as finding the permutations, but with some conditions? Like you start the permutation algorithm (a recursive one) you check if the string you already have matches the first X characters of your to find word, if yes you continue the recursion until you find the whole word, otherwise you go back.
Level 2 is a bit silly if you ask me, because then you could actually write anything as the "word to be found", but basically it would be just like level1 with the exception that if you can't find a substring in your list you simply add it (letter by letter i.e. you have "love" and a list of ['l','e'] you match 'l' but you lack 'o' so you add it and check if any of your words in the list start with a 'v' and match your word to be found, they don't so you add 'v' to 'o' etc.).
And if you're bored you can implement a genetical algorithm, it's really fun but not really efficient.
A:
Here is a recursive, inefficient Java solution:
private static void findSolutions(Set<String> fragments, String target,
        HashSet<String> solution, Collection<Set<String>> solutions) {
    if (target.isEmpty()) {
        solutions.add(solution);
        return;
    }
    for (String frag : fragments) {
        if (target.startsWith(frag)) {
            HashSet<String> solution2 = new HashSet<String>(solution);
            solution2.add(frag);
            findSolutions(fragments, target.substring(frag.length()), solution2, solutions);
        }
    }
}

public static Collection<Set<String>> findSolutions(Set<String> fragments, String target) {
    HashSet<String> solution = new HashSet<String>();
    Collection<Set<String>> solutions = new ArrayList<Set<String>>();
    findSolutions(fragments, target, solution, solutions);
    return solutions;
}
Q:
Is there a Python ORM framework for interacting with data via XML-RPC?
I am working on a webapp that interacts with data via XML-RPC rather than with a direct connection to a database. I can execute SQL queries via XML-RPC methods.
I would like to interact with the data in an ORM framework fashion that has lazy/eager fetching, etc., although I can't seem to figure out how that would be possible with Python or even Django's libraries.
A:
You would have to write your own database backend. Take a look at existing backends for how to do this.
A:
Check out XML Models. It's REST rather than XML-RPC, but much of it is probably reusable.
Q:
How to modify django cms multilingual middleware
Hey guys, I'm trying to internationalize my site, so I have the django CMS multilingual middleware class in my settings.py. When viewed from Brazil, the URL changes to
www.ashtangayogavideo.com/pt/ash/homepage/, resulting in a 404, because my site is at www.ashtangayogavideo.com/ash/en/homepage. How can I configure the middleware, or settings.py, so that the language code is added after the /ash/?
A:
Sounds like you need to modify your urls.py, not your settings or middleware.
Q:
Finding the architectures that Python was built for, but from within Python itself
Essentially I am looking for a way to find the following, but from within Python without having to run system commands:
$ file `which python2.7`
/Library/.../2.7/bin/python2.7: Mach-O universal binary with 2 architectures
/Library/.../2.7/bin/python2.7 (for architecture i386): Mach-O executable i386
/Library/.../2.7/bin/python2.7 (for architecture x86_64): Mach-O 64-bit executable x86_64
Something like:
>>> get_mac_python_archs()
['i386', 'x86_64']
>>>
Possible?
A:
As far as I know, there is no truly reliable way other than to examine the executable files themselves to see which architectures have been lipo-ed together; in other words, to do what the file command does. While the distutils.util.get_platform() noted elsewhere probably comes the closest, it is based on configuration information at Python build time, and the criteria used have changed between releases and even among distributions of the same release.
For example, if you built a Python 2.6 on OS X 10.6 with the 4-way universal option (ppc, ppc64, i386, x86_64), get_platform() should report macosx-10.6-universal. However, the Apple-supplied Python 2.6 in OS X 10.6 reports the same string even though it is only a 3-way build (no ppc64). EDIT: That may not be the best example since, come to think of it, you probably couldn't build a ppc64 variant with the 10.6 SDK. However, the point still holds that the platform string is too context dependent to be totally reliable. It may be reliable enough for some needs, though. Otherwise, calling out to file or otool etc is likely the best way to go.
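Since calling out to file is the suggested route, here is a hedged sketch of it; the helper names (parse_file_output, get_mac_python_archs) are invented here, and the parsing assumes file's usual wording for Mach-O universal binaries:

```python
import re
import subprocess
import sys

def parse_file_output(text):
    # `file` prints one "(for architecture ...)" clause per slice of a
    # universal binary; a thin binary produces no such clause.
    return re.findall(r'\(for architecture (\w+)\)', text)

def get_mac_python_archs():
    out = subprocess.check_output(['file', sys.executable]).decode()
    return parse_file_output(out)
```

For a thin (single-architecture) build this returns an empty list, so a real implementation would fall back to something like platform.machine().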
A:
The function platform.machine() returns just the architecture the interpreter is currently running as:
Python 2.6.5 (r265:79359, Mar 24 2010, 01:32:55)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import platform
>>> platform.machine()
'i386'
the nearest I can get to is by using distutils.util.get_platform:
>>> import distutils.util
>>> distutils.util.get_platform()
'macosx-10.3-fat'
which is the full answer if you are using Python 2.7/3.2, as you can see in the documentation. "Starting from Python 2.7 and Python 3.2 the architecture fat3 is used for a 3-way universal build (ppc, i386, x86_64) and intel is used for a universal build with the i386 and x86_64 architectures"
Q:
How do I access data programmatically (load a pickled file) that is stored as a static file?
How is this solution I am using now?
I have a 1MB .dbf file in the same directory of all my .py modules. In main.py I have
import tools
In tools.py the code is:
the_list_that_never_changes = loadDbf(file).variables['CNTYIDFP'].
So the_list_that_never_changes is only loaded once and is always in memory ready to be used...correct?
A:
Static files are stored apart from application files. If you need to load data.pkl from main.py, then don't mark it as a static file and it will be accessible by main.py like any other application file.
Reference: Application Configuration's Handlers For Static Files.
Alternative: Why not define the information stored in data.pkl as a global variable in your Python source? Then you don't have to go through the trouble of reading a file and deserializing its pickled contents, and it will be a bit faster too. This will also make it easy to take advantage of app caching - your data will be loaded once, and then cached for use by subsequent requests.
A:
Put data.pkl in the same directory with main.py and use something along these lines:
import os
import pickle

pickle_path = os.path.join(os.path.dirname(__file__), 'data.pkl')
f = open(pickle_path)
data = pickle.load(f)
Do not add data.pkl to app.yaml.
If you read this data often, it might be beneficial to memcache it after unpickling. Then you can read it from memcache, which is usually faster than reading file from disk.
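For completeness, a minimal round-trip sketch of writing and reading the pickle (the helper names are illustrative; a GAE app would additionally memcache the unpickled value as suggested above):

```python
import pickle

def save_data(path, data):
    # Run once, offline, to produce the pickle that ships with the app.
    with open(path, 'wb') as f:
        pickle.dump(data, f)

def load_data(path):
    with open(path, 'rb') as f:
        return pickle.load(f)
```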
Q:
Suggestions for passing large table between Python and C#
I have a C# application that needs to be run several thousand times. Currently it precomputes a large table of constant values at the start of the run for reference. As these values will be the same from run to run I would like to compute them independently in a simple python script and then just have the C# app import the file at the start of each run.
The table consists of a sorted 2D array (500-3000+ rows/columns) of simple (int x, double y) tuples. I am looking for recommendations concerning the best/simplest way to store and then import this data. For example, I could store the data in a text file like this "(x1,y1)|(x2,y2)|(x3,y3)|...|(xn,yn)" This seems like a very ugly solution to a problem that seems to lend itself to a specific data structure or library I am currently unaware of. Any suggestions would be welcome.
A:
I would go for a simplified csv file.
Given that all your values are numbers, you can read them in C# using
File.ReadAllText(filename).Split(',')
You can find more C# options for csv here
On Python you can use the csv module to read and write them. Better explanation here, but the short of it is
import csv

with open(filename, 'wb') as f:  # csv.writer needs a file object, not a filename
    writer = csv.writer(f)
    writer.writerows(data)
Using CSV also gives you flexibility for future improvements, as well as exporting and importing from other programs like Excel for further processing.
A:
You may consider running IronPython - then you can pass values back and forth across C#/Python
A:
Have a look at NetCDF and/or HDF5 file formats. HDF5 in particular seems to have a .NET implementation, and PyTables is handy on the Python side of things.
A:
Why not just have your C# program check for the existence of a file called something like "constants.bin". If the file does not exist, then have it generate the array and serialize it out to "constants.bin". If the file does exist then just use serialization to read it back in.
int[,] constants;

if (!File.Exists("constants.bin"))
{
    GenerateConstants();
    Stream stream = new FileStream("constants.bin", FileMode.Create, FileAccess.Write, FileShare.None);
    new BinaryFormatter().Serialize(stream, constants);
    stream.Close();
}
else
{
    Stream stream = new FileStream("constants.bin", FileMode.Open, FileAccess.Read, FileShare.Read);
    constants = (int[,])(new BinaryFormatter().Deserialize(stream));
    stream.Close();
}
The first time you run the C# app the "constants.bin" won't exist so it will generate the 2D array and then serialize it out to the file. Each subsequent run of the program will find the "constants.bin" file and deserialize it into the local 2D array.
A:
CSV is a fine suggestion, but it may be clumsy with values being int and double. Generally tab or semicolon are the best separators.
A:
Python standard library includes the sqlite3 module - a lightweight disk-based database.
For C# there are a few libraries providing sqlite support.
For example, System.Data.SQLite - a complete ADO.NET 2.0/3.5 provider.
For your application, use datatypes REAL (stored as an 8-byte IEEE floating point number)
and INTEGER (stored in 1, 2, 3, 4, 6, or 8 bytes depending on the magnitude of the value).
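A sketch of the Python writing side of that route (the table and column names here are assumptions for illustration):

```python
import sqlite3

def write_constants(con, rows):
    # x stored as INTEGER, y as REAL (an 8-byte IEEE float), per the types above.
    con.execute("CREATE TABLE IF NOT EXISTS constants (x INTEGER, y REAL)")
    con.executemany("INSERT INTO constants (x, y) VALUES (?, ?)", rows)
    con.commit()

con = sqlite3.connect(":memory:")  # a real run would use a file, e.g. "constants.db"
write_constants(con, [(1, 0.5), (2, 1.25)])
print(con.execute("SELECT x, y FROM constants ORDER BY x").fetchall())
# → [(1, 0.5), (2, 1.25)]
```

The C# side would then open the same database file with System.Data.SQLite and read the table back with an ordinary SELECT.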
Q:
is it bad convention to call sys.argv from somewhere besides main function in python
I am writing a script and I have a function, call it f(), that needs one of the command line arguments (a filename it needs to open). However, f() is not called directly in the main function.
I was wondering if it was bad coding convention to call sys.argv[1] straight from f()? If I don't I would have to pass it as an argument to all of the functions that eventually call f().
A:
It would be a bad practice to always assume that the arguments that your function needs are available on the command-line - what if this code was invoked in some other manner?
A function should declare input parameters for the data it needs to access.
At the very least, passing the necessary argument into f() rather than accessing sys.argv helps make f() much more re-usable.
A:
It is bad form to access sys.argv from anywhere other than your main routine since it couples that code to the command line, making it difficult to use from other Python scripts.
A:
You should pass the file name into f(). This way, if you want to use that function later (as a module function, perhaps), you aren't bound to use sys.argv.
You could always make a wrapper to f() (if you are lazy or want to keep your 'main' code clean), that does something like:
def wrapperf():
    return f(sys.argv[1])
You could define wrapperf inside of main() if you want to keep it away from other scripts importing your module.
A:
Yes, it would be bad practice to access sys.argv from f.
I would suggest passing the neccessary arguments into f. But since f won't neccessarily be called from main...maybe use a global variable? I normally try to avoid them, but in your case... >.>
I guess it would help to know a little more info about the way the program is setup. :P
A:
I would recommend setting the file name once, from sys.argv[1] if it exists, or demand user input if it doesn't. Then store it as part of an object, or a global variable. That will prevent having to pass what is essentially a static string all over the place, once it is set.
A:
You definitely don't want to parse sys.argv in some low-level function. To avoid lots of parameter-passing, you could provide a helper-function called by main() that simply stores the filename into a static variable for later use by f().
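That helper pattern might look like this; every name here (set_filename, _filename, and the toy f()) is illustrative, not from the answer:

```python
import sys

_filename = None  # set once from main(), read later by f()

def set_filename(name):
    global _filename
    _filename = name

def f():
    if _filename is None:
        raise RuntimeError("set_filename() was never called")
    return _filename  # a real f() would open(_filename) here

def main(argv=None):
    argv = sys.argv if argv is None else argv
    set_filename(argv[1])  # parse the command line in exactly one place
    return f()
```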
A:
I would generally advise against it, but BaseHTTPServer in Python actually uses sys.argv[1] to determine which port to listen on. Convenience, maybe.