Q:
How can I design a dict-like sqlite class in Python that can use different fields as the "key"?
I have a data structure like this:
"ID NAME BIRTH AGE SEX"
=================================
1 Joe 01011980 30 M
2 Rose 12111986 24 F
3 Tom 31121965 35 M
4 Joe 15091990 20 M
I want to use Python + sqlite to store and query data in an easy way. I am trying to design a dict-like object to store and retrieve that information, so that the database can also be shared with other applications easily (just a plain database table for other applications, so pickle- and ySerial-like objects won't fit).
For example:
d = mysqlitedict.open('student_table')
d['1'] = ["Joe","01011980","30","M"]
d['2'] = ["Rose","12111986","24","F"]
This is reasonable because I can use __setitem__() to handle it, with "ID" as the key and the rest as the value of that dict-like object.
The problem comes if I want to use another field as the key semantically. Take "NAME" for example:
d['Joe'] = ["1","01011980","30","M"]
That will be a problem, because a dict-like object should have key/value pair semantics; since "ID" is already the key, "NAME" cannot override it as the key here.
So my question is: can I design my class so that I can do this?
d[key="NAME", "Joe"] = ["1","01011980","30","M"]
d[key="ID",'1'] = ["Joe","01011980","30","M"]
d.update(key = "ID", {'1':["Joe","01011980","30","M"]})
>>>d[key="NAME", 'Joe']
["1","Joe","01011980","30","M"]
["1","Joe","15091990","20","M"]
>>>d.has_key(key="NAME", 'Joe')
True
Any reply will be appreciated!
KC
A:
sqlite is a SQL database and works by far best when used as such (wrapped in SQLAlchemy or whatever if you really insist;-).
Syntax such as d[key="NAME", 'Joe'] is simply illegal Python, no matter how much wrapping and huffing and puffing you may do. A simple class wrapper around the DB connection is easy, but it will never give you that syntax -- something like d.fetch('Joe', key='Name') is reasonably easy to achieve, but indexing has very different syntax from function calls, and even in the latter named arguments must come after positional ones.
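For concreteness, here is a minimal sketch of such a fetch-style wrapper; the table name, column whitelist and sample rows below are illustrative assumptions based on the OP's example, not a definitive implementation:

```python
import sqlite3

COLUMNS = ('ID', 'NAME', 'BIRTH', 'AGE', 'SEX')

class Students(object):
    def __init__(self, conn):
        self.conn = conn

    def fetch(self, value, key='ID'):
        # A column name cannot be passed as a ? parameter, so whitelist
        # it explicitly to guard against SQL injection.
        if key not in COLUMNS:
            raise ValueError('unknown column: %r' % (key,))
        cur = self.conn.execute(
            'SELECT * FROM student WHERE %s = ?' % key, (value,))
        return cur.fetchall()

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE student (ID, NAME, BIRTH, AGE, SEX)')
conn.executemany('INSERT INTO student VALUES (?,?,?,?,?)', [
    ('1', 'Joe', '01011980', '30', 'M'),
    ('2', 'Rose', '12111986', '24', 'F'),
    ('4', 'Joe', '15091990', '20', 'M'),
])
d = Students(conn)
joes = d.fetch('Joe', key='NAME')   # both rows whose NAME is Joe
```

Note how a non-unique key like NAME simply yields a list of matching rows, sidestepping the dict semantics problem entirely.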
If you're willing to renounce your ambitious syntax dreams in favor of sensible Python syntax, and need help designing a class to implement the latter, feel free to ask, of course (I'm off to bed pretty soon, but I'm sure other, later-sleepers will be eager to help;-).
Edit: given the OP's clarifications (in a comment), it looks like a set_key method is acceptable to maintain Python-acceptable syntax (though the semantics of course will still be a tad off, since the OP wants a "dict-like" object which may have non unique keys -- no such thing in Python, really... but, we can approximate it a bit, at least).
So, here's a very first sketch (requires Python 2.6 or better -- just because I've used collections.MutableMapping to get other dict-like methods and .format to format strings; if you're stuck in 2.5, %-formatting of strings and UserDict.DictMixin will work instead):
import collections
import sqlite3

class SqliteDict(collections.MutableMapping):

    @classmethod
    def create(cls, path, columns):
        conn = sqlite3.connect(path)
        conn.execute('DROP TABLE IF EXISTS SqliteDict')
        conn.execute('CREATE TABLE SqliteDict ({0})'.format(','.join(columns.split())))
        conn.commit()
        return cls(conn)

    @classmethod
    def open(cls, path):
        conn = sqlite3.connect(path)
        return cls(conn)

    def __init__(self, conn):
        # looks like for some weird reason you want str, not unicode, when feasible, so...:
        conn.text_factory = sqlite3.OptimizedUnicode
        c = conn.cursor()
        c.execute('SELECT * FROM SqliteDict LIMIT 0')
        self.cols = [x[0] for x in c.description]
        self.conn = conn
        # start with a keyname (==column name) of `ID`
        self.set_key('ID')

    def set_key(self, key):
        self.i = self.cols.index(key)
        self.kn = key

    def __len__(self):
        c = self.conn.cursor()
        c.execute('SELECT COUNT(*) FROM SqliteDict')
        return c.fetchone()[0]

    def __iter__(self):
        # yield each row's key-column value, as a dict's iterator yields keys
        # (with non-unique keys, the same key may be yielded more than once)
        c = self.conn.cursor()
        c.execute('SELECT * FROM SqliteDict')
        for row in c:
            yield row[self.i]

    def __getitem__(self, k):
        c = self.conn.cursor()
        # print 'doing:', 'SELECT * FROM SqliteDict WHERE {0}=?'.format(self.kn)
        # print ' with:', repr(k)
        c.execute('SELECT * FROM SqliteDict WHERE {0}=?'.format(self.kn), (k,))
        result = [list(r) for r in c.fetchall()]
        # print ' resu:', repr(result)
        for r in result: del r[self.i]
        return result

    def __contains__(self, k):
        c = self.conn.cursor()
        c.execute('SELECT * FROM SqliteDict WHERE {0}=?'.format(self.kn), (k,))
        return c.fetchone() is not None

    def __delitem__(self, k):
        c = self.conn.cursor()
        c.execute('DELETE FROM SqliteDict WHERE {0}=?'.format(self.kn), (k,))
        self.conn.commit()

    def __setitem__(self, k, v):
        r = list(v)
        r.insert(self.i, k)
        if len(r) != len(self.cols):
            raise ValueError, 'len({0}) is {1}, must be {2} instead'.format(r, len(r), len(self.cols))
        c = self.conn.cursor()
        # print 'doing:', 'REPLACE INTO SqliteDict VALUES({0})'.format(','.join(['?']*len(r)))
        # print ' with:', r
        c.execute('REPLACE INTO SqliteDict VALUES({0})'.format(','.join(['?']*len(r))), r)
        self.conn.commit()

    def close(self):
        self.conn.close()

def main():
    d = SqliteDict.create('student_table', 'ID NAME BIRTH AGE SEX')
    d['1'] = ["Joe", "01011980", "30", "M"]
    d['2'] = ["Rose", "12111986", "24", "F"]
    print len(d), 'items in table created.'
    print d['2']
    print d['1']
    d.close()
    d = SqliteDict.open('student_table')
    d.set_key('NAME')
    print len(d), 'items in table opened.'
    print d['Joe']

if __name__ == '__main__':
    main()
The class is not meant to be instantiated directly (though it's OK to do so by passing an open sqlite3 connection to a DB with an appropriate SqliteDict table) but through the two class methods create (to make a new DB or wipe out an existing one) and open, which seems to match the OP's desires better than the alternative (have __init__ take a DB file path and an option string describing how to open it, just like modules such as gdbm take -- 'r' to open read-only, 'c' to create or wipe out, 'w' to open read-write -- easy to adjust of course). Among the columns passed (as a whitespace-separated string) to create, there must be one named ID (I haven't given much care to raising "the right" errors for any of the many, many user errors that can occur on building and using instances of this class; errors will occur on all incorrect usage, but not necessarily ones obvious to the user).
Once an instance is opened (or created), it behaves as closely to a dict as possible, except that all values set must be lists of exactly the right length, while the values returned are lists of lists (due to the weird "non-unique key" issue). For example, the above code, when run, prints
2 items in table created.
[['Rose', '12111986', '24', 'F']]
[['Joe', '01011980', '30', 'M']]
2 items in table opened.
[['1', '01011980', '30', 'M']]
The "Pythonically absurd" behavior is that d[x] = d[x] will fail -- because the right hand side is a list e.g. with a single item (which is a list of the column values) while the item assignment absolutely requires a list with e.g. four items (the column values). This absurdity is in the OP's requested semantics, and could be altered only by drastically changing such absurd required semantics again (e.g., forcing item assignment to have a list of lists on the RHS, and using executemany in lieu of plain execute).
Non-uniqueness of keys also makes it impossible to guess if d[x] = v, for a key k which corresponds to some number n of table entries, is meant to replace one (and if so, which one?!) or all of those entries, or add another new entry instead. In the code above I've taken the "add another entry" interpretation, but with a SQL statement REPLACE that, should the CREATE TABLE be changed to specify some uniqueness constraints, will change some semantics from "add entry" to "replace entries" if and when uniqueness constraints would otherwise be violated.
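To see that last point concretely, here is a small standalone demonstration (separate from the class above): once a column carries a PRIMARY KEY constraint, REPLACE turns a second write with the same key into an update rather than an extra row:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (id TEXT PRIMARY KEY, name TEXT)')
conn.execute('REPLACE INTO t VALUES (?, ?)', ('1', 'Joe'))
conn.execute('REPLACE INTO t VALUES (?, ?)', ('1', 'Rose'))  # same key: row is replaced
conn.execute('REPLACE INTO t VALUES (?, ?)', ('2', 'Tom'))   # new key: row is inserted
rows = sorted(conn.execute('SELECT id, name FROM t').fetchall())
```

Without the PRIMARY KEY constraint, all three REPLACE statements would simply insert, giving three rows.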
I'll let you all play with this code, and reflect on how huge the semantic gap is between Python mappings and relational tables, which the OP is desperately keen to bridge (apparently as a side effect of his urge to "use nicer syntax" than SQL affords -- I wonder if he has looked at SqlAlchemy as I recommended).
I think, in the end, the important lesson is what I stated right at the start, in the first paragraph of the part of the answer I wrote yesterday, and I self-quote...:
sqlite is a SQL database and works
by far best when used as such (wrapped
in SQLAlchemy or whatever if you
really insist;-).
Q:
question about python names using a default parameter value
I was reading this today: http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#default-parameter-values and I can't seem to understand what's happening under the hood.
def bad_append(new_item, a_list=[]):
a_list.append(new_item)
return a_list
The problem here is that the default
value of a_list, an empty list, is
evaluated at function definition time.
So every time you call the function,
you get the same default value. Try it
several times:
I guess first of all, when is the function definition stage? Is it an initialization stage just before the actual main function executes?
My original thinking was that the name a_list gets discarded right after the function runs so whatever [] mutated to will be garbage collected. Now, I think that a_list is not discarded at all since it's only a name bound to the object [] so it never gets garbage collected because a_list is still bound to it. But then again, I'm wondering how I still get the same default value instead of a new []. Can someone straighten this out for me?
Thanks!
A:
when is the function definition stage?
Look at "Function definitions" in the Python reference:
Default parameter values are evaluated when the function definition is executed. This means that the expression is evaluated once, when the function is defined, and that that same “pre-computed” value is used for each call. This is especially important to understand when a default parameter is a mutable object, such as a list or a dictionary: if the function modifies the object (e.g. by appending an item to a list), the default value is in effect modified. This is generally not what was intended. A way around this is to use None as the default, and explicitly test for it in the body of the function, e.g.:
def whats_on_the_telly(penguin=None):
    if penguin is None:
        penguin = []
    penguin.append("property of the zoo")
    return penguin
The parameters are evaluated when the function definition is executed. If this is in a module, it happens when the module is imported. If it's in a class, it's when the class definition runs. If it's in a function, it happens when the function executes. Remember that a Python module is evaluated from top to bottom, and doesn't automatically have an explicit "main function" like some languages.
For example, if you put the function definition inside a function, you get a new copy of the function each time:
>>> def make_function():
...     def f(value=[]):
...         value.append('hello')
...         return value
...     return f
...
>>> f1 = make_function()
>>> f2 = make_function()
>>> f1()
['hello']
>>> f1()
['hello', 'hello']
>>> f2()
['hello']
The function definition creates a new function object, assigns it various properties including the code, formal parameters, and default values, and stores it in a name in the scope. Typically this only happens once for any given function, but there are cases where a function's definition can be executed again.
My original thinking was that the name a_list gets discarded right after the function runs so whatever [] mutated to will be garbage collected.
Inside the body of the function, the name a_list is available to the code. So the name, and the value it is pointing to, must both still be available. It is stored in the func_defaults attribute of the function.
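(The attribute is spelled func_defaults in Python 2; Python 2.6+ and Python 3 also expose it as __defaults__, which the sketch below uses.) Inspecting it shows the one shared list directly:

```python
def bad_append(new_item, a_list=[]):
    a_list.append(new_item)
    return a_list

bad_append('one')
bad_append('two')
# The single default list lives on the function object itself and
# accumulates every append made through the default.
shared = bad_append.__defaults__[0]
```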
But then again, I'm wondering how I still get the same default value instead of a new [].
Because the [] is evaluated only when the function is defined, not when it is called, and the name a_list points to the same object even across repeated calls. Again, if you want the alternative behavior, use something immutable like None and check for it.
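And to confirm that the None-sentinel idiom from the quoted reference really does produce a fresh list on each call:

```python
def good_append(new_item, a_list=None):
    if a_list is None:   # no list supplied: build a fresh one per call
        a_list = []
    a_list.append(new_item)
    return a_list
```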
Q:
ISDN dial up connection with python
I have a requirement to create a Python application that accepts dial up connections over ISDN from client software and relays messages from this connection to a website application running on a LAMP webserver.
Do we have some modules or support for this kind of implementation in python?
Please suggest.
Thanks in advance.
A:
You should have system hardware and software that handles establishing ISDN links, that's not something you should be trying to reimplement yourself.
You need to consult the documentation for that hardware and software, and the documentation for the client software, to determine how that connection can be made available to your application, and what communications protocol the client will be using over the ISDN link.
(If you're really lucky, the client actually uses PPP to establish a TCP/IP connection.)
Q:
Getting HTTP GET variables using Tipfy
I'm currently playing around with tipfy on Google's Appengine and just recently ran into a problem: I can't for the life of me find any documentation on how to use GET variables in my application, I've tried sifting through both tipfy and Werkzeug's documentations with no success. I know that I can use request.form.get('variable') to get POST variables and **kwargs in my handlers for URL variables, but that's as much as the documentation will tell me. Any ideas?
A:
request.args.get('variable') should work for what I think you mean by "GET data".
A:
Source: http://www.tipfy.org/wiki/guide/request/
The Request object contains all the information transmitted by the client of the application. You will retrieve from it GET and POST values, uploaded files, cookies and header information and more. All these things are so common that you will be very used to it.
To access the Request object, simply import the request variable from tipfy:
from tipfy import request

# GET
request.args.get('foo')

# POST
request.form.get('bar')

# FILES
image = request.files.get('image_upload')
if image:
    # User uploaded a file. Process it.
    # This is the filename as uploaded by the user.
    filename = image.filename
    # This is the file data to process and/or save.
    filedata = image.read()
else:
    # User didn't select any file. Show an error if it is required.
    pass
A:
this works for me (tipfy 0.6):
from tipfy import RequestHandler, Response
from tipfy.ext.session import SessionMiddleware, SessionMixin
from tipfy.ext.jinja2 import render_response
from tipfy import Tipfy

class I18nHandler(RequestHandler, SessionMixin):
    middleware = [SessionMiddleware]

    def get(self):
        language = Tipfy.request.args.get('lang')
        return render_response('hello_world.html', message=language)
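For what it's worth, request.args is just Werkzeug's parsed, dict-like view of the URL's query string; the underlying parsing can be mimicked with the standard library alone (Python 3 module shown below; in Python 2 parse_qs lives in urlparse):

```python
from urllib.parse import urlsplit, parse_qs

url = 'http://example.com/search?lang=en&lang=de&page=2'
args = parse_qs(urlsplit(url).query)
# Every value is a list, because a GET variable may repeat.
```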
Q:
simplest way to return a new list by removing an index/value from another list? (order required)
All,
o1 = ["a","b","c","d","e","f","g","h"]
index = [3,4]
value = ["c","d"]
[x for x in o1 if x not in value]
[x for x in o1 if x not in [o1[y] for y in index]]
Any simpler solution for the above list comprehensions?
Thanks
A:
(x for x in o1 if x not in value)
(x for i, x in enumerate( o1 ) if i not in index )
Note that using generator expressions will save you a pass through the list, and using sets instead of lists for index and value will be more efficient.
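A runnable comparison of the two forms, with sets for the membership tests (note that the OP's index and value examples select different elements, so the results differ):

```python
o1 = ["a", "b", "c", "d", "e", "f", "g", "h"]
index = {3, 4}          # sets make the `in` checks O(1)
value = {"c", "d"}

by_value = [x for x in o1 if x not in value]
by_index = [x for i, x in enumerate(o1) if i not in index]
```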
Q:
How to efficiently merge multiple lists of different lengths into a tree dictionary in Python
given
[
    ('object-top-1','object-lvl1-1','object-lvl2-1'),
    ('object-top-2','object-lvl1-1','object-lvl2-2','object-lvl3-1'),
    ('object-top-1','object-lvl1-1','object-lvl2-3'),
    ('object-top-2','object-lvl1-2','object-lvl2-4','object-lvl3-2','object-lvl4-1'),
]
and so on .. where all the tuples are of arbitrary length
Any way to efficiently convert them to
{'object-top-1': {
    'object-lvl1-1': {
        'object-lvl2-1': {},
        'object-lvl2-3': {}
    }
 },
 'object-top-2': {
    'object-lvl1-1': {
        'object-lvl2-2': {
            'object-lvl3-1': {}
        }
    },
    'object-lvl1-2': {
        'object-lvl2-4': {
            'object-lvl3-2': {
                'object-lvl4-1': {}
            }
        }
    }
 }
}
I've been stuck trying to figure this out for quite some time now >.<
Thanks!
A:
def treeify(seq):
    ret = {}
    for path in seq:
        cur = ret
        for node in path:
            cur = cur.setdefault(node, {})
    return ret
Example:
>>> pprint.pprint(treeify(L))
{'object-top-1': {'object-lvl1-1': {'object-lvl2-1': {}, 'object-lvl2-3': {}}},
 'object-top-2': {'object-lvl1-1': {'object-lvl2-2': {'object-lvl3-1': {}}},
                  'object-lvl1-2': {'object-lvl2-4': {'object-lvl3-2': {'object-lvl4-1': {}}}}}}
dict.setdefault is an underappreciated method.
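The key property is that setdefault inserts the default only when the key is missing, and in either case returns the value now stored, which is what lets the loop above keep descending into the same nested dict:

```python
d = {}
inner = d.setdefault('key', {})   # 'key' missing: stores {} and returns it
inner['x'] = 1
again = d.setdefault('key', {})   # 'key' present: the {} default is ignored
```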
A:
This will do it, and let's you add other values as well, instead of being limited to empty dicts at the leaves:
def insert_in_dictionary_tree_at_address(dictionary, address, value):
    if len(address) == 0:
        pass
    elif len(address) == 1:
        dictionary[address[0]] = value
    else:
        this = address[0]
        remainder = address[1:]
        if not dictionary.has_key(this):
            dictionary[this] = dict()
        insert_in_dictionary_tree_at_address(dictionary[this], remainder, value)

addresses = [
    ('object-top-1','object-lvl1-1','object-lvl2-1'),
    ('object-top-2','object-lvl1-1','object-lvl2-2','object-lvl3-1'),
    ('object-top-1','object-lvl1-1','object-lvl2-3'),
    ('object-top-2','object-lvl1-2','object-lvl2-4','object-lvl3-2','object-lvl4-1'),
]

dictionary = dict()
for address in addresses:
    insert_in_dictionary_tree_at_address(dictionary, address, dict())

def print_dictionary_tree(dictionary, prefix="    ", accumulated=""):
    if type(dictionary) is dict and len(dictionary) > 0:
        for (key, value) in dictionary.items():
            print accumulated + str(key) + ":"
            print_dictionary_tree(value, prefix, accumulated + prefix)
    else:
        print accumulated + str(dictionary)

print_dictionary_tree(dictionary)

Output:
object-top-1:
    object-lvl1-1:
        object-lvl2-1:
            {}
        object-lvl2-3:
            {}
object-top-2:
    object-lvl1-2:
        object-lvl2-4:
            object-lvl3-2:
                object-lvl4-1:
                    {}
    object-lvl1-1:
        object-lvl2-2:
            object-lvl3-1:
                {}
Q:
How to iterate a dict of dynamic "depths" in python?
I have a dict data structure with various "depths".
By "depths" I mean for example:
When depth is 1, dict will be like:
{'str_key1': int_value1, 'str_key2': int_value2}
When depth is 2, dict will be like:
{'str_key1':
    {'str_key1_1': int_value1_1,
     'str_key1_2': int_value1_2},
 'str_key2':
    {'str_key2_1': int_value2_1,
     'str_key2_2': int_value2_2}}
so on and so forth.
When I need to process the data, now I'm doing this:
def process(keys, value):
    # do sth with keys and value
    pass

def iterate(depth, dict_data):
    if depth == 1:
        for k, v in dict_data.items():
            process([k], v)
    if depth == 2:
        for k, v in dict_data.items():
            for kk, vv in v.items():
                process([k, kk], vv)
    if depth == 3:
        .........
So I need n for loops when depth is n. As depth can go up to 10, I'm wondering if there is a more dynamic way to do the iteration without having to write out all the if and for clauses.
Thanks.
A:
I'm not sure why everybody's thinking in terms of recursion (or recursion elimination) -- I'd just do depth steps, each of which rebuilds a list by expanding it one further level down.
E.g.:
def itr(depth, d):
    cp = [([], d)]
    for _ in range(depth):
        cp = [(lk+[k], v) for lk, d in cp for k, v in d.items()]
    for lk, ad in cp:
        process(lk, ad)
easy to "expand" with longer identifiers and lower code density if it need to be made more readable for instructional purposes, but I think the logic is simple enough that it may not need such treatment (and, verbosity for its own sake has its drawbacks, too;-).
For example:
d = {'str_key1':
        {'str_key1_1': 'int_value1_1',
         'str_key1_2': 'int_value1_2'},
     'str_key2':
        {'str_key2_1': 'int_value2_1',
         'str_key2_2': 'int_value2_2'}}

def process(lok, v):
    print lok, v

itr(2, d)
prints
['str_key2', 'str_key2_2'] int_value2_2
['str_key2', 'str_key2_1'] int_value2_1
['str_key1', 'str_key1_1'] int_value1_1
['str_key1', 'str_key1_2'] int_value1_2
(if some specific order is desired, appropriate sorting can of course be performed on cp).
A:
The obvious answer is to use recursion. But, you can do something slick with Python here to flatten the dictionary. This is still fundamentally recursive --- we are just implementing our own stack.
def flatten(di):
    stack = [di]
    while stack:
        e = stack[-1]
        for k, v in e.items():
            if isinstance(v, dict):
                stack.append(v)
            else:
                yield k, v
        stack.remove(e)
Then, you can do something like:
for k, v in flatten(mycomplexdict):
    process(k, v)
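Put together as a self-contained check (note that the yielded pairs carry only each leaf's own key, not the full path down to it):

```python
def flatten(di):
    stack = [di]
    while stack:
        e = stack[-1]
        for k, v in e.items():
            if isinstance(v, dict):
                stack.append(v)   # visit nested dicts on later passes
            else:
                yield k, v
        stack.remove(e)

d = {'a': 1, 'b': {'b1': 1, 'b2': {'bb1': 3}}}
flat = list(flatten(d))
```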
A:
Recursion is your friend:
def process(keys, value):
    # do sth with keys and value
    pass

def iterate(depth, dict_data):
    iterate_r(depth, dict_data, [])

def iterate_r(depth, data, keys):
    if depth == 0:
        process(keys, data)
    else:
        for k, v in data.items():
            iterate_r(depth-1, v, keys+[k])
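A quick self-contained check of this recursive approach; passing process in as a parameter (my adaptation, so results can be collected) gives:

```python
def iterate(depth, dict_data, process):
    def iterate_r(depth, data, keys):
        if depth == 0:
            process(keys, data)
        else:
            for k, v in data.items():
                iterate_r(depth - 1, v, keys + [k])
    iterate_r(depth, dict_data, [])

seen = []
d = {'a': {'x': 1, 'y': 2}, 'b': {'z': 3}}
iterate(2, d, lambda keys, value: seen.append((tuple(keys), value)))
```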
A:
Recursive; just keep in mind Python can only recurse 1000 times by default:
def process(key, value):
    print key, value

def process_dict(dict, callback):
    for k, v in dict.items():
        if hasattr(v, 'items'):
            process_dict(v, callback)
        else:
            callback(k, v)

d = {'a': 1, 'b': {'b1': 1, 'b2': 2, 'b3': {'bb1': 1}}}
process_dict(d, process)
Prints:
a 1
b1 1
b2 2
bb1 1
A:
Assuming you want a fixed depth (most other answers seem to assume you want to recurse to max depth), and you need to preserve the path as in your original question, here's the most straightforward solution:
def process_dict(d, depth, callback, path=()):
    for k, v in d.iteritems():
        if depth == 1:
            callback(path + (k,), v)
        else:
            process_dict(v, depth - 1, callback, path + (k,))
Here's an example of it in action:
>>> a_dict = {
...     'dog': {
...         'red': 5,
...         'blue': 6,
...     },
...     'cat': {
...         'green': 7,
...     },
... }
>>> def my_callback(k, v):
...     print (k, v)
...
>>> process_dict(a_dict, 1, my_callback)
(('dog',), {'blue': 6, 'red': 5})
(('cat',), {'green': 7})
>>> process_dict(a_dict, 2, my_callback)
(('dog', 'blue'), 6)
(('dog', 'red'), 5)
(('cat', 'green'), 7)
Q:
Python urllib2 URLError HTTP status code.
I want to grab the HTTP status code once it raises a URLError exception:
I tried this but didn't help:
except URLError, e:
    logger.warning('It seems like the server is down. Code:' + str(e.code))
A:
You shouldn't check for a status code after catching URLError, since that exception can be raised in situations where there's no HTTP status code available, for example when you're getting connection refused errors.
Use HTTPError to check for HTTP specific errors, and then use URLError to check for other problems:
try:
    urllib2.urlopen(url)
except urllib2.HTTPError, e:
    print e.code
except urllib2.URLError, e:
    print e.args
Of course, you'll probably want to do something more clever than just printing the error codes, but you get the idea.
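One thing worth knowing when doing something cleverer: HTTPError is a subclass of URLError (in Python 3 both live in urllib.error), which is exactly why the HTTPError clause must come first above. A small illustration, Python 3 names shown:

```python
from urllib.error import HTTPError, URLError

def describe(exc):
    # Check the more specific subclass first, mirroring the except order.
    if isinstance(exc, HTTPError):
        return 'http %d' % exc.code
    if isinstance(exc, URLError):
        return 'no http status: %s' % exc.reason
    return 'unknown'

e = HTTPError('http://example.com', 503, 'Service Unavailable', None, None)
```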
A:
Not sure why you are getting this error. If you are using urllib2 this should help:
import urllib2
from urllib2 import URLError
try:
    urllib2.urlopen(url)
except URLError, e:
    print e.code
Q:
Testing Python console programs with Unicode strings in NetBeans 6.9
I try to run the following simple code in NetBeans 6.9
s = u"\u00B0 Celsius"
print u"{0}".format(s)
But I get the following error:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xb0' in position 0: ordinal not in range(128)
A:
NetBeans's console apparently isn't properly set up to handle printing non-ASCII unicode strings.
In general, you should avoid printing unicode strings without explicitly encoding them first (e.g. u_str.encode(some_codec)).
In your specific case, you can probably just get away with:
print u'{0}'.format(s).encode('utf-8')
A:
You got a unicode string that you want to encode. Assuming that you want UTF-8 encoding use:
s.encode('utf-8')
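Putting the two answers together in a runnable form (the u prefix is valid in Python 2 and again in Python 3.3+; UTF-8 encodes U+00B0 as the two bytes C2 B0):

```python
s = u"\u00B0 Celsius"            # degree sign followed by text
encoded = s.encode("utf-8")      # encode explicitly before printing
assert encoded == b"\xc2\xb0 Celsius"
assert encoded.decode("utf-8") == s   # round-trips back to the original
```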
| Testing Python console programs with Unicode strings in NetBeans 6.9 | I try to run the following simple code in NetBeans 6.9
s = u"\u00B0 Celsius"
print u"{0}".format(s)
But I get the following error:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xb0' in position 0: ordinal not in range(128)
| [
"NetBeans's console apparently isn't properly set up to handle printing non-ASCII unicode strings.\nIn general, you should avoid printing unicode strings without explicitly encoding them (e.g. u_str.encode(some_codec) first.\nIn your specific case, you can probably just get away with:\nprint u'{0}'.format(s).encode... | [
4,
0
] | [] | [] | [
"netbeans",
"python",
"unicode"
] | stackoverflow_0003465944_netbeans_python_unicode.txt |
Q:
Using recursion in Python class methods
NB Noob alert ... !
I am trying to use recursion in a Python class method, but with limited results.
I'm trying to build a car class with very basic attributes: id, position in a one-lane road (represented by an integer), and velocity. One of the functions I have is used to return which car id is in front of this one -- i.e. if we have this class:
class Car:
    def __init__(self, position, id, velocity):
        self.position = position
        self.id = id
        self.velocity = velocity
Now, I've come up with the following class method (additional details below the code):
def findSuccessorCar(self, cars):
    successorCar = ""
    smallestGapFound = 20000000
    for car in cars:
        if car.id == self.id: continue
        currentGap = self.calculateGap(car)
        if (currentGap > -1) and (currentGap < smallestGapFound):
            smallestGapFound = currentGap
            successorCar = car
    if successorCar == "":
        return 1  # calling code checks for 1 as an error code
    else:
        return successorCar
The plan is to create car objects, then store them in a list. Each time the findSuccessorMethod is called, this global list of cars is passed to it, e.g.
c1 = testCar.Car(4, 5, 1) # position, pos_y, Vel, ID
c2 = testCar.Car(7, 9, 2)
c3 = testCar.Car(9, 1, 2)
cars = [c1, c2, c3]
c1_succ = c1.findSuccessorCar(cars)
This works fine: the find successor car function will say that car c2 is in front of car c1 (position 7 ahead of position 4).
However, I want car c1 to work out what car is in front of its immediate successor -- that is, which car is in front of the car in front, which in this case is car c3. My thinking was that if I did c1_succ.findSuccessorCars(cars) then this should work fine: doing type(c1_succ) shows it is an instance and hasattr shows that it has the anticipated object attributes.
However, when I do try to execute c1_succ.findSuccessorCars(cars), an integer is returned. Hence, I am confused -- why doesn't this work? Why can you not recursively execute a class method in this fashion? Where does this integer come from?
NB Gut feel says that this has something to do with the self declaration, and that I'll need to modify my code so that as well as a global list of cars, there'll need to be a global list of their current positions, or another class method, e.g.
findSuccessorsSuccessor (yes, fully aware of crummy naming!). However, I am interested to understand why this recursive approach does not work.
UPDATE
Here is the requested code for calculating a gap between 2 cars -- I appreciate it is very basic, so not too much laughter at the back please.
def calculateGap(self, car):
    ''' Calculate the gap between two cars
    '''
    thisCar = self
    otherCar = car
    gap = otherCar.position_x - thisCar.position_x
    return gap
A:
What you're calling a class method is actually an instance method. Class methods operate on the class, and instance methods operate on the instance. Here, we're dealing with Car instances, not the Car class itself.
class Car(object):
    def __init__(self, position, id, velocity):
        self.position = position
        self.id = id
        self.velocity = velocity

    def __eq__(self, other):
        return self.id == other.id

    def __str__(self):
        return 'Car(%d, %d, %d)' % (self.position, self.id, self.velocity)

    def calculateGap(self, other):
        return other.position - self.position

    def findSuccessor(self, cars):
        ret = smallestGap = None
        for car in cars:
            if car == self:
                continue
            gap = self.calculateGap(car)
            if gap < 0:
                continue
            if smallestGap is None or gap < smallestGap:
                ret, smallestGap = car, gap
        return ret

    def findNthSuccessor(self, n, cars):
        cur = self
        for x in xrange(n):
            cur = cur.findSuccessor(cars)
            if cur is None:
                return None
        return cur
c1 = Car(4, 5, 1)
c2 = Car(7, 9, 2)
c3 = Car(9, 1, 2)
cars = [c1, c2, c3]
print c1.findSuccessor(cars)
print c1.findSuccessor(cars).findSuccessor(cars)
print c1.findNthSuccessor(2, cars)
Output:
Car(7, 9, 2)
Car(9, 1, 2)
Car(9, 1, 2)
A:
Your method does work in theory; this is an implementation bug. That said, it is not the right way to do things; specifically, findSuccessorCar should not be a class method of Car. This is because the list of Car instances is a separate construct; the class Car doesn't and shouldn't know anything about it. If you wanted to make a class for it you should make a Road which is a list of Cars, and put findSuccessorCar on that.
That said, I don't see why you can't do
import operator
cars.sort( key = operator.attrgetter( "position" ) )
to sort the list of cars in position order. I think you're implementing your own sorting algorithm to find the successor car?
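For example, with a minimal Car re-declared so the sketch is self-contained, sorting by position reduces "find the successor" to an index lookup (successor is a made-up helper, not part of the question's class):

```python
import operator

class Car(object):
    def __init__(self, position, id, velocity):
        self.position = position
        self.id = id
        self.velocity = velocity

def successor(car, cars):
    """Return the nearest car ahead of `car`, or None if it leads."""
    ordered = sorted(cars, key=operator.attrgetter("position"))
    i = ordered.index(car)                 # identity-based lookup
    return ordered[i + 1] if i + 1 < len(ordered) else None

c1, c2, c3 = Car(4, 5, 1), Car(7, 9, 2), Car(9, 1, 2)
cars = [c3, c1, c2]                        # list order does not matter
assert successor(c1, cars) is c2           # the car ahead of c1
assert successor(successor(c1, cars), cars) is c3   # chaining works
assert successor(c3, cars) is None         # c3 is the lead car
```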
Other points of note: you should use exceptions (raise BadCarMojoError) to indicate failure, not magic return codes; classmethods traditionally use cls instead of self as the first argument; and Car should inherit from object.
import bisect

class Car( object ):
    def __init__( self, position, id, velocity ):
        self.position = position
        self.id = id
        self.velocity = velocity

    def __lt__( self, other ):
        return self.position < other.position

class Road( object ):
    def __init__( self ):
        self.cars = [ ]

    def driveOn( self, car ):
        bisect.insort( self.cars, car )

    def successor( self, car ):
        i = bisect.bisect_left( self.cars, car )
        if i + 1 >= len( self.cars ):
            raise ValueError( 'No car found ahead of: %r' % ( car, ) )
        return self.cars[ i + 1 ]
c1 = Car( 4, 5, 1 )
c2 = Car( 7, 9, 2 )
c3 = Car( 9, 1, 2 )
c1 < c2
road = Road( )
for car in ( c1, c2, c3 ):
    road.driveOn( car )
c1_succ = road.successor( c1 )
| Using recursion in Python class methods | NB Noob alert ... !
I am trying to use recursion in a Python class method, but with limited results.
I'm trying to build a car class with very basic attributes: id, position in a one-lane road (represented by an integer), and velocity. One of the functions I have is used to return which car id is in front of this one -- i.e. if we have this class:
class Car:
    def __init__(self, position, id, velocity):
        self.position = position
        self.id = id
        self.velocity = velocity
Now, I've come up with the following class method (additional details below the code):
def findSuccessorCar(self, cars):
    successorCar = ""
    smallestGapFound = 20000000
    for car in cars:
        if car.id == self.id: continue
        currentGap = self.calculateGap(car)
        if (currentGap > -1) and (currentGap < smallestGapFound):
            smallestGapFound = currentGap
            successorCar = car
    if successorCar == "":
        return 1  # calling code checks for 1 as an error code
    else:
        return successorCar
The plan is to create car objects, then store them in a list. Each time the findSuccessorMethod is called, this global list of cars is passed to it, e.g.
c1 = testCar.Car(4, 5, 1) # position, pos_y, Vel, ID
c2 = testCar.Car(7, 9, 2)
c3 = testCar.Car(9, 1, 2)
cars = [c1, c2, c3]
c1_succ = c1.findSuccessorCar(cars)
This works fine: the find successor car function will say that car c2 is in front of car c1 (position 7 ahead of position 4).
However, I want car c1 to work out what car is in front of its immediate successor -- that is, which car is in front of the car in front, which in this case is car c3. My thinking was that if I did c1_succ.findSuccessorCars(cars) then this should work fine: doing type(c1_succ) shows it is an instance and hasattr shows that it has the anticipated object attributes.
However, when I do try to execute c1_succ.findSuccessorCars(cars), an integer is returned. Hence, I am confused -- why doesn't this work? Why can you not recursively execute a class method in this fashion? Where does this integer come from?
NB Gut feel says that this has something to do with the self declaration, and that I'll need to modify my code so that as well as a global list of cars, there'll need to be a global list of their current positions, or another class method, e.g.
findSuccessorsSuccessor (yes, fully aware of crummy naming!). However, I am interested to understand why this recursive approach does not work.
UPDATE
Here is the requested code for calculating a gap between 2 cars -- I appreciate it is very basic, so not too much laughter at the back please.
def calculateGap(self, car):
    ''' Calculate the gap between two cars
    '''
    thisCar = self
    otherCar = car
    gap = otherCar.position_x - thisCar.position_x
    return gap
| [
"What you're calling a class method is actually an instance method. Class methods operate on the class, and instance methods operate on the instance. Here, we're dealing with Car instances, not the Car class itself.\nclass Car(object):\n def __init__(self, position, id, velocity):\n self.position = positi... | [
5,
2
] | [] | [] | [
"python"
] | stackoverflow_0003466143_python.txt |
Q:
Spawning more than one thread in Python causes RuntimeError
I'm trying to add multithreading to a Python app, and thus started with some toy examples :
import threading
def myfunc(arg1, arg2):
    print 'In thread'
    print 'args are', arg1, arg2
thread = threading.Thread(target=myfunc, args=('asdf', 'jkle'))
thread.start()
thread.join()
This works beautifully, but as soon as I try to start a second thread, I get a RuntimeError :
import threading
def myfunc(arg1, arg2):
    print 'In thread'
    print 'args are', arg1, arg2
thread = threading.Thread(target=myfunc, args=('asdf', 'jkle'))
thread2 = threading.Thread(target=myfunc, args=('1234', '3763763é'))
thread.start()
thread2.start()
thread.join()
thread2.join()
As others seem to have no problem running this code, let me add that I am on Windows 7 x64 Pro with Python 2.6.3 32-bit (if that matters).
A:
thread2 = threading.Thread(target=myfunc, args=('1234', '3763763é'))
Are you declaring the file as UTF-8? (Note the non-ASCII é in the second argument above.)
A:
Can you post the exact error you get?
Runs fine for me (after replacing the é character with an e):
In thread
args areIn thread
asdfargs are jkle1234
3763763e
If I leave the original script you posted and save the file as UTF-8 with BOM on Windows:
In thread
args areIn thread
asdfargs are jkle1234
3763763é
Saving the code you posted as ASCII results in a SyntaxError:
SyntaxError: Non-ASCII character '\xe9' in file threadtest.py on line 8, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details
Environment information:
C:\python -V
Python 2.6.2
C:\cmd
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.
A:
As said in the comments, I think that the problem comes from IDLE itself, and not from my code. Thanks for your help anyway !
I upvoted your answers but will be accepting mine, as there is no real solution to this problem.
A:
Maybe it is because you have a file or project with a name like "threading" or "Thread" in some directory on your path, and you have run it once since this bootup.
| Spawning more than one thread in Python causes RuntimeError | I'm trying to add multithreading to a Python app, and thus started with some toy examples :
import threading
def myfunc(arg1, arg2):
    print 'In thread'
    print 'args are', arg1, arg2
thread = threading.Thread(target=myfunc, args=('asdf', 'jkle'))
thread.start()
thread.join()
This works beautifully, but as soon as I try to start a second thread, I get a RuntimeError :
import threading
def myfunc(arg1, arg2):
    print 'In thread'
    print 'args are', arg1, arg2
thread = threading.Thread(target=myfunc, args=('asdf', 'jkle'))
thread2 = threading.Thread(target=myfunc, args=('1234', '3763763é'))
thread.start()
thread2.start()
thread.join()
thread2.join()
As others seem to have no problem running this code, let me add that I am on Windows 7 x64 Pro with Python 2.6.3 32-bit (if that matters).
| [
"thread2 = threading.Thread(target=myfunc, args=('1234', '3763763é'))\n\nAre you declaring the file as UTF-8?-----------------------------------------------------^\n",
"Can you post the exact error you get?\nRuns fine for me (after replacing the é character with an e):\nIn thread\nargs areIn thread\nasdfargs are ... | [
1,
1,
0,
0
] | [] | [] | [
"multithreading",
"python",
"runtime_error"
] | stackoverflow_0001595772_multithreading_python_runtime_error.txt |
Q:
What's the best layout for a python command line application?
What is the right way (or I'll settle for a good way) to lay out a command line python application of moderate complexity? I've created a python project skeleton using paster, which gave me a few files to start with:
myproj/__init__.py
MyProj.egg-info/
    dependency_links.txt
    entry_points.txt
    PKG-INFO
    SOURCES.txt
    top_level.txt
    zip-safe
setup.cfg
setup.py
I want to know, mainly, where should my program entry point go, and how can I get it installed on the path? Does setuptools create it for me? I'm trying to find this in the HHGTP, but maybe I'm just missing it.
A:
You don't need to create all that, the .egg-info directory is generated by setuptools. You mention the command line, so I assumed you have a 'top level' script somewhere, let's say myproj-bin. Then this would work:
./setup.py
./myproj
./myproj/__init__.py
./scripts
./scripts/myproj-bin
And then put something like this in setup.py:
#! /usr/bin/python
from setuptools import setup
setup(name="myproj",
      description='shows how to create a python package',
      version='123',
      packages=['myproj'],            # python package names here
      scripts=['scripts/myproj-bin'], # scripts here
      )
There's a lot more that you can do if your project is complex, the full manual of setuptools is here: http://peak.telecommunity.com/DevCenter/setuptools.
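With setuptools specifically, an alternative to shipping a file under scripts/ is a console_scripts entry point, in which case setuptools generates the myproj-bin wrapper on the PATH at install time. A sketch (myproj.cli:main is an assumed module:function pair, not something from the question):

```python
#! /usr/bin/python
from setuptools import setup

setup(name="myproj",
      description='shows how to create a python package',
      version='123',
      packages=['myproj'],
      # Instead of scripts=[...]: point at a function and let
      # setuptools generate the command-line wrapper on install.
      entry_points={
          'console_scripts': [
              'myproj-bin = myproj.cli:main',  # assumed module:function
          ],
      },
      )
```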
| What's the best layout for a python command line application? | What is the right way (or I'll settle for a good way) to lay out a command line python application of moderate complexity? I've created a python project skeleton using paster, which gave me a few files to start with:
myproj/__init__.py
MyProj.egg-info/
dependency_links.txt
entry_points.txt
PKG-INFO
SOURCES.txt
top_level.txt
zip-safe
setup.cfg
setup.py
I want to know, mainly, where should my program entry point go, and how can I get it installed on the path? Does setuptools create it for me? I'm trying to find this in the HHGTP, but maybe I'm just missing it.
| [
"You don't need to create all that, the .egg-info directory is generated by setuptools. You mention the command line, so I assumed you have a 'top level' script somewhere, let's say myproj-bin. Then this would work:\n./setup.py\n./myproj\n./myproj/__init__.py\n./scripts\n./scripts/myproj-bin\n\nAnd then put somet... | [
7
] | [] | [] | [
"distribute",
"packaging",
"python",
"setuptools"
] | stackoverflow_0003465045_distribute_packaging_python_setuptools.txt |
Q:
Django templates and variable attributes
I'm using Google App Engine and Django templates.
I have a table that I want to display the objects look something like:
Object Result:
Items = [item1,item2]
Users = [{name='username',item1=3,item2=4},..]
The Django template is:
<table>
<tr align="center">
<th>user</th>
{% for item in result.items %}
<th>{{item}}</th>
{% endfor %}
</tr>
{% for user in result.users %}
<tr align="center">
<td>{{user.name}}</td>
{% for item in result.items %}
<td>{{ user.item }}</td>
{% endfor %}
</tr>
{% endfor %}
</table>
Now the Django documentation states that when it sees a . in variables
it tries several things to get the data, one of which is dictionary lookup, which is exactly what I want, but it doesn't seem to happen...
A:
I found a way of getting at dictionary values by a variable key inside the template.
It's not the nicest way, but it works.
You install a custom filter into Django which receives the key of your dict as a parameter.
To make it work in google app-engine you need to add a file to your main directory,
I called mine django_hack.py which contains this little piece of code
from google.appengine.ext import webapp
register = webapp.template.create_template_register()
def hash(h, key):
    if key in h:
        return h[key]
    else:
        return None

register.filter(hash)
Now that we have this file, all we need to do is tell the app-engine to use it...
we do that by adding this little line to your main file
webapp.template.register_template_library('django_hack')
and in your template view add this template instead of the usual code
{{ user|hash:item }}
And it should work perfectly =)
A:
I'm assuming that the part that doesn't work is {{ user.item }}.
Django will be trying a dictionary lookup, but using the string "item" and not the value of the item loop variable. Django did the same thing when it resolved {{ user.name }} to the name attribute of the user object, rather than looking for a variable called name.
I think you will need to do some preprocessing of the data in your view before you render it in your template.
A:
Or you can use the default Django machinery for resolving attributes in templates, like this:
from django.template import Library, Variable, VariableDoesNotExist

register = Library()   # this lives in a templatetags module

@register.filter
def hash(object, attr):
    pseudo_context = { 'object' : object }
    try:
        value = Variable('object.%s' % attr).resolve(pseudo_context)
    except VariableDoesNotExist:
        value = None
    return value
That just works.
In your template:
{{ user|hash:item }}
A:
@Dave Webb (I haven't been rated high enough to comment yet)
The dot lookups can be summarized like this: when the template system encounters a dot in a variable name, it tries the following lookups, in this order:
* Dictionary lookup (e.g., foo["bar"])
* Attribute lookup (e.g., foo.bar)
* Method call (e.g., foo.bar())
* List-index lookup (e.g., foo[bar])
The system uses the first lookup type that works. It’s short-circuit logic.
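This lookup order can be mimicked in plain Python to see why {{ user.item }} fails: the engine looks up the literal string "item", never the value of the item loop variable. A sketch, not Django's actual resolver:

```python
def resolve(obj, bit):
    """Mimic Django's template dot-lookup order for a single name `bit`."""
    try:                       # 1. dictionary lookup
        return obj[bit]
    except (TypeError, KeyError, IndexError):
        pass
    if hasattr(obj, bit):      # 2. attribute lookup / 3. method call
        attr = getattr(obj, bit)
        return attr() if callable(attr) else attr
    try:                       # 4. list-index lookup
        return obj[int(bit)]
    except (ValueError, TypeError, IndexError):
        raise KeyError(bit)

user = {"name": "username", "item1": 3, "item2": 4}
assert resolve(user, "item1") == 3   # always looks up the literal "item1"
```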
A:
As a replacement for k,v in user.items on Google App Engine using django templates, where user = {'a': 1, 'b': 2, 'c': 3}:
{% for pair in user.items %}
{% for keyval in pair %} {{ keyval }}{% endfor %}<br>
{% endfor %}
a 1
b 2
c 3
pair = (key, value) for each dictionary item.
A:
shouldn't this:
{{ user.item }}
be this?
{{ item }}
there is no user object in the context within that loop....?
| Django templates and variable attributes | I'm using Google App Engine and Django templates.
I have a table that I want to display the objects look something like:
Object Result:
Items = [item1,item2]
Users = [{name='username',item1=3,item2=4},..]
The Django template is:
<table>
<tr align="center">
<th>user</th>
{% for item in result.items %}
<th>{{item}}</th>
{% endfor %}
</tr>
{% for user in result.users %}
<tr align="center">
<td>{{user.name}}</td>
{% for item in result.items %}
<td>{{ user.item }}</td>
{% endfor %}
</tr>
{% endfor %}
</table>
Now the Django documentation states that when it sees a . in variables
it tries several things to get the data, one of which is dictionary lookup, which is exactly what I want, but it doesn't seem to happen...
| [
"I found a \"nicer\"/\"better\" solution for getting variables inside\nIts not the nicest way, but it works.\nYou install a custom filter into django which gets the key of your dict as a parameter\nTo make it work in google app-engine you need to add a file to your main directory,\nI called mine django_hack.py whic... | [
33,
10,
9,
4,
3,
0
] | [] | [] | [
"django",
"google_app_engine",
"python"
] | stackoverflow_0000035948_django_google_app_engine_python.txt |
Q:
Django CMS 2.1.0 App Extension NoReverseMatch TemplateSyntaxError
I'm writing a custom app for Django CMS, but get the following error when trying to view a published entry in the admin:
TemplateSyntaxError at /admin/cmsplugin_publisher/entry/
Caught NoReverseMatch while rendering: Reverse for 'cmsplugin_publisher_entry_detail' with arguments '()' and keyword arguments '{'slug': u'test-german'}' not found.
I can get the app working if I give the app a URL in my main application urls.py, but that ties the app to a fixed URL; I just want to extend Django CMS so the app will be served from whichever page it's added to.
models.py Absolute URL Pattern
@models.permalink
def get_absolute_url(self):
    return ('cmsplugin_publisher_entry_detail', (), {
        'slug': self.slug})
urls/entries.py
from django.conf.urls.defaults import *
from cmsplugin_publisher.models import Entry
from cmsplugin_publisher.settings import PAGINATION, ALLOW_EMPTY, ALLOW_FUTURE
entry_conf_list = {'queryset': Entry.published.all(), 'paginate_by': PAGINATION,}
entry_conf = {'queryset': Entry.published.all(),
              'date_field': 'creation_date',
              'allow_empty': ALLOW_EMPTY,
              'allow_future': ALLOW_FUTURE,
              }
entry_conf_detail = entry_conf.copy()
del entry_conf_detail['allow_empty']
del entry_conf_detail['allow_future']
del entry_conf_detail['date_field']
entry_conf_detail['queryset'] = Entry.objects.all()
urlpatterns = patterns('cmsplugin_publisher.views.entries',
    url(r'^$', 'entry_index', entry_conf_list,
        name='cmsplugin_publisher_entry_archive_index'),
    url(r'^(?P<page>[0-9]+)/$', 'entry_index', entry_conf_list,
        name='cmsplugin_publisher_entry_archive_index_paginated'),
)

urlpatterns += patterns('django.views.generic.list_detail',
    url(r'^(?P<slug>[-\w]+)/$', 'object_detail', entry_conf_detail,
        name='cmsplugin_publisher_entry_detail'),
)
views/entries.py
from django.views.generic.list_detail import object_list
from cmsplugin_publisher.models import Entry
from cmsplugin_publisher.views.decorators import update_queryset
entry_index = update_queryset(object_list, Entry.published.all)
views/decorators.py
def update_queryset(view, queryset, queryset_parameter='queryset'):
    '''Decorator around views based on a queryset passed in parameter,
    which will force the update despite the cache.
    Related to issue http://code.djangoproject.com/ticket/8378'''
    def wrap(*args, **kwargs):
        '''Regenerate the queryset before passing it to the view.'''
        kwargs[queryset_parameter] = queryset()
        return view(*args, **kwargs)
    return wrap
The app integration with Django CMS is explained here: http://github.com/divio/django-cms/blob/master/cms/docs/app_integration.txt
It looks like the issue might be that I'm not correctly returning the RequestContext, as I'm using a mix of generic and custom views in the application.
The CMS App extension py file:
cms_app.py
from django.utils.translation import ugettext_lazy as _
from cms.app_base import CMSApp
from cms.apphook_pool import apphook_pool
from cmsplugin_publisher.settings import APP_MENUS
class PublisherApp(CMSApp):
    name = _('Publisher App Hook')
    urls = ['cmsplugin_publisher.urls']

apphook_pool.register(PublisherApp)
Any pointers appreciated, it's proving to be a tough nut to crack!
A:
Looks like it's a bug in the URLconf parser in Django-CMS 2.1.0beta3, which is fixed in dev. The bug only occurs when including other URLconfs from within an app.
A:
UPDATE:
OK, I think your error originates from get_absolute_url:
@models.permalink
def get_absolute_url(self):
    return ('cmsplugin_publisher_entry_detail', (), {'slug': self.slug})
I suspect it's because this ultimately calls object_detail which expects a positional parameter queryset (see django/views/generic/list_detail.py). You could try changing this to something like:
return ('cmsplugin_publisher_entry_detail', [Entry.objects.all(),], {'slug': self.slug})
A:
I would double-check that urls/entries.py is actually being imported somewhere, otherwise it won't be able to get the reverse match.
| Django CMS 2.1.0 App Extension NoReverseMatch TemplateSyntaxError | I'm writing a custom app for Django CMS, but get the following error when trying to view a published entry in the admin:
TemplateSyntaxError at /admin/cmsplugin_publisher/entry/
Caught NoReverseMatch while rendering: Reverse for 'cmsplugin_publisher_entry_detail' with arguments '()' and keyword arguments '{'slug': u'test-german'}' not found.
I can get the app working if I give the app a URL in my main application urls.py, but that ties the app to a fixed URL; I just want to extend Django CMS so the app will be served from whichever page it's added to.
models.py Absolute URL Pattern
@models.permalink
def get_absolute_url(self):
return ('cmsplugin_publisher_entry_detail', (), {
'slug': self.slug})
urls/entries.py
from django.conf.urls.defaults import *
from cmsplugin_publisher.models import Entry
from cmsplugin_publisher.settings import PAGINATION, ALLOW_EMPTY, ALLOW_FUTURE
entry_conf_list = {'queryset': Entry.published.all(), 'paginate_by': PAGINATION,}
entry_conf = {'queryset': Entry.published.all(),
'date_field': 'creation_date',
'allow_empty': ALLOW_EMPTY,
'allow_future': ALLOW_FUTURE,
}
entry_conf_detail = entry_conf.copy()
del entry_conf_detail['allow_empty']
del entry_conf_detail['allow_future']
del entry_conf_detail['date_field']
entry_conf_detail['queryset'] = Entry.objects.all()
urlpatterns = patterns('cmsplugin_publisher.views.entries',
url(r'^$', 'entry_index', entry_conf_list,
name='cmsplugin_publisher_entry_archive_index'),
url(r'^(?P<page>[0-9]+)/$', 'entry_index', entry_conf_list,
name='cmsplugin_publisher_entry_archive_index_paginated'),
)
urlpatterns += patterns('django.views.generic.list_detail',
url(r'^(?P<slug>[-\w]+)/$', 'object_detail', entry_conf_detail,
name='cmsplugin_publisher_entry_detail'),
)
views/entries.py
from django.views.generic.list_detail import object_list
from cmsplugin_publisher.models import Entry
from cmsplugin_publisher.views.decorators import update_queryset
entry_index = update_queryset(object_list, Entry.published.all)
views/decorators.py
def update_queryset(view, queryset, queryset_parameter='queryset'):
'''Decorator around views based on a queryset passed in parameter which will force the update despite cache
Related to issue http://code.djangoproject.com/ticket/8378'''
def wrap(*args, **kwargs):
'''Regenerate the queryset before passing it to the view.'''
kwargs[queryset_parameter] = queryset()
return view(*args, **kwargs)
return wrap
The app integration with Django CMS is explained here: http://github.com/divio/django-cms/blob/master/cms/docs/app_integration.txt
It looks like the issue might be that I'm not correctly returning the RequestContext, as I'm using a mix of generic and custom views in the application.
The CMS App extension py file:
cms_app.py
from django.utils.translation import ugettext_lazy as _
from cms.app_base import CMSApp
from cms.apphook_pool import apphook_pool
from cmsplugin_publisher.settings import APP_MENUS
class PublisherApp(CMSApp):
name = _('Publisher App Hook')
urls = ['cmsplugin_publisher.urls']
apphook_pool.register(PublisherApp)
Any pointers appreciated, it's proving to be a tough nut to crack!
| [
"Looks like it's a bug in the URLconf parser in Django-CMS 2.1.0beta3, which is fixed in dev. The bug only occurs when including other URLconfs from within an app.\n",
"UPDATE: \nOK, I think your error originates from get_absolute_url:\n@models.permalink\ndef get_absolute_url(self):\n return ('cmsplugin_publi... | [
1,
0,
0
] | [] | [] | [
"django",
"django_cms",
"python"
] | stackoverflow_0003430383_django_django_cms_python.txt |
Q:
Copying a file with access locks, forcefully with python
I'm trying to copy an Excel sheet with Python, but I keep getting an "access denied" error message. The file is closed and is not shared. It has macros, though.
Is there any way I can copy the file forcefully with Python?
thanks.
A:
If you do not have sufficient file permissions you will not be able to access the file. In that case you will have to execute your Python program as a user with sufficient permissions.
If on the other hand the file is locked using other means specific to Excel then I am not sure what exactly is the solution. You might have to work around the protection using other means which will require a fair amount of understanding of how Excel sheets are "locked". I don't know of any Python libraries that will do this for you.
| Copying a file with access locks, forcefully with python | I'm trying to copy an Excel sheet with Python, but I keep getting an "access denied" error message. The file is closed and is not shared. It has macros, though.
Is there any way I can copy the file forcefully with Python?
thanks.
| [
"If you do not have sufficient file permissions you will not be able to access the file. In that case you will have to execute your Python program as an user with sufficient permissions.\nIf on the other hand the file is locked using other means specific to Excel then I am not sure what exactly is the solution. You... | [
0
] | [] | [] | [
"excel_2003",
"python"
] | stackoverflow_0003465231_excel_2003_python.txt |
Q:
b = a vs b = a[:] in strings|lists
in Lists -
I can always check that b=a points to same object and c=a[:] creates another copy.
>>> a = [1,2,3,4,5]
>>> b = a
>>> c = a[:]
>>> a[0] = 10
>>> b
[10, 2, 3, 4, 5]
>>> c
[1, 2, 3, 4, 5]
In Strings -
I cannot make a change to the original immutable string. How do I confirm for myself that b=a makes b point to the same object, while c = a[:] creates a new copy of the string?
A:
you can use the is operator.
a = 'aaaaa'
b = 'bbbbb'
print a is b
a = b
print a is b
c = a[:]
print c is a
This works because a is b if and only if id(a) == id(b). In CPython at least, id(foo) is just the memory address at which foo is stored. Hence if foo is bar, then foo and bar are literally the same object. It's interesting to note that
a = 'aaaaa'
b = 'aaaaa'
a is b
is True. This is because python interns (at least most) strings so that it doesn't waste memory storing the same string twice and more importantly can compare strings by comparing fixed length pointers in the C implementation instead of comparing the strings byte by byte.
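When identity has to be reliable for strings built at runtime, sys.intern makes the sharing explicit instead of relying on this implementation detail (CPython sketch):

```python
import sys

a = sys.intern("not an identifier!")     # punctuation: not auto-interned
b = sys.intern("not an identifier!")
assert a is b            # intern() guarantees one shared object

c = "".join(["not an ", "identifier!"])  # equal string built at runtime
assert c == a                    # equal by value...
assert sys.intern(c) is a        # ...and intern() maps it to the shared copy
```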
A:
Note that for strings b = a[:] does not actually create a copy, and it's still possible to have equal strings stored in more than one place - see this example.
>>> a="aaaaa"
>>> b=a[:]
>>> b is a
True
>>> b=a[:-1]+a[-1]
>>> b is a
False
>>> b==a
True
>>>
| b = a vs b = a[:] in strings|lists | in Lists -
I can always check that b=a points to same object and c=a[:] creates another copy.
>>> a = [1,2,3,4,5]
>>> b = a
>>> c = a[:]
>>> a[0] = 10
>>> b
[10, 2, 3, 4, 5]
>>> c
[1, 2, 3, 4, 5]
In Strings -
I cannot make a change to the original immutable string. How do I confirm for myself that b=a makes b point to the same object, while c = a[:] creates a new copy of the string?
| [
"you can use the is operator.\na = 'aaaaa'\nb = 'bbbbb'\n\nprint a is b\na = b\nprint a is b\n\nc = a[:]\nprint c is a\n\nThis works because a is b if and only if id(a) == id(b). In CPython at least, id(foo) is just the memory address at which foo is stored. Hence if foo is bar, then foo and bar are literally the s... | [
2,
1
] | [] | [] | [
"python"
] | stackoverflow_0003465897_python.txt |
Q:
How to depend on a system command with python/distutils?
I'm looking for the most elegant way to notify users of my library that they need a specific unix command to ensure that it will work...
When is the best time for my lib to raise an error:
Installation ?
When my app call the command ?
At the import of my lib ?
both?
And also how should you detect that the command is missing (if not commands.getoutput("which CommandIDependsOn"): raise Exception("you need CommandIDependsOn")).
I need advices.
A:
IMO, the best way is to check at install if the user has this specific *nix command.
If you're using distutils to distribute your package, in order to install it you have to do:
python setup.py build
python setup.py install
or simply
python setup.py install (in that case python setup.py build is implicit)
To check if the *nix command is installed, you can subclass the build method in your setup.py like this :
from distutils.core import setup
from distutils.command.build import build as _build
class build(_build):
description = "Custom Build Process"
user_options= _build.user_options[:]
# You can also define extra options like this :
    #user_options.extend([('opt=', None, 'Name of optional option')])
def initialize_options(self):
        # Initialize your extra options here... Not needed in your case
#self.opt = None
_build.initialize_options(self)
def finalize_options(self):
        # Finalize your options here; you can modify values
        # (uncomment together with the self.opt example above)
        #if self.opt is None:
        #    self.opt = "default value"
_build.finalize_options(self)
def run(self):
# Extra Check
# Enter your code here to verify if the *nix command is present
.................
# Start "classic" Build command
_build.run(self)
setup(
....
# Don't forget to register your custom build command
cmdclass = {'build' : build},
....
)
But what if the user uninstall the required command after the package installation? To solve this problem, the only "good" solution is to use a packaging systems such as deb or rpm and put a dependency between the command and your package.
Hope this helps
A:
I wouldn't have any check at all. Document that your library requires this command, and if the user tries to use whatever part of your library needs it, an exception will be raised by whatever runs the command. It should still be possible to import your library and use it, even if only a subset of functionality is offered.
(PS: commands is old and broken and shouldn't be used in new code. subprocess is the hot new stuff.)
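Putting both answers together, a minimal sketch of the runtime check, using subprocess and shutil.which instead of the deprecated commands module (the command name 'CommandIDependsOn' is just a placeholder):

```python
import shutil
import subprocess

def require_command(name):
    """Raise a helpful error if an external command is missing.

    shutil.which needs Python 3.3+; on older versions,
    distutils.spawn.find_executable(name) does the same job.
    """
    if shutil.which(name) is None:
        raise RuntimeError(
            "The external command %r is required but was not found on PATH"
            % name)

def run_tool(args):
    # Check lazily, only when the command is actually needed,
    # as the second answer recommends.
    require_command(args[0])
    return subprocess.check_output(args)
```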
| How to depends of a system command with python/distutils? | I'm looking for the most elegant way to notify users of my library that they need a specific unix command to ensure that it will works...
When is the bet time for my lib to raise an error:
Installation ?
When my app call the command ?
At the import of my lib ?
both?
And also how should you detect that the command is missing (if not commands.getoutput("which CommandIDependsOn"): raise Exception("you need CommandIDependsOn")).
I need advices.
| [
"IMO, the best way is to check at install if the user has this specific *nix command.\nIf you're using distutils to distribute your package, in order to install it you have to do:\n\npython setup.py build\n python setup.py install \n\nor simply\n\npython setup.py install (in that case python setup.py build is impl... | [
5,
4
] | [] | [] | [
"command",
"distutils",
"packaging",
"python"
] | stackoverflow_0003465295_command_distutils_packaging_python.txt |
Q:
Python module to convert from document to html
Is there any Python package which converts an uploaded MS Word document to HTML content? In my application I am uploading a document and I want to convert it into HTML.
Any suggestion on this will help.
Thanks
A:
You could call unoconv from within Python.
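A sketch of what "calling unoconv from within Python" could look like, via subprocess (assumes the unoconv command is installed and on PATH; the file paths are placeholders):

```python
import subprocess

def unoconv_command(src_path, out_dir="."):
    # -f selects the output format, -o the output location
    return ["unoconv", "-f", "html", "-o", out_dir, src_path]

def doc_to_html(src_path, out_dir="."):
    """Convert a Word document to HTML by shelling out to unoconv."""
    subprocess.check_call(unoconv_command(src_path, out_dir))
```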
| Python module to convert from document to html | Is there any python package which converts the uploaded Ms Word document to html content.As in my application am uploading a document and i want to convert it into html
any suggestion o nthis will help
Thanks
| [
"You could call unoconv from within Python.\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0003467636_python.txt |
Q:
Get_by_key_name doesn't work with unicode key names of several characters
I'm using unicode strings for non latin characters as key names for my models.
I can create objects without problems, and the appengine admin shows key name correctly (I'm using chinese characters, and the right characters)
However, MyModel.get_by_key_name() returns None if the key_name is made of several characters.
For 1 character key name, everything works fine.
Does anyone know about that?
Thanks!
A:
Actually, I made some stupid encoding error when testing yesterday, which made me think the error came from the function.
The problem doesn't come from the keys. It is just an error in my algorithm that won't check for keys of 2 characters if there is no object with the 1st character as key name.
| Get_by_key_name doesn't work with unicode key names of several characters | I'm using unicode strings for non latin characters as key names for my models.
I can create objects without problems, and the appengine admin shows key name correctly (I'm using chinese characters, and the right characters)
However, MyModel.get_by_key_name() returns None if the key_name is made of several characters.
For 1 character key name, everything works fine.
Does anyone know about that?
Thanks!
| [
"Actually, I made some stupid encoding error when testing yesterday, which made me think the error came from the function.\nThe problem doesnt come from the keys. It is just an error in my algorithm that won't check for keys of 2 characters if there is no object which the 1st character as keyname.\n"
] | [
0
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0003451983_google_app_engine_python.txt |
Q:
How can Python lists within objects be freed?
I have a Python class containing a list, to which I append() values.
If I delete an object of this class then create a second object later on in the same script, the second object's list is the same as the first's was at the time of deletion.
For example:
class myObj:
a = []
b = False
o = myObj()
o.a.append("I'm still here!")
o.b = True
del o
import gc
gc.collect() # just to be safe
o = myObj()
print(o.a) # ["I'm still here!"]
print(o.b) # False
There is a way to empty the list:
while len(o.a):
o.a.pop()
But that's ridiculous, not to mention slow. Why is it still in memory after garbage collection, and more to the point why is it not overwritten when the class inits? All non-list member vars are handled correctly. append(), extend() and insert() all lead to the same result -- is there another method I should be using?
Here's my full test script. Notice how Python gives an AttributeError if I try to delete a member list directly, even though I can read from it fine.
A:
When you create a variable inside a class declaration, it's a class attribute, not an instance attribute. To create instance attributes you have to do self.a inside a method, e.g. __init__.
Change it to:
class myObj:
def __init__(self):
self.a = []
self.b = False
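A minimal runnable contrast of the two cases (note that o.b = True in the question created an *instance* attribute shadowing the class attribute, which is why b appeared to behave correctly):

```python
class Shared:
    a = []              # class attribute: ONE list shared by every instance

class PerInstance:
    def __init__(self):
        self.a = []     # instance attribute: a fresh list per object

s1 = Shared()
s1.a.append("I'm still here!")
s2 = Shared()
assert s2.a == ["I'm still here!"]   # the "new" object sees the old data

p1 = PerInstance()
p1.a.append("I'm still here!")
p2 = PerInstance()
assert p2.a == []                    # each instance gets its own list
```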
| How can Python lists within objects be freed? | I have a Python class containing a list, to which I append() values.
If I delete an object of this class then create a second object later on in the same script, the second object's list is the same as the first's was at the time of deletion.
For example:
class myObj:
a = []
b = False
o = myObj()
o.a.append("I'm still here!")
o.b = True
del o
import gc
gc.collect() # just to be safe
o = myObj()
print(o.a) # ["I'm still here!"]
print(o.b) # False
There is a way to empty the list:
while len(o.a):
o.a.pop()
But that's ridiculous, not to mention slow. Why is it still in memory after garbage collection, and more to the point why is it not overwritten when the class inits? All non-list member vars are handled correctly. append(), extend() and insert() all lead to the same result -- is there another method I should be using?
Here's my full test script. Notice how Python gives an AttributeError if I try to delete a member list directly, even though I can read from it fine.
| [
"When you create a variable inside a class declaration, it's a class attribute, not an instance attribute. To create instance attributes you have to do self.a inside a method, e.g. __init__.\nChange it to:\nclass myObj:\n def __init__(self):\n self.a = []\n self.b = False\n\n"
] | [
9
] | [] | [] | [
"list",
"python"
] | stackoverflow_0003467704_list_python.txt |
Q:
Python SOAP to MS WebService(SharePoint)(GetListItems)
I am attempting to pull list items from a SharePoint server (via python/suds) and am having some difficulty with making queries to GetListItems. If I provide no other parameters to GetListItems other than the list name, it will return all the items in the default view for that list. I want to query a specific set of items (ID=15) to be returned and a specific set of fields (Date and Description) to be returned for those items. Here is my SOAP packet:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope "xmlns:ns1="http://schemas.microsoft.com/sharepoint/soap/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Header/>
<ns0:Body>
<ns1:GetListItems>
<ns1:listName>MyCalendar</ns1:listName>
<query>
<Where>
<Eq>
<FieldRef Name="_ows_ID">15</FieldRef>
</Eq>
</Where>
</query>
<viewFields>
<FieldRef Name="Description"/>
<FieldRef Name="EventDate"/>
</viewFields>
</ns1:GetListItems>
</ns0:Body>
</SOAP-ENV:Envelope>
This appears to conform to the WSDL, however, when making the query i get back a full list of items with all the fields(as if i did not pass any query XML at all).
Any suggestions for a SOAP noob?
Also, here is my python code that generated this XML:
query = Element('query')
where = Element('Where')
eq = Element('Eq')
eq.append(Element('FieldRef').append(Attribute('Name', '_ows_ID')).setText('15'))
where.append(eq)
query.append(where)
viewFields = Element('viewFields')
viewFields.append(Element('FieldRef').append(Attribute('Name','Description')))
viewFields.append(Element('FieldRef').append(Attribute('Name','EventDate')))
results = c_lists.service.GetListItems('MyCalendar', None, query, viewFields, None, None)
Any help would be greatly appreciated!
A:
I figured this one out. The problem was the CAML query.
1. You need to wrap the CAML 'Query' in a 'query' element.
2. You need to set the proper 'Value Type' Element and attributes.
See my other posting titled 'Sharepoint Filter for List Items(GetListItems)' for more info and code.
Thanks!
Nick
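For reference, a CAML fragment of the shape this answer describes — the inner 'Query' wrapped in a 'query' element, with an explicit Value element carrying a Type attribute. The field name "ID" and the value type "Counter" are typical for a SharePoint list, but verify them against your own list schema:

```xml
<query>
  <Query>
    <Where>
      <Eq>
        <FieldRef Name="ID"/>
        <Value Type="Counter">15</Value>
      </Eq>
    </Where>
  </Query>
</query>
```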
| Python SOAP to MS WebService(SharePoint)(GetListItems) | I am attempting to pull list Items from a sharepoint server(via python/suds) and am having some difficulty with making queries to GetListItems. If i provide no other parameters to GetListItems other than the list Name, it will return all the items in the default view for that List. I want to query a specific set of items(ID=15) to be returned and a specific set of fields(Date and Description) to be returned for those items. Here is my SOAP packet:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope "xmlns:ns1="http://schemas.microsoft.com/sharepoint/soap/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Header/>
<ns0:Body>
<ns1:GetListItems>
<ns1:listName>MyCalendar</ns1:listName>
<query>
<Where>
<Eq>
<FieldRef Name="_ows_ID">15</FieldRef>
</Eq>
</Where>
</query>
<viewFields>
<FieldRef Name="Description"/>
<FieldRef Name="EventDate"/>
</viewFields>
</ns1:GetListItems>
</ns0:Body>
</SOAP-ENV:Envelope>
This appears to conform to the WSDL, however, when making the query i get back a full list of items with all the fields(as if i did not pass any query XML at all).
Any suggestions for a SOAP noob?
Also, here is my python code that generated this XML:
query = Element('query')
where = Element('Where')
eq = Element('Eq')
eq.append(Element('FieldRef').append(Attribute('Name', '_ows_ID')).setText('15'))
where.append(eq)
query.append(where)
viewFields = Element('viewFields')
viewFields.append(Element('FieldRef').append(Attribute('Name','Description')))
viewFields.append(Element('FieldRef').append(Attribute('Name','EventDate')))
results = c_lists.service.GetListItems('MyCalendar', None, query, viewFields, None, None)
Any help would be greatly appreciated!
| [
"I figured this one out. The problem was the CAML query. \n1. You need to wrap the CAML 'Query' in a 'query' element.\n2. You need to set the proper 'Value Type' Element and attributes.\nSee me other posting titled 'Sharepoint Filter for List Items(GetListItems)' for more info and code.\nThanks!\nNick\n"
] | [
2
] | [] | [] | [
"python",
"soap"
] | stackoverflow_0003443578_python_soap.txt |
Q:
Document Similarity: Comparing two documents efficiently
I have a loop that calculates the similarity between two documents. It collects all the tokens in a document and their scores, and places them in dictionary. It then compares the dictionaries
This is what I have so far, it works, but is super slow:
# Doc A
cursor1.execute("SELECT token, tfidf_norm FROM index WHERE doc_id = %s", (docid[i][0]))
doca = cursor1.fetchall()
#convert tuple to a dictionary
doca_dic = dict((row[0], row[1]) for row in doca)
#Doc B
cursor2.execute("SELECT token, tfidf_norm FROM index WHERE doc_id = %s", (docid[j][0]))
docb = cursor2.fetchall()
#convert tuple to a dictionary
docb_dic = dict((row[0], row[1]) for row in docb)
# loop through each token in doca and see if one matches in docb
for x in doca_dic:
if docb_dic.has_key(x):
#calculate the similarity by summing the products of the tf-idf_norm
similarity += doca_dic[x] * docb_dic[x]
print "similarity"
print similarity
I'm pretty new to Python, hence this mess. I need to speed it up, any help would be appreciated.
Thanks.
A:
A Python point: adict.has_key(k) is obsolete in Python 2.X and vanished in Python 3.X. k in adict as an expression has been available since Python 2.2; use it instead. It will be faster (no method call).
An any-language practical point: iterate over the shorter dictionary.
Combined result:
if len(doca_dic) < len(docb_dict):
short_dict, long_dict = doca_dic, docb_dic
else:
short_dict, long_dict = docb_dic, doca_dic
similarity = 0
for x in short_dict:
if x in long_dict:
#calculate the similarity by summing the products of the tf-idf_norm
similarity += short_dict[x] * long_dict[x]
And if you don't need the two dictionaries for anything else, you could create only the A one and iterate over the B (key, value) tuples as they pop out of your B query. After the docb = cursor2.fetchall(), replace all following code by this:
similarity = 0
for b_token, b_value in docb:
if b_token in doca_dic:
similarity += doca_dic[b_token] * b_value
Alternative to the above code: This is doing more work but it's doing more of the iterating in C instead of Python and may be faster.
similarity = sum(
doca_dic[k] * docb_dic[k]
for k in set(doca_dic) & set(docb_dic)
)
Final version of the Python code
# Doc A
cursor1.execute("SELECT token, tfidf_norm FROM index WHERE doc_id = %s", (docid[i][0]))
doca = cursor1.fetchall()
# Doc B
cursor2.execute("SELECT token, tfidf_norm FROM index WHERE doc_id = %s", (docid[j][0]))
docb = cursor2.fetchall()
if len(doca) < len(docb):
short_doc, long_doc = doca, docb
else:
short_doc, long_doc = docb, doca
long_dict = dict(long_doc) # yes, it should be that simple
similarity = 0
for key, value in short_doc:
if key in long_dict:
similarity += long_dict[key] * value
Another practical point: you haven't said which part of it is slow ... working on the dicts or doing the selects? Put some calls of time.time() into your script.
Consider pushing ALL the work onto the database. Following example uses a hardwired SQLite query but the principle is the same.
C:\junk\so>sqlite3
SQLite version 3.6.14
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> create table atable(docid text, token text, score float,
primary key (docid, token));
sqlite> insert into atable values('a', 'apple', 12.2);
sqlite> insert into atable values('a', 'word', 29.67);
sqlite> insert into atable values('a', 'zulu', 78.56);
sqlite> insert into atable values('b', 'apple', 11.0);
sqlite> insert into atable values('b', 'word', 33.21);
sqlite> insert into atable values('b', 'zealot', 11.56);
sqlite> select sum(A.score * B.score) from atable A, atable B
where A.token = B.token and A.docid = 'a' and B.docid = 'b';
1119.5407
sqlite>
And it's worth checking that the database table is appropriately indexed (e.g. one on token by itself) ... not having a usable index is a good way of making an SQL query run very slowly.
Explanation: Having an index on token may make either your existing queries or the "do all the work in the DB" query or both run faster, depending on the whims of the query optimiser in your DB software and the phase of the moon. If you don't have a usable index, the DB will read ALL the rows in your table -- not good.
Creating an index: create index atable_token_idx on atable(token);
Dropping an index: drop index atable_token_idx;
(but do consult the docs for your DB)
A:
What about pushing some of the work on the DB?
With a join you can have a result that is basically
Token A.tfidf_norm B.tfidf_norm
-----------------------------------------
Apple 12.2 11.00
...
Word 29.87 33.21
Zealot 0.00 11.56
Zulu 78.56 0.00
And you just have to scan the cursor and do your operations.
If you don't need to know if one word is in one document and missing in the other one you don't need an outer join, and the list will be the intersection of the two sets. The example I included above assigns automatically a "0" for words missing from one of the two documents. See what your "matching" functions requires.
A:
One sql query can do the job:
SELECT sum(index1.tfidf_norm*index2.tfidf_norm) FROM index index1, index index2 WHERE index1.token=index2.token AND index1.doc_id=? AND index2.doc_id=?
Just replace the '?' with the 2 document id respectively.
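The join-in-the-database approach can be sketched end to end with sqlite3 from the standard library. Note that the question's table is literally called "index", which is a reserved word in most SQL dialects, so this sketch uses "doc_index" instead; the sample rows match the hardwired sqlite session above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE doc_index (doc_id TEXT, token TEXT, tfidf_norm REAL)")
rows = [("a", "apple", 12.2), ("a", "word", 29.67), ("a", "zulu", 78.56),
        ("b", "apple", 11.0), ("b", "word", 33.21), ("b", "zealot", 11.56)]
cur.executemany("INSERT INTO doc_index VALUES (?, ?, ?)", rows)

# Dot product of the two documents' tf-idf vectors, computed in SQL
cur.execute(
    "SELECT sum(i1.tfidf_norm * i2.tfidf_norm) "
    "FROM doc_index i1 JOIN doc_index i2 ON i1.token = i2.token "
    "WHERE i1.doc_id = ? AND i2.doc_id = ?", ("a", "b"))
similarity = cur.fetchone()[0]
```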
| Document Similarity: Comparing two documents efficiently | I have a loop that calculates the similarity between two documents. It collects all the tokens in a document and their scores, and places them in dictionary. It then compares the dictionaries
This is what I have so far, it works, but is super slow:
# Doc A
cursor1.execute("SELECT token, tfidf_norm FROM index WHERE doc_id = %s", (docid[i][0]))
doca = cursor1.fetchall()
#convert tuple to a dictionary
doca_dic = dict((row[0], row[1]) for row in doca)
#Doc B
cursor2.execute("SELECT token, tfidf_norm FROM index WHERE doc_id = %s", (docid[j][0]))
docb = cursor2.fetchall()
#convert tuple to a dictionary
docb_dic = dict((row[0], row[1]) for row in docb)
# loop through each token in doca and see if one matches in docb
for x in doca_dic:
if docb_dic.has_key(x):
#calculate the similarity by summing the products of the tf-idf_norm
similarity += doca_dic[x] * docb_dic[x]
print "similarity"
print similarity
I'm pretty new to Python, hence this mess. I need to speed it up, any help would be appreciated.
Thanks.
| [
"A Python point: adict.has_key(k) is obsolete in Python 2.X and vanished in Python 3.X. k in adict as an expression has been available since Python 2.2; use it instead. It will be faster (no method call).\nAn any-language practical point: iterate over the shorter dictionary.\nCombined result:\nif len(doca_dic) < le... | [
2,
1,
0
] | [] | [] | [
"mysql",
"performance",
"python"
] | stackoverflow_0002437978_mysql_performance_python.txt |
Q:
How to keep text inside a circle using Cairo?
I am drawing a graph using Cairo (pycairo specifically) and I need to know how I can draw text inside a circle without overlapping it, by keeping it inside the bounds of the circle. I have this simple code snippet that draws a letter "a" inside the circle:
'''
Created on May 8, 2010
@author: mrios
'''
import cairo, math
WIDTH, HEIGHT = 1000, 1000
#surface = cairo.PDFSurface ("/Users/mrios/Desktop/exampleplaces.pdf", WIDTH, HEIGHT)
surface = cairo.ImageSurface (cairo.FORMAT_ARGB32, WIDTH, HEIGHT)
ctx = cairo.Context (surface)
ctx.scale (WIDTH/1.0, HEIGHT/1.0) # Normalizing the canvas
ctx.rectangle(0, 0, 1, 1) # Rectangle(x0, y0, x1, y1)
ctx.set_source_rgb(255,255,255)
ctx.fill()
ctx.arc(0.5, 0.5, .4, 0, 2*math.pi)
ctx.set_source_rgb(0,0,0)
ctx.set_line_width(0.03)
ctx.stroke()
ctx.arc(0.5, 0.5, .4, 0, 2*math.pi)
ctx.set_source_rgb(0,0,0)
ctx.set_line_width(0.01)
ctx.set_source_rgb(255,0,255)
ctx.fill()
ctx.set_source_rgb(0,0,0)
ctx.select_font_face("Georgia",
cairo.FONT_SLANT_NORMAL, cairo.FONT_WEIGHT_BOLD)
ctx.set_font_size(1.0)
x_bearing, y_bearing, width, height = ctx.text_extents("a")[:4]
print ctx.text_extents("a")[:4]
ctx.move_to(0.5 - width / 2 - x_bearing, 0.5 - height / 2 - y_bearing)
ctx.show_text("a")
surface.write_to_png ("/Users/mrios/Desktop/node.png") # Output to PNG
The problem is that my labels have variable amount of characters (with a limit of 20) and I need to set the size of the font dynamically. It must fit inside the circle, no matter the size of the circle nor the size of the label. Also, every label has one line of text, no spaces, no line breaks.
Any suggestion?
A:
I had a similar issue, where I need to adjust the size of the font to keep the name of my object within the boundaries of rectangles, not circles. I used a while loop, and kept checking the text extent size of the string, decreasing the font size until it fit.
Here what I did: (this is using C++ under Kylix, a Delphi derivative).
double fontSize = 20.0;
bool bFontFits = false;
while (bFontFits == false)
{
m_pCanvas->Font->Size = (int)fontSize;
TSize te = m_pCanvas->TextExtent(m_name.c_str());
if (te.cx < (width*0.90)) // Allow a little room on each side
{
// Calculate the position
m_labelOrigin.x = rectX + (width/2.0) - (te.cx/2);
        m_labelOrigin.y = rectY + (height/2.0) - (te.cy/2);
m_fontSize = fontSize;
bFontFits = true;
break;
}
fontSize -= 1.0;
}
Of course, this doesn't show error checking. If the rectangle (or your circle) is too small, you'll have to break out of the loop.
A:
Since the size of the circle does not matter you should draw them in the opposite order than your code.
Print the text on screen
Calculate the text boundaries (using text extents)
Draw a circle around the text that is just a little bigger from the text.
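The shrink-to-fit loop from the first answer, ported to Python as a sketch. To keep it testable it takes a `measure_width(size, text)` callback; with pycairo that callback would do roughly `ctx.set_font_size(size); return ctx.text_extents(text)[2]` (hypothetical wiring — adapt to your context object):

```python
def fit_font_size(measure_width, text, max_width,
                  start_size=1.0, step=0.95, min_size=0.01):
    """Shrink the font size geometrically until `text` fits max_width.

    Leaves ~10% slack on the sides, like the C++ version above.
    """
    size = start_size
    while size > min_size and measure_width(size, text) > max_width * 0.9:
        size *= step
    return size
```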
| How to keep text inside a circle using Cairo? | I a drawing a graph using Cairo (pycairo specifically) and I need to know how can I draw text inside a circle without overlapping it, by keeping it inside the bounds of the circle. I have this simple code snippet that draws a letter "a" inside the circle:
'''
Created on May 8, 2010
@author: mrios
'''
import cairo, math
WIDTH, HEIGHT = 1000, 1000
#surface = cairo.PDFSurface ("/Users/mrios/Desktop/exampleplaces.pdf", WIDTH, HEIGHT)
surface = cairo.ImageSurface (cairo.FORMAT_ARGB32, WIDTH, HEIGHT)
ctx = cairo.Context (surface)
ctx.scale (WIDTH/1.0, HEIGHT/1.0) # Normalizing the canvas
ctx.rectangle(0, 0, 1, 1) # Rectangle(x0, y0, x1, y1)
ctx.set_source_rgb(255,255,255)
ctx.fill()
ctx.arc(0.5, 0.5, .4, 0, 2*math.pi)
ctx.set_source_rgb(0,0,0)
ctx.set_line_width(0.03)
ctx.stroke()
ctx.arc(0.5, 0.5, .4, 0, 2*math.pi)
ctx.set_source_rgb(0,0,0)
ctx.set_line_width(0.01)
ctx.set_source_rgb(255,0,255)
ctx.fill()
ctx.set_source_rgb(0,0,0)
ctx.select_font_face("Georgia",
cairo.FONT_SLANT_NORMAL, cairo.FONT_WEIGHT_BOLD)
ctx.set_font_size(1.0)
x_bearing, y_bearing, width, height = ctx.text_extents("a")[:4]
print ctx.text_extents("a")[:4]
ctx.move_to(0.5 - width / 2 - x_bearing, 0.5 - height / 2 - y_bearing)
ctx.show_text("a")
surface.write_to_png ("/Users/mrios/Desktop/node.png") # Output to PNG
The problem is that my labels have variable amount of characters (with a limit of 20) and I need to set the size of the font dynamically. It must fit inside the circle, no matter the size of the circle nor the size of the label. Also, every label has one line of text, no spaces, no line breaks.
Any suggestion?
| [
"I had a similar issue, where I need to adjust the size of the font to keep the name of my object within the boundaries of rectangles, not circles. I used a while loop, and kept checking the text extent size of the string, decreasing the font size until it fit.\nHere what I did: (this is using C++ under Kylix, a D... | [
3,
3
] | [] | [] | [
"cairo",
"graphics",
"pycairo",
"python"
] | stackoverflow_0002793093_cairo_graphics_pycairo_python.txt |
Q:
Trying to come up with a recursive function to expand a tree in Python
I have table that looks like this:
id | parentid | name
---------------------
1 | 0 | parent1
---------------------
2 | 0 | parent2
---------------------
3 | 1 | child1
---------------------
4 | 3 | subchild1
I'm now trying to come up with an efficient way to take that database data and create a Python dictionary.
Basically, I want to be able to do:
tree = expand(Session.query(mytable).all())
print tree['parent2']['child2']
# result would be 'subchild1'
I'm at a complete loss with how to do this... I've been messing around with the following function but I can't get it working. Any help would be appreciated.
def expand(tree):
parents = [i for i in tree if i.parentid == 0]
for parent in parents:
children = expand(parent)
A:
If I understand correctly, the items whose parent id is 0 are the root ones, i.e. the first level?
If so, your method should look like:
def expand(tree, id):
expanded_tree = {}
parents = [i for i in tree if i.parentid == id]
for parent in parents:
expanded_tree[parent.name] = expand(tree, parent.id)
return expanded_tree
and you'd start it like:
tree = expand(Session.query(mytable).all(), 0)
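The recursive version can be exercised without a database by standing in for the ORM rows with a namedtuple (Row here is just an illustrative stand-in, using the table from the question):

```python
from collections import namedtuple

Row = namedtuple("Row", "id parentid name")

def expand(tree, id):
    expanded_tree = {}
    for parent in (i for i in tree if i.parentid == id):
        expanded_tree[parent.name] = expand(tree, parent.id)
    return expanded_tree

rows = [Row(1, 0, "parent1"), Row(2, 0, "parent2"),
        Row(3, 1, "child1"), Row(4, 3, "subchild1")]
tree = expand(rows, 0)
assert tree == {"parent1": {"child1": {"subchild1": {}}}, "parent2": {}}
```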
A:
Your example doesn't match the data given but this should be what you want.
It's not recursive, because recursion does not make sense here: The input data has no recursive structure (that's what we are creating), so all you can write as a recursion is the loop ... and that is a pretty pointless thing to do in Python.
data = [ (1,0,"parent1"), (2,0,"parent2"), (3,1,"child1"), (4,3,"child2")]
# first, you have to sort this by the parentid
# this way the parents are added before their children
data.sort(key=lambda x: x[1])
def make_tree( data ):
treemap = {} # for each id, give the branch to append to
trees = {}
for id,parent,name in data:
# {} is always a new branch
if parent == 0: # roots
# root elements are added directly
trees[name] = treemap[id] = {}
else:
# we add to parents by looking them up in `treemap`
treemap[parent][name] = treemap[id] = {}
return trees
tree = make_tree( data )
print tree['parent1']['child1'].keys() ## prints all children: ["child2"]
| Trying to come up with a recursive function to expand a tree in Python | I have table that looks like this:
id | parentid | name
---------------------
1 | 0 | parent1
---------------------
2 | 0 | parent2
---------------------
3 | 1 | child1
---------------------
4 | 3 | subchild1
I'm now trying to come up with an efficient way to take that database data and create a Python dictionary.
Basically, I want to be able to do:
tree = expand(Session.query(mytable).all())
print tree['parent2']['child2']
# result would be 'subchild1'
I'm at a complete loss with how to do this... I've been messing around with the following function but I can't get it working. Any help would be appreciated.
def expand(tree):
parents = [i for i in tree if i.parentid == 0]
for parent in parents:
children = expand(parent)
| [
"If I understand correctly, the item for which their parent id is 0 are the root one or the first level ?\nIf so, your method should look like:\ndef expand(tree, id):\n expanded_tree = {}\n\n parents = [i for i in tree if i.parentid == id]\n\n for parent in parents:\n expanded_tree[parent.name] = ex... | [
2,
1
] | [] | [] | [
"python",
"recursion",
"tree"
] | stackoverflow_0003467723_python_recursion_tree.txt |
Q:
Python HTTPS client with basic authentication via proxy
From Python, I would like to retrieve content from a web site via HTTPS with basic authentication. I need the content on disk. I am on an intranet, trusting the HTTPS server. Platform is Python 2.6.2 on Windows.
I have been playing around with urllib2, however did not succeed so far.
I have a solution running, calling wget via os.system():
wget_cmd = r'\path\to\wget.exe -q -e "https_proxy = http://fqdn.to.proxy:port" --no-check-certificate --http-user="username" --http-password="password" -O path\to\output https://fqdn.to.site/content'
I would like to get rid of the os.system(). Is that possible in Python?
A:
Try this (notice that you'll have to fill in the realm of your server also):
import urllib2
authinfo = urllib2.HTTPBasicAuthHandler()
authinfo.add_password(realm='Fill In Realm Here',
uri='https://fqdn.to.site/content',
user='username',
passwd='password')
proxy_support = urllib2.ProxyHandler({"https" : "http://fqdn.to.proxy:port"})
opener = urllib2.build_opener(proxy_support, authinfo)
fp = opener.open("https://fqdn.to.site/content")
open(r"path\to\output", "wb").write(fp.read())
A:
Proxy and https weren't working together for a long time with urllib2. It will be fixed in the next released version of Python 2.6 (v2.6.3).
In the meantime you can reimplement the correct support, that's what we did for mercurial: http://hg.intevation.org/mercurial/crew/rev/59acb9c7d90f
A:
You could try this too:
http://code.google.com/p/python-httpclient/
(It also supports the verification of the server certificate.)
| Python HTTPS client with basic authentication via proxy | From Python, I would like to retrieve content from a web site via HTTPS with basic authentication. I need the content on disk. I am on an intranet, trusting the HTTPS server. Platform is Python 2.6.2 on Windows.
I have been playing around with urllib2, however did not succeed so far.
I have a solution running, calling wget via os.system():
wget_cmd = r'\path\to\wget.exe -q -e "https_proxy = http://fqdn.to.proxy:port" --no-check-certificate --http-user="username" --http-password="password" -O path\to\output https://fqdn.to.site/content'
I would like to get rid of the os.system(). Is that possible in Python?
| [
"Try this (notice that you'll have to fill in the realm of your server also):\nimport urllib2\nauthinfo = urllib2.HTTPBasicAuthHandler()\nauthinfo.add_password(realm='Fill In Realm Here',\n uri='https://fqdn.to.site/content',\n user='username',\n passwd... | [
3,
3,
0
] | [] | [] | [
"basic_authentication",
"https",
"proxy",
"python"
] | stackoverflow_0001453264_basic_authentication_https_proxy_python.txt |
Q:
Python doesn't have opcode cacher?
I'm currently using PHP. I plan to start using Django for some of my next projects.
But I don't have any experience with Python. After some searching, I still can't find a Python opcode cacher.
(There are lots of opcode cacher for PHP: APC, eAccelerator, Xcache, ...)
A:
It's automatic in Python -- a compiled .pyc file will appear magically.
A:
Python doesn't need one the same way PHP needs it. Python doesn't throw the bytecode away after execution, it keeps it around (as .pyc files).
A:
It's built in: http://pyfaq.infogami.com/how-do-i-create-a-pyc-file
Python can compile to *.pyc files that effectively do the same thing.
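The automatic caching can be made explicit with the stdlib py_compile module; normally CPython does this on first import (in Python 3 the cache lives in a __pycache__ directory next to the source):

```python
import os
import py_compile
import tempfile

# Write a tiny throwaway module, then byte-compile it explicitly
src_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "example_mod.py")
with open(src, "w") as f:
    f.write("ANSWER = 42\n")

pyc_path = py_compile.compile(src)  # returns the path of the .pyc file
assert pyc_path.endswith(".pyc")
assert os.path.exists(pyc_path)
```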
| Python doesn't have opcode cacher? | I'm currently using PHP. I plan to start using Django for some of my next project.
But I don't have any experience with Python. After some searching, I still can't find a Python opcode cacher.
(There are lots of opcode cacher for PHP: APC, eAccelerator, Xcache, ...)
| [
"It's automatic in Python -- a compiled .pyc file will appear magically.\n",
"Python doesn't need one the same way PHP needs it. Python doesn't throw the bytecode away after execution, it keeps it around (as .pyc files).\n",
"It's built in: http://pyfaq.infogami.com/how-do-i-create-a-pyc-file\nPython can compil... | [
9,
2,
1
] | [] | [] | [
"opcode",
"opcode_cache",
"python"
] | stackoverflow_0003468243_opcode_opcode_cache_python.txt |
Q:
How does sympy work? How does it interact with the interactive Python shell, and how does the interactive Python shell work?
What happens internally when I press Enter?
My motivation for asking, besides plain curiosity, is to figure out what happens when you
from sympy import *
and enter an expression. How does it go from Enter to calling
__sympifyit_wrapper(a,b)
in sympy.core.decorators? (That's the first place winpdb took me when I tried inspecting an evaluation.) I would guess that there is some built-in eval function that gets called normally, and is overridden when you import sympy?
A:
All right after playing around with it some more I think I've got it.. when I first asked the question I didn't know about operator overloading.
So, what's going on in this python session?
>>> from sympy import *
>>> x = Symbol('x')
>>> x + x
2*x
It turns out there's nothing special about how the interpreter evaluates the expression; the important thing is that python translates
x + x
into
x.__add__(x)
and Symbol inherits from the Basic class, which defines __add__(self, other) to return Add(self, other). (These classes are found in sympy.core.symbol, sympy.core.basic, and sympy.core.add if you want to take a look.)
So as Jerub was saying, Symbol.__add__() has a decorator called _sympifyit which basically converts the second argument of a function into a sympy expression before evaluating the function, in the process returning a function called __sympifyit_wrapper which is what I saw before.
Using objects to define operations is a pretty slick concept; by defining your own operators and string representations you can implement a trivial symbolic algebra system quite easily:
symbolic.py --
class Symbol(object):
def __init__(self, name):
self.name = name
def __add__(self, other):
return Add(self, other)
def __repr__(self):
return self.name
class Add(object):
def __init__(self, left, right):
self.left = left
self.right = right
    def __repr__(self):
        return '{0}+{1}'.format(self.left, self.right)
Now we can do:
>>> from symbolic import *
>>> x = Symbol('x')
>>> x+x
x+x
With a bit of refactoring it can easily be extended to handle all basic arithmetic:
class Basic(object):
def __add__(self, other):
return Add(self, other)
def __radd__(self, other): # if other hasn't implemented __add__() for Symbols
return Add(other, self)
def __mul__(self, other):
return Mul(self, other)
def __rmul__(self, other):
return Mul(other, self)
# ...
class Symbol(Basic):
def __init__(self, name):
self.name = name
def __repr__(self):
return self.name
class Operator(Basic):
def __init__(self, symbol, left, right):
self.symbol = symbol
self.left = left
self.right = right
def __repr__(self):
return '{0}{1}{2}'.format(self.left, self.symbol, self.right)
class Add(Operator):
def __init__(self, left, right):
self.left = left
self.right = right
Operator.__init__(self, '+', left, right)
class Mul(Operator):
def __init__(self, left, right):
self.left = left
self.right = right
Operator.__init__(self, '*', left, right)
# ...
With just a bit more tweaking we can get the same behavior as the sympy session from the beginning: we'll modify Add so it returns a Mul instance if its arguments are equal. This is a bit trickier since we have to get to it before instance creation; we have to use __new__() instead of __init__():
class Add(Operator):
def __new__(cls, left, right):
if left == right:
return Mul(2, left)
return Operator.__new__(cls)
...
Don't forget to implement the equality operator for Symbols:
class Symbol(Basic):
...
def __eq__(self, other):
if type(self) == type(other):
return repr(self) == repr(other)
else:
return False
...
And voila. Anyway, you can think of all kinds of other things to implement, like operator precedence, evaluation with substitution, advanced simplification, differentiation, etc., but I think it's pretty cool that the basics are so simple.
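Evaluation with substitution, for instance, falls out of the same pattern: walk the expression tree, replace symbols, and collapse fully numeric subtrees. A sketch on top of the toy classes above (the subs() name and signature are my own choices, not sympy's API):

```python
class Symbol(object):
    def __init__(self, name):
        self.name = name
    def __add__(self, other):
        return Add(self, other)
    def __repr__(self):
        return self.name
    def subs(self, mapping):
        # replace this symbol if the mapping knows about it
        return mapping.get(self.name, self)

class Add(object):
    def __init__(self, left, right):
        self.left = left
        self.right = right
    def __repr__(self):
        return '{0}+{1}'.format(self.left, self.right)
    def subs(self, mapping):
        left = self.left.subs(mapping) if hasattr(self.left, 'subs') else self.left
        right = self.right.subs(mapping) if hasattr(self.right, 'subs') else self.right
        if isinstance(left, int) and isinstance(right, int):
            return left + right          # collapse fully numeric subtrees
        return Add(left, right)

x, y = Symbol('x'), Symbol('y')
expr = x + y
print(expr)                         # x+y
print(expr.subs({'x': 1, 'y': 2}))  # 3
```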
A:
This doesn't have much to do with secondbanana's real question - it's just a shot at Omnifarious' bounty ;)
The interpreter itself is pretty simple. As a matter of fact you could write a simple one (nowhere near perfect, doesn't handle exceptions, etc.) yourself:
print "Wayne's Python Prompt"
def getline(prompt):
return raw_input(prompt).rstrip()
myinput = ''
while myinput.lower() not in ('exit()', 'q', 'quit'):
myinput = getline('>>> ')
if myinput:
while myinput[-1] in (':', '\\', ','):
myinput += '\n' + getline('... ')
exec(myinput)
You can do most of the stuff you're used to in the normal prompt:
Wayne's Python Prompt
>>> print 'hi'
hi
>>> def foo():
... print 3
>>> foo()
3
>>> from dis import dis
>>> dis(foo)
2 0 LOAD_CONST 1 (3)
3 PRINT_ITEM
4 PRINT_NEWLINE
5 LOAD_CONST 0 (None)
8 RETURN_VALUE
>>> quit
Hit any key to close this window...
The real magic happens in the lexer/parser.
Lexical Analysis, or lexing is breaking the input into individual tokens. The tokens are keywords or "indivisible" elements. For instance, =, if, try, :, for, pass, and import are all Python tokens. To see how Python tokenizes a program you can use the tokenize module.
Put some code in a file called 'test.py' and run the following in that directory:
from tokenize import tokenize
f = open('test.py')
tokenize(f.readline)
For print "Hello World!" you get the following:
1,0-1,5: NAME 'print'
1,6-1,19: STRING '"hello world"'
1,19-1,20: NEWLINE '\n'
2,0-2,0: ENDMARKER ''
Once the code is tokenized, it's parsed into an abstract syntax tree. The end result is a python bytecode representation of your program. For print "Hello World!" you can see the result of this process:
from dis import dis
def heyworld():
print "Hello World!"
dis(heyworld)
Of course all languages lex, parse, compile and then execute their programs. Python lexes, parses, and compiles to bytecode. Then the bytecode is "compiled" (translated might be more accurate) to machine code which is then executed. This is the main difference between interpreted and compiled languages - compiled languages are compiled directly to machine code from the original source, which means you only have to lex/parse before compilation and then you can directly execute the program. This means faster execution times (no lex/parse stage), but it also means that to get to that initial execution time you have to spend a lot more time because the entire program must be compiled.
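You can drive that same lex → parse → compile → execute pipeline by hand with the built-in compile() and the dis module, which is roughly what the prompt does for each statement you enter:

```python
import dis

code = compile('x + x', '<stdin>', 'eval')  # lex + parse + compile to bytecode
print(eval(code, {'x': 21}))                # 42: the bytecode is then executed
dis.dis(code)                               # inspect the generated opcodes
```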
A:
I just inspected the code of sympy (at http://github.com/sympy/sympy ) and it looks like __sympifyit_wrapper comes from a decorator. The reason it gets called is that there is some code somewhere that looks like this:
class Foo(object):
@_sympifyit
def func(self):
pass
And __sympifyit_wrapper is a wrapper that's returned by @_sympifyit. If you had continued your debugging you might have found the wrapped function (in my example named func).
I gather that in one of the many modules and packages imported in sympy/__init__.py some built-in code is replaced with sympy versions. These sympy versions probably use that decorator.
exec as used by >>> won't have been replaced; the objects that are operated on will have been.
A:
The Python interactive interpreter doesn't do a lot that's any different from any other time Python code is getting run. It does have some magic to catch exceptions and to detect incomplete multi-line statements before executing them so that you can finish typing them, but that's about it.
If you're really curious, the standard code module is a fairly complete implementation of the Python interactive prompt. I think it's not precisely what Python actually uses (that is, I believe, implemented in C), but you can dig into your Python's system library directory and actually look at how it's done. Mine's at /usr/lib/python2.5/code.py
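A quick demonstration of the code module's console, pushing lines at it the way the real prompt would (Python 3's contextlib.redirect_stdout is used here only to capture what the console prints):

```python
import code
import contextlib
import io

buf = io.StringIO()
console = code.InteractiveConsole()
with contextlib.redirect_stdout(buf):
    console.push('x = 2 + 2')   # a complete statement runs immediately
    console.push('print(x)')
print(buf.getvalue().strip())   # 4
```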
| How does sympy work? How does it interact with the interactive Python shell, and how does the interactive Python shell work? | What happens internally when I press Enter?
My motivation for asking, besides plain curiosity, is to figure out what happens when you
from sympy import *
and enter an expression. How does it go from Enter to calling
__sympifyit_wrapper(a,b)
in sympy.core.decorators? (That's the first place winpdb took me when I tried inspecting an evaluation.) I would guess that there is some built-in eval function that gets called normally, and is overridden when you import sympy?
| [
"All right after playing around with it some more I think I've got it.. when I first asked the question I didn't know about operator overloading.\nSo, what's going on in this python session?\n>>> from sympy import *\n>>> x = Symbol(x)\n>>> x + x\n2*x\n\nIt turns out there's nothing special about how the interpreter... | [
12,
6,
5,
1
] | [] | [] | [
"eval",
"interactive",
"python",
"scripting",
"sympy"
] | stackoverflow_0003191749_eval_interactive_python_scripting_sympy.txt |
Q:
Caching options in Python or speeding up urlopen
Hey all, I have a site that looks up info for the end user, is written in Python, and requires several urlopen commands. As a result it takes a bit for a page to load. I was wondering if there was a way to make it faster? Is there an easy way to cache in Python, or a way to make the urlopen calls run faster?
The urlopens access the Amazon API to get prices, so the site needs to be somewhat up to date. The only option I can think of is to make a script that builds a MySQL db and run it every now and then, but that would be a nuisance.
Thanks!
A:
httplib2 understands http request caching, abstracts urllib/urllib2's messiness somewhat and has other goodies, like gzip support.
http://code.google.com/p/httplib2/
But besides using that to get the data, if the dataset is not very big, I would also implement some kind of function caching / memoizing.
Example:
http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize
It wouldn't be too hard to modify that decorator to allow for time based expiry, e.g. only cache the result for 15 mins.
If the results are bigger, you need to start looking into memcached/redis.
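Here's one way that time-based expiry could look; the decorator name and the 15-minute TTL are my own choices, and the fetch function stands in for the slow urlopen call:

```python
import functools
import time

def memoize_with_expiry(ttl_seconds):
    """Cache results per-argument, discarding entries older than ttl_seconds."""
    def decorator(func):
        cache = {}
        @functools.wraps(func)
        def wrapper(*args):
            now = time.time()
            if args in cache:
                stored_at, value = cache[args]
                if now - stored_at < ttl_seconds:
                    return value
            value = func(*args)
            cache[args] = (now, value)
            return value
        return wrapper
    return decorator

calls = []

@memoize_with_expiry(ttl_seconds=15 * 60)   # cache prices for 15 minutes
def fetch_price(asin):
    calls.append(asin)                      # stands in for the slow urlopen call
    return 19.99

fetch_price('B000123')
fetch_price('B000123')
print(len(calls))  # 1: the second call was served from the cache
```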
A:
There are several things you can do.
The urllib caching mechanism is temporarily disabled, but you could easily roll your own by storing the data you get from Amazon in memory or in a file somewhere.
Similarly to the above, you could have a separate script that refreshes the prices every so often, and cron it to run every half an hour (say). These could be stored wherever.
You could run the URL fetching in a new thread/process, since it is mostly waiting anyway.
A:
How often do the price(s) change? If they're pretty constant (say once a day, or every hour or so), just go ahead and write a cron script (or equivalent) that retrieves the values and stores it in a database or text file or whatever it is you need.
I don't know if you can check the timestamp data from the Amazon API - if they report that sort of thing.
A:
You could use memcached. It is designed for that, and this way you could easily share the cache with different program/scripts. And it is really easy to use from Python, check:
Good examples of python-memcache (memcached) being used in Python?
Then you update the memcached when a key is not there and also from some cron script, and you're ready to go.
Another, simpler, option would be to cook you own cache, probably storing the data in a dictionary and/or using cPickle to serialize it to disk (if you want the data to be shared between different runs).
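A sketch of the "cook your own cache" option with pickle (cPickle on Python 2); the file name and the (timestamp, price) tuple layout are arbitrary choices of mine:

```python
import os
import pickle
import tempfile
import time

CACHE_FILE = os.path.join(tempfile.mkdtemp(), 'price_cache.pkl')  # hypothetical path

def load_cache():
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE, 'rb') as f:
            return pickle.load(f)
    return {}

def save_cache(cache):
    with open(CACHE_FILE, 'wb') as f:
        pickle.dump(cache, f)

# Store (timestamp, price) so stale entries can be detected on load.
cache = load_cache()
cache['B000123'] = (time.time(), 19.99)
save_cache(cache)
print(load_cache()['B000123'][1])  # 19.99
```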
A:
If you need to grab from multiple sites at once you might try asyncore: http://docs.python.org/library/asyncore.html
This way you can easily load multiple pages at once.
| Caching options in Python or speeding up urlopen | Hey all, I have a site that looks up info for the end user, is written in Python, and requires several urlopen commands. As a result it takes a bit for a page to load. I was wondering if there was a way to make it faster? Is there an easy Python way to cache or a way to make the urlopen scripts fun last?
The urlopens access the Amazon API to get prices, so the site needs to be somewhat up to date. The only option I can think of is to make a script to make a mySQL db and run it ever now and then, but that would be a nuisance.
Thanks!
| [
"httplib2 understands http request caching, abstracts urllib/urllib2's messiness somewhat and has other goodies, like gzip support.\nhttp://code.google.com/p/httplib2/\nBut besides using that to get the data, if the dataset is not very big, I would also implement some kind of function caching / memoizing. \nExample... | [
3,
1,
0,
0,
0
] | [] | [] | [
"caching",
"python",
"sql",
"urlopen"
] | stackoverflow_0003468248_caching_python_sql_urlopen.txt |
Q:
Overwrite method add for ManyToMany related fields
Where should I override the add() method for ManyToMany related fields?
It seems it is not my model's 'objects' manager, because when we add a new relation for a ManyToMany field we do not write Model.objects.add().
So what I need is to override the add() method of the instance's related manager. How can I do that?
Edit:
So I know that there is a ManyRelatedManager. One thing remains: how can I override it?
Sorry... not override, but assign it in my Model by default.
A:
http://docs.djangoproject.com/en/1.2/topics/db/managers/#custom-managers
You can create any number of managers for a Model.
You can subclass a ManyRelatedManager and assign it to the Model.
This example may be what you're looking for
# Then hook it into the Book model explicitly.
class Book(models.Model):
title = models.CharField(max_length=100)
author = models.CharField(max_length=50)
objects = models.Manager() # The default manager.
dahl_objects = DahlBookManager() # The Dahl-specific manager.
The objects manager is the default. Do not change this.
The dahl_objects manager is a customized manager. You can have any number of these.
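If the goal is simply to run code whenever .add() / .remove() is called on the relation, note that Django (1.2+) also fires the m2m_changed signal, which is often simpler than subclassing the related manager. A sketch (the Reader model and its books field are hypothetical, and this fragment assumes a configured Django project, so it isn't runnable on its own):

```python
from django.db.models.signals import m2m_changed

def books_changed(sender, instance, action, pk_set, **kwargs):
    # action is 'pre_add', 'post_add', 'pre_remove', or 'post_remove'
    if action == 'pre_add':
        print('about to link %s to books %s' % (instance, pk_set))

# Connect to the hypothetical Reader.books ManyToManyField's through model:
m2m_changed.connect(books_changed, sender=Reader.books.through)
```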
| Overwrite method add for ManyToMany related fields | Where should I overwrite method add() for ManyToMany related fields.
Seems like it is not manager 'objects' of my model. Because when we are adding new relation for ManyToMany fields we are not writing Model.objects.add().
So what I need it overwrite method add() of instance. How can I do it?
Edit:
So i know that there is ManyRelatedManager. One thing remain how can i overwrite it?
Sorry... not overwrite, but assign it in my Model by default.
| [
"http://docs.djangoproject.com/en/1.2/topics/db/managers/#custom-managers\nYou can create any number of managers for a Model.\nYou can subclass a ManyRelatedManager and assign it to the Model.\nThis example may be what you're looking for\n# Then hook it into the Book model explicitly.\nclass Book(models.Model):\n ... | [
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003463372_django_python.txt |
Q:
How can I create python strings with placeholders with arbitrary number of elements
I can do
string="%s"*3
print string %(var1,var2,var3)
but I can't get the vars into another variable so that I can create a list of vars on the fly with app logic.
for example
if condition:
add a new %s to string variable
vars.append(newvar)
else:
remove one %s from string
vars.pop()
print string with placeholders
Any ideas on how to do this with Python 2.6?
A:
How about this?
print ("%s" * len(vars)) % tuple(vars)
Really though, that's a rather silly way to do things. If you just want to squish all the variables together in one big string, this is likely a better idea:
print ''.join(str(x) for x in vars)
That does require at least Python 2.4 to work.
A:
use a list to append/remove strings then "".join(yourlist) before printing
>>> q = []
>>> for x in range(3):
q.append("%s")
>>> "".join(q)
'%s%s%s'
>>> print "".join(q) % ("a","b","c")
abc
A:
n = 0
if condition:
    n += 1
    vars.append(newvar)
else:
    n -= 1
    vars.pop()
string = "%s" * n
print string % tuple(vars)
But you don't really need string formatting if you're just joining the vars; why not do:
"".join( map( str, vars ) )
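Putting the two ideas together: if you do want % formatting (e.g. to keep a separator between fields), build the template from the current length of the list each time. The function name here is my own:

```python
def format_row(values):
    # One "%s" per value, joined with spaces, built at call time.
    template = ' '.join(['%s'] * len(values))
    return template % tuple(values)

print(format_row(['Joe', '01011980', 30, 'M']))  # Joe 01011980 30 M
print(format_row(['Rose', 'F']))                 # Rose F
```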
| How can I create python strings with placeholders with arbitrary number of elements | I can do
string="%s"*3
print string %(var1,var2,var3)
but I can't get the vars into another variable so that I can create a list of vars on the fly with app logic.
for example
if condition:
add a new %s to string variable
vars.append(newvar)
else:
remove one %s from string
vars.pop()
print string with placeholders
Any ideas on how to do this with python 2.6 ?
| [
"How about this?\nprint (\"%s\" * len(vars)) % tuple(vars)\n\nReally though, that's a rather silly way to do things. If you just want to squish all the variables together in one big string, this is likely a better idea:\nprint ''.join(str(x) for x in vars)\n\nThat does require at least Python 2.4 to work.\n",
"u... | [
6,
3,
1
] | [] | [] | [
"placeholder",
"python"
] | stackoverflow_0003468916_placeholder_python.txt |
Q:
Python: Why does subprocess() start 2 processes in Ubuntu, and 1 in OpenSUSE?
I've written a small GUI frontend in Python that lets users play internet radio channels. The program uses Python's subprocess module to initialize mplayer in order to tune into a channel, e.g.:
runn = "mplayer http://77.111.88.131:8010"
p = subprocess.Popen(runn, shell=True)
pid = int(p.pid)
wait = os.waitpid(p.pid, 1)
It then saves p.pid, and when a user wants to stop listening, the following code is used:
os.kill(p.pid, 9)
This works perfectly in OpenSUSE, but not in Ubuntu. It seems that Ubuntu actually starts two separate processes. Terminal output:
Opensuse 11.3:
$ pgrep mplayer
22845
Ubuntu 10.04:
$ pgrep mplayer
22846
22847
This also applies when running other programs. Does anyone know why? I really want this app to run on all distros, so any help is deeply appreciated.
A:
Try this:
p = subprocess.Popen(runn.split(), shell=False)
My guess as to what's going on is this...
When you say shell=True subprocess actually starts this command sh -c "your string". The sh command then interprets your string and runs the command as if you'd typed that in at the shell prompt (more or less). Normally this would result in two processes. One would be sh -c "your string" and the other would be it's child, your string.
Some versions of sh have an optimization in which they will automatically exec the command under certain conditions. They do it if it's the last command sh is going to run and sh has no other reason to stick around. When using sh -c to run a command, this will almost always result in that invocation of sh replacing itself with the command it's running, thereby resulting in one process.
In my opinion it's a really, really bad idea to ever invoke subprocess.Popen with shell=True. You are opening yourself up for a ton of security issues by doing that, and generally less predictable behavior because shell metacharacters are interpreted by sh -c.
A:
I don't have an exact answer, but here are several ways to investigate:
Use pstree to examine the parent/child relationship between the processes.
Use ps -awux to view the full command-line arguments for all of the processes.
Note that using shell=True starts a shell process (e.g., /bin/bash) which starts mplayer. That might be another avenue for investigation. Are both systems using the same shell?
Are both systems using the same version of mplayer? of python?
A:
subprocess.Popen returns a Popen object with several useful methods. It's probably a bad idea to terminate things by directly using os.kill...
Does the same thing happen if you use the Popen object's p.terminate() or p.kill() methods?
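A sketch of both suggestions combined: pass an argument list with shell=False so there is exactly one directly-owned child, and stop it with terminate() instead of os.kill (sleep stands in for mplayer here):

```python
import subprocess

p = subprocess.Popen(['sleep', '100'])  # argument list, no shell in between
print(p.pid > 0)     # True: a single, directly-owned child process
p.terminate()        # sends SIGTERM; p.kill() would send SIGKILL
p.wait()             # reap the child so it doesn't linger as a zombie
print(p.returncode)  # -15 on Linux: terminated by SIGTERM
```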
| Python: Why does subprocess() start 2 processes in Ubuntu, and 1 in OpenSUSE? | I've written small gui-frontend in Python that lets users play internet radio channels. The program uses Pythons subprocess() to initizalize mplayer in order to tune into a channel, e.g.:
runn = "mplayer http://77.111.88.131:8010"
p = subprocess.Popen(runn, shell=True)
pid = int(p.pid)
wait = os.waitpid(p.pid, 1)
Then saves p.pid, and when a user wants to stop listening the following code is used:
os.kill(p.pid, 9)
This works perfectly in OpenSUSE, but not in Ubuntu. It seems that Ubuntu actually starts two separate processes. Terminal output:
Opensuse 11.3:
$ pgrep mplayer
22845
Ubuntu 10.04:
$ pgrep mplayer
22846
22847
This also applies when running other programs. Does anyone know why? I really want this app to run on all distros, so any help is deeply appreciated.
| [
"Try this:\np = subprocess.Popen(runn.split(), shell=False)\n\nMy guess as to what's going on is this...\nWhen you say shell=True subprocess actually starts this command sh -c \"your string\". The sh command then interprets your string and runs the command as if you'd typed that in at the shell prompt (more or les... | [
7,
1,
0
] | [] | [] | [
"python",
"subprocess"
] | stackoverflow_0003468922_python_subprocess.txt |
Q:
Regular expression: if, else if, else
I am trying to parse FSM statements of the Gezel language (http://rijndael.ece.vt.edu/gezel2/) using Python and regular expressions
regex_cond = re.compile(r'.+((else\tif|else|if)).+')
line2 = '@s0 else if (insreg==1) then (initx,PING,notend) -> sinitx;'
match = regex_cond.match(line2);
I have problems distinguishing if and else if. The else if in the example is recognized as an if.
A:
a \t matches a tab character. It doesn't look like you have a tab character between "else" and "if" in line2. You might try \s instead, which matches any whitespace character.
A:
Don't do this; use pyparsing instead. You'll thank yourself later.
The problem is that .+ is greedy, so it's eating up the else... do .+? instead. Or rather, don't, because you're using pyparsing now.
regex_cond = re.compile( r'.+?(else\sif|else|if).+?' )
...
# else if
A:
Your immediate problem is that .+ is greedy and so it matches @s0 else instead of just @s0. To make it non-greedy, use .+? instead:
import re
regex_cond = re.compile(r'.+?(else\s+if|else|if).+')
line2 = '@s0 else if (insreg==1) then (initx,PING,notend) -> sinitx;'
match = regex_cond.match(line2)
print(match.groups())
# ('else if',)
However, like others have suggested, using a parser like Pyparsing is a better method than using re here.
A:
Correct me if I'm wrong, but regular expressions are not good for parsing, since they only handle regular (Type-3) languages, while nested structure needs at least a context-free (Type-2) grammar. For example, you can't decide whether or not ((())())) is a valid statement without "counting", which a regex can't do. Or, to talk about your example, if else else could not be found as invalid. Maybe I'm mixing up scanner/parser; in that case please tell me.
| Regular expression: if, else if, else | I am trying to parse FSM statements of the Gezel language (http://rijndael.ece.vt.edu/gezel2/) using Python and regular expressions
regex_cond = re.compile(r'.+((else\tif|else|if)).+')
line2 = '@s0 else if (insreg==1) then (initx,PING,notend) -> sinitx;'
match = regex_cond.match(line2);
I have problems to distinguish if and else if. The else if in the example is recognized as a if.
| [
"a \\t matches a tab character. It doesn't look like you have a tab character between \"else\" and \"if\" in line2. You might try \\s instead, which matches any whitespace character.\n",
"Don't do this; use pyparsing instead. You'll thank yourself later.\n\nThe problem is that .+ is greedy, so it's eating up the ... | [
3,
2,
1,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0003468881_python_regex.txt |
Q:
Python Django MySQLdb setup problem:: setup.py doesn't build due to incorrect location of mysql
I'm trying to install MySQLdb for python.
But when I run the setup, this is the error I get.
I know why it's giving all the missing-file errors, but I don't know where to change the bold-marked location.
Please help:
gaurav-toshniwals-macbook-7:MySQL-python-1.2.3c1 gauravtoshniwal$ python setup.py build
running build
running build_py
copying MySQLdb/release.py -> build/lib.macosx-10.3-fat-2.6/MySQLdb
running build_ext
building '_mysql' extension
gcc-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -Dversion_info=(1,2,3,'gamma',1) -D__version__=1.2.3c1 **-I/Applications/MAMP/Library/include/mysql** -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c _mysql.c -o build/temp.macosx-10.3-fat-2.6/_mysql.o
_mysql.c:36:23: error: my_config.h: No such file or directory
_mysql.c:36:23: error: my_config.h: No such file or directory
_mysql.c:38:19: error: mysql.h: No such file or directory
_mysql.c:38:19:_mysql.c:39:26: error: mysqld_error.h: No such file or directory
error: _mysql.c:40:20:mysql.h: No such file or directory
A:
This path probably comes from the mysql_config utility; edit your setup_posix.py and change the variable mysql_config.path to point to your correct mysql_config utility path.
A:
The above suggestion is really helpful but it might also be worth mentioning that you must have mysql-devel installed as well for your build to be successful here.
| Python Django MySQLdb setup problem:: setup.py doesn't build due to incorrect location of mysql | I'm trying to install MySQLdb for python.
but when I run the setup, this is the error I get.
well I know why its giving all the missing file statements, but dont know where to change the bold marked location from.
Please help
gaurav-toshniwals-macbook-7:MySQL-python-1.2.3c1 gauravtoshniwal$ python setup.py build
running build
running build_py
copying MySQLdb/release.py -> build/lib.macosx-10.3-fat-2.6/MySQLdb
running build_ext
building '_mysql' extension
gcc-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -Dversion_info=(1,2,3,'gamma',1) -D__version__=1.2.3c1 **-I/Applications/MAMP/Library/include/mysql** -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c _mysql.c -o build/temp.macosx-10.3-fat-2.6/_mysql.o
_mysql.c:36:23: error: my_config.h: No such file or directory
_mysql.c:36:23: error: my_config.h: No such file or directory
_mysql.c:38:19: error: mysql.h: No such file or directory
_mysql.c:38:19:_mysql.c:39:26: error: mysqld_error.h: No such file or directory
error: _mysql.c:40:20:mysql.h: No such file or directory
| [
"This path probably comes from mysql_config utility, edit your setup_posix.py and change the variable mysql_config.path to meet your correct mysql_config utility path.\n",
"The above suggestion is really helpful but it might also be worth mentioning that you must have mysql-devel installed as well for your build ... | [
0,
0
] | [] | [] | [
"django",
"mysql",
"python"
] | stackoverflow_0003029942_django_mysql_python.txt |
Q:
Python file.read() seeing junk characters at the beginning of a file
I'm trying to use Python to concatenate a few javascript files together before minifying them, basically like so:
outfile = open("output.js", "w")
for somefile in a_list_of_file_names:
js = open(somefile)
outfile.write(js.read())
js.close()
outfile.close()
The minifier complains about illegal characters and syntax errors at the beginning of each file, so I did some diagnostics.
>>> r = open("output.js")
>>> somestring = r.readline()
>>> somestring
'\xef\xbb\xbfvar $j = jQuery.noConflict(),\n'
>>> print somestring
var $j = jQuery.noConflict(),
The first line of the file should, of course, be "var $j = jQuery.noConflict(),"
In case it makes a difference, I'm working from within Windows.
Any thoughts?
Edit: Here's what I'm getting from the minifier:
U:\>java -jar c:\path\yuicompressor-2.4.2.jar c:\path\somefile.js -o c:\path\bccsminified.js --type js -v
[INFO] Using charset Cp1252
[ERROR] 1:2:illegal character
[ERROR] 1:2:syntax error
[ERROR] 1:3:illegal character
A:
That's a UTF-8 BOM (Byte Order Mark). You've probably edited the file with Notepad.
A:
EF BB BF is a Unicode Byte-Order Mark (BOM). Those are actually in your files. That's why Python is seeing it.
Either ignore/discard the BOM or reencode the files to omit it.
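A sketch of the discard option for the concatenation loop: check each chunk for the UTF-8 BOM bytes before writing it out (on Python 3 you could instead open the inputs with encoding='utf-8-sig', which strips it for you):

```python
import codecs

chunk = codecs.BOM_UTF8 + b'var $j = jQuery.noConflict();'

# Drop the three BOM bytes (EF BB BF) if the chunk starts with them.
if chunk.startswith(codecs.BOM_UTF8):
    chunk = chunk[len(codecs.BOM_UTF8):]
print(chunk.decode('utf-8'))  # var $j = jQuery.noConflict();
```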
| Python file.read() seeing junk characters at the beginning of a file | I'm trying to use Python to concatenate a few javascript files together before minifying them, basically like so:
outfile = open("output.js", "w")
for somefile in a_list_of_file_names:
js = open(somefile)
outfile.write(js.read())
js.close()
outfile.close()
The minifier complains about illegal characters and syntax errors at the beginning of each file, so I did some diagnostics.
>>> r = open("output.js")
>>> somestring = r.readline()
>>> somestring
'\xef\xbb\xbfvar $j = jQuery.noConflict(),\n'
>>> print somestring
var $j = jQuery.noConflict(),
The first line of the file should, of course be "var $j = jQuery.noConflict(),"
In case it makes a difference, I'm working from within Windows.
Any thoughts?
Edit: Here's what I'm getting from the minifier:
U:\>java -jar c:\path\yuicompressor-2.4.2.jar c:\path\somefile.js -o c:\path\bccsminified.js --type js -v
[INFO] Using charset Cp1252
[ERROR] 1:2:illegal character
[ERROR] 1:2:syntax error
[ERROR] 1:3:illegal character
| [
"That's a UTF-8 BOM (Byte Order Mark). You've probably edited the file with Notepad.\n",
"EF BB BF is a Unicode Byte-Order Mark (BOM). Those are actually in your files. That's why Python is seeing it.\nEither ignore/discard the BOM or reencode the files to omit it.\n"
] | [
6,
5
] | [] | [] | [
"file_io",
"python"
] | stackoverflow_0003469200_file_io_python.txt |
Q:
Manipulating list from lxml xpath queries
Today I tried lxml, as I got very nasty HTML output from a particular web service and I didn't want to go with the re module, just for a change and to learn something new. And I did, browsing http://codespeak.net/lxml/ and http://stackoverflow.com in parallel.
I won't try to explain the above HTML template, but just for an overview: it's full of deliberately nested tables.
I extracted the part of interest with the HTML parser, then find_class(), and iterated through the TRs with XPath (and even these TRs have tables inside).
Now I'm trying to extract data pairs based on class and id attributes:
the name child has id "title"
the value child has class "text"
Code looks something like this:
fragment = root.find_class('foo')
for node in fragment[0].xpath('table[2]/tr'):
name = node.xpath('//div[@id="title"]')
value = node.xpath('//td[@class="text"]')
The problem is that not every TR that I'm iterating over has those pairs: some have only the name (id "title"), so later when I try to zip them I get wrongly paired data.
I tried a couple of things that came to my mind but nothing was successful: I tried to compare the list lengths (for name and value) and skip the name lookup if they don't match, then delete the last list item if they don't match (in many ways), but nothing worked. For example:
if not len(name) == len(value):
name.pop()
or
if len(name) == len(value):
name = node.xpath('//div[@id="title"]')
value = node.xpath('//td[@class="text"]')
Some comments from more experienced?
A:
How's this?
from lxml import etree
doc = etree.HTML(open('test.data').read())
for t in doc.xpath('//table[.//div[@id="title"] and .//td[@class="text"]]'):
print etree.tostring(t.xpath('.//div[@id="title"]')[0])
print etree.tostring(t.xpath('.//td[@class="text"]')[0])
print "--"
Yielding:
<div id="title">
<span class="Browse">string</span>
</div>
<td class="text" style="padding-left:5px;">
<a href="/***/***.dll?p=***&sql=xxx:yyy">string</a>
</td>
--
<div id="title">
<span>string</span>
</div>
<td class="text" style="padding-left:5px;">
<a href="/***/***.dll?p=***&sql=xxx:yyy">string</a>
</td>
--
<div id="title">
<span>string</span>
</div>
<td class="text" style="padding-left:5px;">
Gospodar of Lutaka
</td>
--
<div id="title">
<span>string</span>
</div>
<td class="text" style="padding-left:5px;">
1986
</td>
--
<div id="title">
<span>string</span>
</div>
<td class="text" style="padding-left:5px;">
Sep 1985-Dec 1985
</td>
--
<div id="title">
<span>string</span>
</div>
<td class="text" style="padding-left:5px;">
Elektra
</td>
--
<div id="title">
<span>string</span>
</div>
<td class="text" style="padding-left:5px;">
54:51
</td>
--
<div id="title">
<span>string</span>
</div>
<td class="text" style="padding-left:5px;">
</td>
--
Update, extended the leading portion of the xpath expression to eliminate an undesired result. Thanks to Alejandro for pointing this out and suggesting a fix that didn't seem to work out for otrov.
from urllib2 import urlopen
from lxml import etree
doc = etree.HTML(urlopen('http://pastebin.com/download.php?i=cg5HHJ6x').read())
for t in doc.xpath('//table/tr/td/table[.//div[@id="title"] and .//td[@class="text"]]'):
    print etree.tostring(t.xpath('.//div[@id="title"]')[0])
    print etree.tostring(t.xpath('.//td[@class="text"]')[0])
    print "--"
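Back on the original zip problem: the robust fix is to pair name and value within each table, rather than zipping two document-wide lists. A sketch of the pairing logic with the lxml calls replaced by plain lists, so it stands alone (the sample data is made up):

```python
def pair_tables(tables):
    """tables: iterable of (names, values) pairs, one per table, as the
    two per-table xpath() calls would return them. Tables missing either
    part are skipped, so the pairs can never get out of step."""
    pairs = []
    for names, values in tables:
        if names and values:
            pairs.append((names[0], values[0]))
    return pairs

# hypothetical data: the third table has a title but no text cell
tables = [(["Name"], ["string"]),
          (["Year"], ["1986"]),
          (["Orphan title"], []),
          (["Label"], ["Elektra"])]
print(pair_tables(tables))  # the orphan title is dropped, not mispaired
```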
A:
Now, with the input sample, it is clearer what you are asking.
Just this one XPath 1.0 expression returns a node set with the div and td pairs (in document order):
/table/tr/td/table[tr/td/div[@id='title']]
[tr/td[@class='text']]
/tr//*[self::div[@id='title'] or self::td[@class='text']]
As proof, this stylesheet:
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="/">
        <result>
            <xsl:copy-of
                select="/table/tr/td/table[tr/td/div[@id='title']]
                        [tr/td[@class='text']]
                        /tr//*[self::div[@id='title'] or
                               self::td[@class='text']]"/>
        </result>
    </xsl:template>
</xsl:stylesheet>
Output (with a proper input sample, because you are missing a closing td):
<result>
<div id="title">
<span>string</span>
</div>
<td class="text" style="padding-left:5px;">
<a href="/***/***.dll?p=***&sql=xxx:yyy">string</a>
</td>
<div id="title">
<span>string</span>
</div>
<td class="text" style="padding-left:5px;">
Gospodar of Lutaka
</td>
<div id="title">
<span>string</span>
</div>
<td class="text" style="padding-left:5px;">
1986
</td>
<div id="title">
<span>string</span>
</div>
<td class="text" style="padding-left:5px;">
Sep 1985-Dec 1985
</td>
<div id="title">
<span>string</span>
</div>
<td class="text" style="padding-left:5px;">
Elektra
</td>
<div id="title">
<span>string</span>
</div>
<td class="text" style="padding-left:5px;">
54:51
</td>
<div id="title">
<span>string</span>
</div>
<td class="text" style="padding-left:5px;"></td>
</result>
| Manipulating list from lxml xpath queries | Today I tried lxml as I got very nasty html output from a particular web service, and I didn't want to go with the re module, just for a change and to learn something new. And I did, browsing http://codespeak.net/lxml/ and http://stackoverflow.com in parallel
I won't try to explain the above html template, but just as an overview: it's full of deliberately nested tables.
I extracted the part of interest with the html parser, then find_class(), and iterated through the TRs with xpath (and even these TRs have tables inside).
Now I'm trying to extract data pairs based on class and id attributes:
name child has class "title"
value child has id "text"
Code looks something like this:
fragment = root.find_class('foo')
for node in fragment[0].xpath('table[2]/tr'):
    name = node.xpath('//div[@id="title"]')
    value = node.xpath('//td[@class="text"]')
The problem is that not every TR I'm iterating over has those pairs: some have only the name (id "title"), so later when I try to zip them I get wrongly paired data.
I tried a couple of things that came to mind, but nothing was successful: I tried to compare list lengths (for name and value) and, if they don't match, skip the name lookup, or delete the last list item (in many ways), but nothing worked. For example:
if not len(name) == len(value):
    name.pop()
or
if len(name) == len(value):
    name = node.xpath('//div[@id="title"]')
    value = node.xpath('//td[@class="text"]')
Any comments from someone more experienced?
| [
"How's this?\nfrom lxml import etree\ndoc = etree.HTML(open('test.data').read())\n\nfor t in doc.xpath('//table[.//div[@id=\"title\"] and .//td[@class=\"text\"]]'):\n print etree.tostring(t.xpath('.//div[@id=\"title\"]')[0])\n print etree.tostring(t.xpath('.//td[@class=\"text\"]')[0])\n print \"--\"\n\nYie... | [
6,
0
] | [] | [] | [
"lxml",
"python",
"xpath"
] | stackoverflow_0003467203_lxml_python_xpath.txt |
Q:
I'd like some advice on packaging this as an egg and uploading it to pypi
I wrote some code that I'd like to package as an egg. This is my directory structure:
src/
src/tests
src/tests/test.py # this has several tests for the movie name parser
src/torrent
src/torrent/__init__.py
src/torrent/movienameparser
src/torrent/movienameparser/__init__.py # this contains the code
I'd like to package this directory structure as an egg, and include the test file too. What should I include in the setup.py file so that I can have any number of namespaces, and any number of tests?
This is the first open source code I'd like to share. Even though, probably, I will be the only one who will find this module useful, I'd like to upload it on pypi. What license can I use that will allow users to do what they want with the code, with no limitations on redistribution or modification?
Even though I plan on updating this egg, I'd like not to be responsible for anything (such as providing support to users). I know this may sound selfish, but this is my first open source code, so please bear with me. Will I need to provide a copy of the license? Where could I find a copy?
Thanks for reading all of this.
A:
I won't get into a licensing discussion here, but it's typical to include a LICENSE file at the root of your package source code, along with other customary things like a README, etc.
I usually organize packages the same way they will be installed on the target system. The standard package layout convention is explained here.
For example, if my package is 'torrent' and it has a couple of sub-packages such as 'tests' and 'util', here's what the source tree would look like:
workspace/torrent/setup.py
workspace/torrent/torrent/__init__.py
workspace/torrent/torrent/foo.py
workspace/torrent/torrent/bar.py
workspace/torrent/torrent/...
workspace/torrent/torrent/tests/__init__.py
workspace/torrent/torrent/tests/test.py
workspace/torrent/torrent/tests/...
workspace/torrent/torrent/util/__init__.py
workspace/torrent/torrent/util/helper1.py
workspace/torrent/torrent/util/...
This 'torrent/torrent' bit seems redundant, but it is a side-effect of this standard convention and of how Python imports work.
Here's the very minimalist setup.py (more info on how to write the setup script):
#!/usr/bin/env python
from distutils.core import setup
setup(name='torrent',
      version='0.1',
      description='would be nice',
      packages=['torrent', 'torrent.tests', 'torrent.util']
      )
To obtain a source distro, I'd then do:
$ cd workspace/torrent
$ ./setup.py sdist
This distro (dist/torrent-0.1.tar.gz) will be usable on its own, simply by unpacking it and running setup.py install or by using easy_install from setuptools toolkit. And you won't have to make several "eggs" for each supported version of Python.
If you really need an egg, you will need to add a dependency on setuptools to your setup.py, which will introduce an additional subcommand bdist_egg that generates eggs.
But there's another advantage of setuptools besides its egg-producing qualities: it removes the need to enumerate packages in your setup.py, via a nice helper function find_packages:
#!/usr/bin/env python
from setuptools import setup, find_packages
setup(name='torrent',
      version='0.1',
      description='would be nice',
      packages=find_packages()
      )
Then, to obtain an "egg", I will do:
$ cd workspace/torrent
$ ./setup.py bdist_egg
... and it will give me the egg file: dist/torrent-0.1-py2.6.egg
Notice the py2.6 suffix, this is because on my machine I have Python 2.6. If you want to please lots of people, you'd need to publish an egg for each major Python release. You don't want hordes of Python 2.5 folks with axes and spears at your doorstep, do you?
But you don't have to build an egg, you can still use sdist subcommand.
Updated: here's another useful page in Python documentation that introduces Distutils from user's perspective.
A:
It would be better to distribute it as a tarball (.tar.gz), not as an egg. Eggs are primarily for binary distribution, such as when using compiled C extensions. In source-only distributions, they are just unnecessary complexity.
If you just want to throw your code out into the world, the MIT or 3-clause BSD licenses are the most popular choice. Both include disclaimers of liability. All you have to do is include the main license in the tarball; typically as "License.txt", or similar. Optionally, you can add a small copyright notification to each source file; I encourage this, so the status of each file is obvious even without the entire archive, but some people think that's too verbose. It's a matter of personal preference.
The BSD license is available on Wikipedia, copied below:
Copyright (c) <year>, <copyright holder>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the <organization> nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY <copyright holder> ''AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL <copyright holder> BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
A:
Include the ez_setup file from the setuptools website and add, at the top of your setup.py:
from ez_setup import use_setuptools
use_setuptools()
This script is a helper for people who don't have setuptools. It downloads and installs the latest version of setuptools on systems that do not have it installed.
| I'd like some advice on packaging this as an egg and uploading it to pypi | I wrote some code that I'd like to package as an egg. This is my directory structure:
src/
src/tests
src/tests/test.py # this has several tests for the movie name parser
src/torrent
src/torrent/__init__.py
src/torrent/movienameparser
src/torrent/movienameparser/__init__.py # this contains the code
I'd like to package this directory structure as an egg, and include the test file too. What should I include in the setup.py file so that I can have any number of namespaces, and any number of tests?
This is the first open source code I'd like to share. Even though, probably, I will be the only one who will find this module useful, I'd like to upload it on pypi. What license can I use that will allow users to do what they want with the code, with no limitations on redistribution or modification?
Even though I plan on updating this egg, I'd like not to be responsible for anything (such as providing support to users). I know this may sound selfish, but this is my first open source code, so please bear with me. Will I need to provide a copy of the license? Where could I find a copy?
Thanks for reading all of this.
| [
"I won't get into licensing discussion here, but it's typical to include LICENSE file at the root of your package source code, along with other customary things like README, etc.\nI usually organize packages the same way they will be installed on the target system. The standard package layout convention is explaine... | [
6,
3,
0
] | [] | [] | [
"distutils",
"egg",
"pypi",
"python",
"setuptools"
] | stackoverflow_0001301689_distutils_egg_pypi_python_setuptools.txt |
Q:
Ordering in Python (2.4) dictionary
r_dict={'answer1': "value1",'answer11': "value11",'answer2': "value2",'answer3': "value3",'answer4': "value4",}
for i in r_dict:
    if("answer" in i.lower()):
        print i
The result is answer11, answer2, answer4, answer3.
I am using Python 2.4.3. Is there any way to get the order in which it was populated?
Or is there a way to do this with a regular expression, since I am using the older Python version?
A:
Dictionaries are unordered - that is, they do have some order, but it's influenced in nonobvious ways by the order of insertion and the hash of the keys. However, there is another implementation that remembers the order of insertion, collections.OrderedDict.
Edit: For Python 2.4, there are several third party implementations. I haven't used any, but the one from voidspace looks promising.
A:
Not just by using the dictionary by itself. Dictionaries in Python (and a good portion of equivalent non-specialized data structures that involve mapping) are not sorted.
You could potentially subclass dict and override the __setitem__ and __delitem__ methods to add/remove each key to an internal list where you maintain your own sorting. You'd probably then have to override other methods, such as __iter__ to get the sorting you want out of your for loop.
...or just use the odict module as @delnan suggested
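A minimal sketch of that subclassing idea - only the methods named above are overridden, so a real implementation would still need update(), pop(), keys() and friends:

```python
class InsertionOrderDict(dict):
    """Toy order-remembering dict; not a full OrderedDict replacement."""
    def __init__(self):
        dict.__init__(self)
        self._order = []                 # keys in insertion order

    def __setitem__(self, key, value):
        if key not in self:
            self._order.append(key)
        dict.__setitem__(self, key, value)

    def __delitem__(self, key):
        dict.__delitem__(self, key)
        self._order.remove(key)

    def __iter__(self):
        # makes `for k in d` follow insertion order
        return iter(self._order)

d = InsertionOrderDict()
for k in ('answer1', 'answer11', 'answer2', 'answer3', 'answer4'):
    d[k] = k.replace('answer', 'value')
print(list(d))  # ['answer1', 'answer11', 'answer2', 'answer3', 'answer4']
```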
A:
A dictionary is by construction unordered. If you want an ordered one, use a collections.OrderedDict:
import collections
r_dict = collections.OrderedDict( [ ( 'answer1', "value1"), ('answer11', "value11"), ('answer2', "value2"), ('answer3', "value3"), ('answer4', "value4") ] )
for i in r_dict:
    if("answer" in i.lower()):
        print i
A:
Short answer: no. Python dictionaries are fundamentally unordered.
| Ordering in Python (2.4) dictionary | r_dict={'answer1': "value1",'answer11': "value11",'answer2': "value2",'answer3': "value3",'answer4': "value4",}
for i in r_dict:
    if("answer" in i.lower()):
        print i
The result is answer11, answer2, answer4, answer3.
I am using Python 2.4.3. Is there any way to get the order in which it was populated?
Or is there a way to do this with a regular expression, since I am using the older Python version?
| [
"Dictionaries are unordered - that is, they do have some order, but it's influenced in nonobvious ways by the order of insertion and the hash of the keys. However, there is another implementation that remembers the order of insertion, collections.OrderedDict.\nEdit: For Python 2.4, there are several third party imp... | [
2,
1,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0003469633_python.txt |
Q:
The listener does not work! Django-signals
from django.db.models.signals import post_save
class MyModel(models.Model):
    int = models.PositiveIntegerField(unique=True)

def added(sender, instance, **kwargs):
    print 'Added'

post_save.connect(added, MyModel)
When I do:
MyModel.objects.create(int=12345).save()
nothing happened
Am I missing something?
After Edit:
Not working.
A:
It looks like you're connecting added() to MyModel instead of BitRate, so it's not surprising that added() is not fired when a bitrate is saved...
A:
You're connecting post_save to MyModel, but you're creating and saving Bitrate. Is that a typo?
| The listener does not work! Django-signals | from django.db.models.signals import post_save
class MyModel(models.Model):
    int = models.PositiveIntegerField(unique=True)

def added(sender, instance, **kwargs):
    print 'Added'

post_save.connect(added, MyModel)
When I do:
MyModel.objects.create(int=12345).save()
nothing happened
Am I missing something?
After Edit:
Not working.
| [
"It looks like you're connecting added() to MyModel instead of BitRate, so it's not surprising that added() is not fired when a bitrate is saved...\n",
"You're connecting post_save to MyModel, but you're creating and saving Bitrate. Is that a typo?\n"
] | [
0,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003469696_django_python.txt |
Q:
Python: wx.ListCtrl -> how to make one of the items a picture that once clicked opens a file
I have a wx.ListCtrl instance on which I use InsertColumn like this:
Path | Size | ... | Last run
For each item to be displayed I have a function that sets all the fields:
setStringItem(index, 0, path)
setStringItem(index, 1, size)
...
I want on column 6 (Last run) to do the following:
1) add a picture
2) the picture should be clickable, once clicked it should open a file
For pictures I use PyEmbeddedImage (using img2py) like this:
btn_remove_entry = GenBitmapTextButton(self, -1,
remove_img.GetBitmap(), "text", size=(120, 35) )
A:
Take a look at Andrea's UltimateListCtrl - it's included in the latest wx, but the online docs aren't up to date.
| Python: wx.ListCtrl -> how to make one of the items a picture that once clicked opens a file | I have a wx.ListCtrl instance on which I use InsertColumn like this:
Path | Size | ... | Last run
For each item to be displayed I have a function that sets all the fields:
setStringItem(index, 0, path)
setStringItem(index, 1, size)
...
I want on column 6 (Last run) to do the following:
1) add a picture
2) the picture should be clickable, once clicked it should open a file
For pictures I use PyEmbeddedImage (using img2py) like this:
btn_remove_entry = GenBitmapTextButton(self, -1,
remove_img.GetBitmap(), "text", size=(120, 35) )
| [
"Take a look at Andrea's UltimateListCtrl - it's included in the latest wx, but the online docs aren't up to date.\n"
] | [
1
] | [] | [] | [
"python",
"wxpython"
] | stackoverflow_0003467869_python_wxpython.txt |
Q:
How can I benchmark different languages / frameworks?
I'd like to compare the performance of different languages and/or different frameworks within the same language. This is aimed at server-side languages used for web development. I know an apples to apples comparison is not possible, but I'd like it to be as unbiased as possible. Here are some ideas :
Simple "Hello World" page
Object initialization
Function/method calls
Method bodies will range from empty to large
File access (read and write)
Database access
They can either be measured by Requests per second or I can use a for loop and loop many times. Some of these benchmarks should measure the overhead the language has (ie: empty function call) rather than how fast they perform a certain task. I'll take some precautions:
They'll run on the same machine, on fresh installations with as few processes on the background as possible.
I'll try and set up the server as officially recommended; I will not attempt any optimizations.
How can I improve on this?
A:
There's a lot of good advice (and a huge number of sample benchmarks for different languages) at http://shootout.alioth.debian.org/
C.
A:
What I have done is to write many unit tests so you can test the layers.
For example, write a SOAP web service in PHP, Python and C#.
Write a REST web service in the same languages (same web services, just two ways to get to them). This one should be able to return JSON and XML as a minimum.
Write unit tests in C# and Python to serve as clients, and test the REST with the various result types (XML/JSON). This is important as later you may need to test to see which is best end-to-end, and JSON may be faster to parse than XML, for you (it should be).
So, the REST/SOAP services should go to the same controller, to simplify your life.
This controller needs tests, as you may need to later remove its impact on your tests, but you can also write tests to see how fast it goes to the database.
I would use one database for this, unless you want to evaluate various databases, but for a web test, just do that for phase 2. :)
So, what you end up with is lots of tests, each test needs to be able to determine how long it took for it to actually run.
You then have lots of numbers, and you can start to analyze to see what works best for you.
For example, I had learned (a couple of years ago when I did this) that JSON was faster than XML, REST was faster than SOAP.
You may find that some things are much harder to do in some languages and so drop them from contention as you go through this process.
Writing the tests is the easy part, getting meaningful answers from the numbers will be the harder part, as your biases may color your analysis, so be careful of that.
I would do this with some real application so that the work isn't wasted, just duplicated.
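For the raw timing numbers each test should report, the stdlib timeit module already handles clock selection and loop repetition; a minimal harness sketch (the sample workloads are made up):

```python
import timeit

def bench(fn, number=100000):
    """Average seconds per call of the zero-argument callable fn."""
    return timeit.timeit(fn, number=number) / number

# hypothetical micro-benchmarks: empty-call overhead vs. a tiny workload
print("empty call: %.3e s" % bench(lambda: None))
print("small sum : %.3e s" % bench(lambda: sum(range(100))))
```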
A:
Better to take one of the existing benchmarks:
http://www.sellersrank.com/web-frameworks-benchmarking-results/
http://avnetlabs.com/php/php-framework-comparison-benchmarks
http://www.yiiframework.com/performance/
http://www.google.ru/search?q=php+benchmark+frameworks&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:ru:official&client=firefox
But if you really need to find out what framework will be faster for YOUR project - you will need to write a model of your project using that framework and test on it.
A:
You will spend a lot of time and come to the realization that it was all wasted.
After you complete your tests you will learn that loops of 1,000,000 empty iterations are far from real life, and you will move on to ApacheBench (ab).
Then you will come to know of opcode caches, which will ruin all your previous results.
Then you will learn that a single DB query takes 1000 times longer than an API call, so your comparisons of database access methods are really a waste.
Then you will learn of memcached, which will let you jump right over some terrible bottlenecks you've discovered already, etc.
| How can I benchmark different languages / frameworks? | I'd like to compare the performance of different languages and/or different frameworks within the same language. This is aimed at server-side languages used for web development. I know an apples to apples comparison is not possible, but I'd like it to be as unbiased as possible. Here are some ideas :
Simple "Hello World" page
Object initialization
Function/method calls
Method bodies will range from empty to large
File access (read and write)
Database access
They can either be measured by Requests per second or I can use a for loop and loop many times. Some of these benchmarks should measure the overhead the language has (ie: empty function call) rather than how fast they perform a certain task. I'll take some precautions:
They'll run on the same machine, on fresh installations with as few processes on the background as possible.
I'll try and set up the server as officially recommended; I will not attempt any optimizations.
How can I improve on this?
| [
"There's a lot of good advice (and a huge number of sample benchmarks for different languages) at http://shootout.alioth.debian.org/\nC.\n",
"What I have done is to write many unit tests so you can test the layers.\nFor example, write a SOAP web service in PHP, Python and C#. \nWrite a REST web service in the sa... | [
3,
2,
2,
0
] | [] | [] | [
"asp.net",
"benchmarking",
"frameworks",
"php",
"python"
] | stackoverflow_0003468227_asp.net_benchmarking_frameworks_php_python.txt |
Q:
Is there any language which is just "perfect" for web scraping?
I have used 3 languages for Web Scraping - Ruby, PHP and Python - and honestly none of them seems perfect for the task.
Ruby has an excellent mechanize and XML parsing library but the spreadsheet support is very poor.
PHP has excellent spreadsheet and HTML parsing library but it does not have an equivalent of WWW:Mechanize.
Python has a very poor Mechanize library. I had many problems with it and am still unable to solve them. Its spreadsheet library is also less than decent, since it is unable to create XLSX files.
Is there anything which is just perfect for web scraping?
PS: I am working on windows platform.
A:
Check Python + Scrapy, it is pretty good:
http://scrapy.org/
A:
Why not just use the XML Spreadsheet format? It's super simple to create, and it would probably be trivial with any type of class-based system.
Also, for Python have you tried BeautifulSoup for parsing? Urllib+BeautifulSoup makes a pretty powerful combo.
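If installing BeautifulSoup is not an option, the stdlib's HTMLParser can cover simple extraction; a toy sketch that collects link targets (written so the import works on both Python 2 and 3):

```python
try:
    from HTMLParser import HTMLParser   # Python 2
except ImportError:
    from html.parser import HTMLParser  # Python 3

class LinkCollector(HTMLParser):
    """Collects href targets of <a> tags; a tiny stand-in for BeautifulSoup."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            # attrs is a list of (name, value) pairs
            self.links.extend(v for k, v in attrs if k == 'href')

p = LinkCollector()
p.feed('<p><a href="/one">1</a> text <a href="/two">2</a></p>')
print(p.links)  # ['/one', '/two']
```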
A:
Short answer is no.
The problem is that HTML is a large family of formats - and only the more recent variants are consistent (and XML based). If you're going to use PHP then I would recommend using the DOM parser as this can handle a lot of html which does not qualify as well-formed XML.
Reading between the lines of your post - you seem to be:
1) capturing content from the web with a requirement for complex interaction management
2) parsing the data into a consistent machine readable format
3) writing the data to a spreadsheet
Which is certainly 3 separate problems - if no one language meets all 3 requirements then why not use the best tool for the job and just worry about a suitable interim format/medium for the data?
C.
A:
Python + Beautiful Soup for web scraping and since you are on windows, you could use win32com for Excel automation to generate your xlsx files.
| Is there any language which is just "perfect" for web scraping? | I have used 3 languages for Web Scraping - Ruby, PHP and Python - and honestly none of them seems perfect for the task.
Ruby has an excellent mechanize and XML parsing library but the spreadsheet support is very poor.
PHP has excellent spreadsheet and HTML parsing library but it does not have an equivalent of WWW:Mechanize.
Python has a very poor Mechanize library. I had many problems with it and am still unable to solve them. Its spreadsheet library is also less than decent, since it is unable to create XLSX files.
Is there anything which is just perfect for web scraping?
PS: I am working on windows platform.
| [
"Check Python + Scrapy, it is pretty good:\nhttp://scrapy.org/\n",
"Why not just use the XML Spreadsheet format? It's super simple to create, and it would probably be trivial with any type of class-based system.\nAlso, for Python have you tried BeautifulSoup for parsing? Urllib+BeautifulSoup makes a pretty power... | [
2,
1,
1,
0
] | [] | [] | [
"php",
"python",
"ruby",
"web_scraping"
] | stackoverflow_0003468028_php_python_ruby_web_scraping.txt |
Q:
Reusing a Django RSS Feed for different Date Ranges
What would be a way to have date-range-based RSS feeds in Django? For instance, if I had the following type of Django RSS feed model:
from django.contrib.syndication.feeds import Feed
from myapp.models import *
class PopularFeed(Feed):
    title = '%s : Latest SOLs' % settings.SITE_NAME
    link = '/'
    description = 'Latest entries to %s' % settings.SITE_NAME

    def items(self):
        return sol.objects.order_by('-date')
What would be a way to have PopularFeed used for All Time, Last Month, Last Week, Last 24 Hours, and vice-versa if I wanted to have LeastPopularFeed?
A:
You need to define a class for each feed you want. For example for Last Month feed:
class LastMonthFeed(Feed):
    def items(self):
        ts = datetime.datetime.now() - datetime.timedelta(days=30)
        return sol.objects.filter(date__gte=ts).order_by('-date')
Then add these feeds to your urls.py as shown in docs: http://docs.djangoproject.com/en/1.2/ref/contrib/syndication/
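If you need the same feed at several ranges (24 hours, week, month), you can generate the filters instead of copying a near-identical class per range. A framework-free sketch of the closure pattern - the actual Feed subclass and urls.py wiring are left to Django, and all names here are hypothetical:

```python
import datetime

def make_range_filter(days):
    """Return a callable keeping only entries newer than `days` days ago;
    inside a Feed subclass, items() would apply the same cutoff with
    sol.objects.filter(date__gte=cutoff)."""
    def newer_than(dates):
        cutoff = datetime.datetime.now() - datetime.timedelta(days=days)
        return sorted((d for d in dates if d >= cutoff), reverse=True)
    return newer_than

last_week = make_range_filter(7)
now = datetime.datetime.now()
entries = [now - datetime.timedelta(days=d) for d in (1, 3, 10, 40)]
print(len(last_week(entries)))  # 2: only the 1- and 3-day-old entries survive
```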
| Reusing a Django RSS Feed for different Date Ranges | What would be a way to have date-range-based RSS feeds in Django? For instance, if I had the following type of Django RSS feed model:
from django.contrib.syndication.feeds import Feed
from myapp.models import *
class PopularFeed(Feed):
    title = '%s : Latest SOLs' % settings.SITE_NAME
    link = '/'
    description = 'Latest entries to %s' % settings.SITE_NAME

    def items(self):
        return sol.objects.order_by('-date')
What would be a way to have PopularFeed used for All Time, Last Month, Last Week, Last 24 Hours, and vice-versa if I wanted to have LeastPopularFeed?
| [
"You need to define a class for each feed you want. For example for Last Month feed:\nclass LastMonthFeed(Feed):\n\n def items(self):\n ts = datetime.datetime.now() - datetime.timedelta(days=30)\n return sol.objects.filter(date__gte=ts).order_by('-date')\n\nThen add these feeds to your urls.py as sh... | [
1
] | [] | [] | [
"django",
"django_rss",
"feed",
"python"
] | stackoverflow_0003469631_django_django_rss_feed_python.txt |
Q:
Send mail with python using bcc
I'm working with Django. I need to send a mail to many addresses, and I want to do this with a high-level library like python-mailer, but I need to use the bcc field. Any suggestions?
A:
You should look at the EmailMessage class inside of Django; it supports bcc.
Complete docs available here:
http://docs.djangoproject.com/en/dev/topics/email/#the-emailmessage-class
Quick overview:
The EmailMessage class is initialized with the following parameters (in the given order, if positional arguments are used). All parameters are optional and can be set at any time prior to calling the send() method.
subject: The subject line of the e-mail.
body: The body text. This should be a plain text message.
from_email: The sender's address. Both fred@example.com and Fred <fred@example.com> forms are legal. If omitted, the DEFAULT_FROM_EMAIL setting is used.
to: A list or tuple of recipient addresses.
bcc: A list or tuple of addresses used in the "Bcc" header when sending the e-mail.
connection: An e-mail backend instance. Use this parameter if you want to use the same connection for multiple messages. If omitted, a new connection is created when send() is called.
attachments: A list of attachments to put on the message. These can be either email.MIMEBase.MIMEBase instances, or (filename, content, mimetype) triples.
headers: A dictionary of extra headers to put on the message. The keys are the header name, values are the header values. It's up to the caller to ensure header names and values are in the correct format for an e-mail message.
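The key property of bcc is that those addresses become SMTP envelope recipients without ever appearing in the message headers - which is what Django's EmailMessage arranges for you. A stdlib sketch of the same idea (the server and all addresses are made up, and the actual send is commented out):

```python
import smtplib
from email.mime.text import MIMEText

to = ['alice@example.com']
bcc = ['bob@example.com', 'carol@example.com']

msg = MIMEText('body text')
msg['Subject'] = 'hello'
msg['From'] = 'me@example.com'
msg['To'] = ', '.join(to)   # note: no 'Bcc' header is ever written

# the bcc addresses only show up in the envelope recipient list
# handed to the server, never in the serialized message itself:
recipients = to + bcc
# smtplib.SMTP('localhost').sendmail(msg['From'], recipients, msg.as_string())
```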
| Send mail with python using bcc | I'm working with Django. I need to send a mail to many addresses, and I want to do this with a high-level library like python-mailer, but I need to use the bcc field. Any suggestions?
| [
"You should look at the EmailMessage class inside of django, supports the bcc.\nComplete docs availble here:\n http://docs.djangoproject.com/en/dev/topics/email/#the-emailmessage-class\nQuick overview:\nThe EmailMessage class is initialized with the following parameters (in the given order, if positional argument... | [
4
] | [] | [] | [
"bcc",
"django",
"email",
"python"
] | stackoverflow_0003470172_bcc_django_email_python.txt |
Q:
What does logging.basicConfig do?
I have seen this in a lot of Python code. What does this do? What is it useful for?
logging.basicConfig(level=loglevel, format=myname)
A:
Please read the documentation - it answers your question in detail:
http://docs.python.org/library/logging.html#logging.basicConfig
| What does logging.basicConfig do? | I have seen this in a lot of Python code. What does this do? What is it useful for?
logging.basicConfig(level=loglevel, format=myname)
| [
"Please read the documentation - it answers your question in detail:\nhttp://docs.python.org/library/logging.html#logging.basicConfig\n"
] | [
4
] | [] | [] | [
"logging",
"python",
"scripting"
] | stackoverflow_0003470262_logging_python_scripting.txt |
Q:
virtualenv, sys.path and site-packages
I am setting up a virtualenv for Django deployment. I want an isolated env without access to the global site-packages. I used the option --no-site-packages, then installed a local pip instance for that env.
After using pip and a requirements.txt file, I noticed that most packages were installed in a "build" folder that is not in sys.path, so I am getting an error such as
"no module named django.conf"
I also installed virtualenvwrapper after the base virtualenv package.
As far as I can recall I have not seen a "build" folder before, and am curious why these packages weren't simply installed in my local env's site-packages folder. How should I go about pointing to that build folder, and why does it exist?
Thanks
A:
It seems that the pip process quit prematurely due to a package in requirements that could not be found. This left things in limbo, stuck in the temp-like "build" folder before having a chance to complete the process which gets them into the proper "site-packages" location.
| virtualenv, sys.path and site-packages | I am setting up a virtualenv for Django deployment. I want an isolated env without access to the global site-packages. I used the option --no-site-packages, then installed a local pip instance for that env.
After using pip and a requirements.txt file, I noticed that most packages were installed in a "build" folder that is not in sys.path, so I am getting an error such as
"no module named django.conf"
I also installed virtualenvwrapper after the base virtualenv package.
As far as I can recall I have not seen a "build" folder before, and am curious why these packages weren't simply installed in my local env's site-packages folder. How should I go about pointing to that build folder, and why does it exist?
Thanks
| [
"it seems that the pip process quit prematurely due to a package in requirements that could not be found. this left things in limbo, stuck in the temp-like \"build\" folder before having a chance to complete the process which gets them into the proper \"site-packages\" location.\n"
] | [
1
] | [] | [] | [
"django",
"pip",
"python",
"virtualenv"
] | stackoverflow_0003469551_django_pip_python_virtualenv.txt |
Q:
Understanding Python daemon threads
I've obviously misunderstood something fundamental about a Python Thread object's daemon attribute.
Consider the following:
daemonic.py
import sys, threading, time
class TestThread(threading.Thread):
def __init__(self, daemon):
threading.Thread.__init__(self)
self.daemon = daemon
def run(self):
x = 0
while 1:
if self.daemon:
print "Daemon :: %s" % x
else:
print "Non-Daemon :: %s" % x
x += 1
time.sleep(1)
if __name__ == "__main__":
print "__main__ start"
if sys.argv[1] == "daemonic":
thread = TestThread(True)
else:
thread = TestThread(False)
thread.start()
time.sleep(5)
print "__main__ stop"
From the python docs:
The entire Python program exits when
no alive non-daemon threads are left.
So if I run with TestThread as a daemon, I would expect it to exit once the main thread has completed. But this doesn't happen:
> python daemonic.py daemonic
__main__ start
Daemon :: 0
Daemon :: 1
Daemon :: 2
Daemon :: 3
Daemon :: 4
__main__ stop
Daemon :: 5
Daemon :: 6
^C
What don't I get?
As guessed by Justin and Brent, I was running with Python 2.5. Have just got home and tried out on my own machine running 2.7, and everything works fine. Thanks for your help!
A:
Your understanding about what daemon threads should do is correct.
As to why this isn't happening, I am guessing you are using an older version of Python. The Python 2.5.4 docs include a setDaemon(daemonic) function, as well as isDaemon() to check if a thread is a daemon thread. The 2.6 docs replace these with a directly modifiable daemon flag.
References:
http://docs.python.org/release/2.5.4/ (no daemon member mentioned)
http://docs.python.org/release/2.6/library/threading.html (includes daemon member)
A:
Just out of curiosity, what OS and what version of python are you running?
I'm on Python 2.6.2 on Mac OS X 10.5.8.
When I run your script, here's what I get:
bnash-macbook:Desktop bnash$ python daemon.py daemonic
__main__ start
Daemon :: 0
Daemon :: 1
Daemon :: 2
Daemon :: 3
Daemon :: 4
__main__ stop
Exception in thread Thread-1 (most likely raised during interpreter shutdown)
Which seems like exactly what you'd expect.
And here's the corresponding non-daemon behavior (up until I killed the process):
bnash-macbook:Desktop bnash$ python daemon.py asdf
__main__ start
Non-Daemon :: 0
Non-Daemon :: 1
Non-Daemon :: 2
Non-Daemon :: 3
Non-Daemon :: 4
__main__ stop
Non-Daemon :: 5
Non-Daemon :: 6
Non-Daemon :: 7
Non-Daemon :: 8
Terminated
Seems normal enough to me.
| Understanding Python daemon threads | I've obviously misunderstood something fundamental about a Python Thread object's daemon attribute.
Consider the following:
daemonic.py
import sys, threading, time
class TestThread(threading.Thread):
def __init__(self, daemon):
threading.Thread.__init__(self)
self.daemon = daemon
def run(self):
x = 0
while 1:
if self.daemon:
print "Daemon :: %s" % x
else:
print "Non-Daemon :: %s" % x
x += 1
time.sleep(1)
if __name__ == "__main__":
print "__main__ start"
if sys.argv[1] == "daemonic":
thread = TestThread(True)
else:
thread = TestThread(False)
thread.start()
time.sleep(5)
print "__main__ stop"
From the python docs:
The entire Python program exits when
no alive non-daemon threads are left.
So if I run with TestThread as a daemon, I would expect it to exit once the main thread has completed. But this doesn't happen:
> python daemonic.py daemonic
__main__ start
Daemon :: 0
Daemon :: 1
Daemon :: 2
Daemon :: 3
Daemon :: 4
__main__ stop
Daemon :: 5
Daemon :: 6
^C
What don't I get?
As guessed by Justin and Brent, I was running with Python 2.5. Have just got home and tried out on my own machine running 2.7, and everything works fine. Thanks for your help!
| [
"Your understanding about what daemon threads should do is correct. \nAs to why this isn't happening, I am guessing you are using an older version of Python. The Python 2.5.4 docs include a setDaemon(daemonic) function, as well as isDaemon() to check if a thread is a daemon thread. The 2.6 docs replace these wit... | [
13,
6
] | [] | [] | [
"python"
] | stackoverflow_0003470235_python.txt |
Q:
How to link one table to itself?
I'm trying to link one table to itself. I have media groups which can contain more media group. I created a relation many to many:
media_group_groups = Table(
"media_group_groups",
metadata,
Column("groupA_id", Integer, ForeignKey("media_groups.id")),
Column("groupB_id", Integer, ForeignKey("media_groups.id"))
)
class MediaGroup(rdb.Model):
"""Represents MediaGroup class. Conteins channels and other media groups"""
rdb.metadata(metadata)
rdb.tablename("media_groups")
id = Column("id", Integer, primary_key=True)
title = Column("title", String(100))
parents = Column("parents", String(512))
channels = relationship(Channel, secondary=media_group_channels, order_by=Channel.titleView, backref="media_groups")
mediaGroup = relationship("MediaGroup", secondary=media_group_groups, order_by="MediaGroup.title", backref="media_groups")
I got this error:
"ArgumentError: Could not determine join condition between parent/child tables on relationship MediaGroup.mediaGroup. Specify a 'primaryjoin' expression. If this is a many-to-many relationship, 'secondaryjoin' is needed as well."
When I create the tables I don't get any error, it's just when I add any element to it.
Any idea???
Thanks in advance!
A:
SQLAlchemy can't figure out which columns in your link table to join on. Try this for the relationship:
mediaGroup = relationship("MediaGroup",
secondary=media_group_groups,
order_by="MediaGroup.title",
backref=backref('media_groups',
secondary="media_media_groups",
primaryjoin= id == "groupB_id",
secondaryjoin = id == "groupA_id",
foreignkeys = ["groupA_id", "groupB_id"] ),
primaryjoin = id == "groupA_id",
secondaryjoin = id == "groupB_id")
This may need some adjustment -- if it doesn't work, try with the join column names being like "media_media_groups.groupA_id" throughout.
A:
I could be wrong, but I don't think you can link a table to itself many-to-many, can you?
You'd need to store the links in your records, and those would potentially need to be able to link to every other row in the table, no? Would seem that your columns would need to expand as fast as your records in that case.
What about a separate table that just contains your relationship?
tbl_links (id, primaryMediaGroupID, linkedMediaGroupID)
| How to link one table to itself? | I'm trying to link one table to itself. I have media groups which can contain more media group. I created a relation many to many:
media_group_groups = Table(
"media_group_groups",
metadata,
Column("groupA_id", Integer, ForeignKey("media_groups.id")),
Column("groupB_id", Integer, ForeignKey("media_groups.id"))
)
class MediaGroup(rdb.Model):
"""Represents MediaGroup class. Conteins channels and other media groups"""
rdb.metadata(metadata)
rdb.tablename("media_groups")
id = Column("id", Integer, primary_key=True)
title = Column("title", String(100))
parents = Column("parents", String(512))
channels = relationship(Channel, secondary=media_group_channels, order_by=Channel.titleView, backref="media_groups")
mediaGroup = relationship("MediaGroup", secondary=media_group_groups, order_by="MediaGroup.title", backref="media_groups")
I got this error:
"ArgumentError: Could not determine join condition between parent/child tables on relationship MediaGroup.mediaGroup. Specify a 'primaryjoin' expression. If this is a many-to-many relationship, 'secondaryjoin' is needed as well."
When I create the tables I don't get any error, it's just when I add any element to it.
Any idea???
Thanks in advance!
| [
"SQLAlchemy can't figure out which columns in your link table to join on. Try this for the relationship:\nmediaGroup = relationship(\"MediaGroup\",\n secondary=media_group_groups,\n order_by=\"MediaGroup.title\",\n backref=backref('media_groups', \n secondary=\"media_med... | [
1,
0
] | [] | [] | [
"python",
"sqlalchemy"
] | stackoverflow_0003470208_python_sqlalchemy.txt |
Q:
Build SQL queries in Python
Are there any python packages that help generating SQL queries from variables and classes?
For example, instead of writing create query manually, the developer will create a create table (as an object maybe), with desired columns in a list for instance. Then the object will return a string that will be used as a query. It would be a plus if such package can support multiple language syntax (SQLite, Oracle, MySQL, ...)
A:
Probably the best Object-Relational mapper package for Python today is the popular SqlAlchemy.
A:
The standard python MySQLdb package will do the right things about quoting variables if it's given a chance. If you're worried about SQL injection attacks.
c.executemany(
"""INSERT INTO breakfast (name, spam, eggs, sausage, price)
VALUES (%s, %s, %s, %s, %s)""",
[
("Spam and Sausage Lover's Plate", 5, 1, 8, 7.95 ),
("Not So Much Spam Plate", 3, 2, 0, 3.95 ),
("Don't Wany ANY SPAM! Plate", 0, 4, 3, 5.95 )
] )
Otherwise you should probably be looking at any number of python web frameworks - Django, etc. to abstract the database.
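If all you need is simple DDL generation, the standard library alone is enough. Here is a minimal sketch (the helper name and the column list are purely illustrative, not from any package) that builds a CREATE TABLE statement from a list of column definitions and runs it against an in-memory SQLite database. Note that table and column names cannot be escaped by the driver's parameter substitution, so they must come from trusted code, never from user input:

```python
import sqlite3

def create_table_sql(table, columns):
    """Build a CREATE TABLE statement from a list of (name, type) pairs."""
    cols = ", ".join("%s %s" % (name, sqltype) for name, sqltype in columns)
    return "CREATE TABLE %s (%s)" % (table, cols)

# Illustrative table definition
query = create_table_sql("student", [("id", "INTEGER PRIMARY KEY"),
                                     ("name", "TEXT"),
                                     ("age", "INTEGER")])

conn = sqlite3.connect(":memory:")
conn.execute(query)  # the generated DDL is accepted by SQLite
```

For anything beyond toy cases, though, the SQLAlchemy expression language does exactly this kind of query building with proper dialect support.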
| Build SQL queries in Python | Are there any python packages that help generating SQL queries from variables and classes?
For example, instead of writing create query manually, the developer will create a create table (as an object maybe), with desired columns in a list for instance. Then the object will return a string that will be used as a query. It would be a plus if such package can support multiple language syntax (SQLite, Oracle, MySQL, ...)
| [
"Probably the best Object-Relational mapper package for Python today is the popular SqlAlchemy.\n",
"The standard python MySQLdb package will do the right things about quoting variables if it's given a chance. If you're worried about SQL injection attacks.\nc.executemany(\n \"\"\"INSERT INTO breakfast (name... | [
2,
1
] | [] | [] | [
"python",
"sql"
] | stackoverflow_0003469990_python_sql.txt |
Q:
In Django, how to limit entries from each user to a specific number N (N > 1)?
I have a Django web application that's similar to the typical Q&A system.
A user asks a question. other users submit answers to that question:
Each user is allowed to submit up to N answers to each question,
where N > 1 (so, say each user can submit no more than 3 answers to
each question)
A user can edit his existing answers, or submit new answer(s) if he
hasn't already reached his limit.
It's straightforward to do this if each user is allowed only 1 answer
- just do:
unique_together = (("user.id", "question_id"),)
But in case of N>1, what is the best way to implement this?
A:
I'd add the following method to your Question model:
class Question(models.Model):
def can_answer(self, user):
        return Answer.objects.filter(question=self, user=user).count() < 3
You can use this method to decide if a user can add answers to the question.
A:
This is a business rule that you will have to enforce at the application level rather than the database level. Hence you will have to check at the time of answering if the user can indeed post an answer. This can be done in multiple ways. Andrey's answer is one way of doing this. Another way would be to check for this during Answer.save(). If you prefer to keep the logic away from models you can define an utility function can_answer(user, question) that returns True or False and invoke it from the view.
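The rule itself is framework-independent; here is a minimal sketch of the counting logic using a plain in-memory store (the names and the dict standing in for the Answer table are purely illustrative; in the real app the count comes from an ORM query as in the previous answer):

```python
from collections import defaultdict

ANSWER_LIMIT = 3  # the "N" from the question

# Hypothetical in-memory stand-in for the Answer table,
# keyed by (user_id, question_id)
answer_counts = defaultdict(int)

def can_answer(user_id, question_id):
    return answer_counts[(user_id, question_id)] < ANSWER_LIMIT

def post_answer(user_id, question_id):
    if not can_answer(user_id, question_id):
        raise ValueError("answer limit reached")
    answer_counts[(user_id, question_id)] += 1
```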
| In Django, how to limit entries from each user to a specific number N (N > 1)? | I have a Django web application that's similar to the typical Q&A system.
A user asks a question. other users submit answers to that question:
Each user is allowed to submit up to N answers to each question,
where N > 1 (so, say each user can submit no more than 3 answers to
each question)
A user can edit his existing answers, or submit new answer(s) if he
hasn't already reached his limit.
It's straightforward to do this if each user is allowed only 1 answer
- just do:
unique_together = (("user.id", "question_id"),)
But in case of N>1, what is the best way to implement this?
| [
"I'd add the following method to your Question model:\nclass Question(models.Model):\n\n def can_answer(self, user):\n return Answer.filter(question=self, user=user).count() < 3\n\nYou can use this method to decide if a user can add answers to the question.\n",
"This is a business rule that you will hav... | [
2,
2
] | [] | [] | [
"database",
"django",
"python"
] | stackoverflow_0003470120_database_django_python.txt |
Q:
Anyone else having trouble keeping ghettoq running in the background with django-celery?
nohup python manage.py celeryd -f queue.log 2>queue.err 1>queue.out &
Handles one request fine, then the client app posting the next job to the queues fails with this traceback.
tasks.spawn_job.delay(details)
File "/releases/env/lib/python2.6/site-packages/celery/task/base.py", line 321, in delay
return self.apply_async(args, kwargs)
File "/releases/env/lib/python2.6/site-packages/celery/task/base.py", line 337, in apply_async
return apply_async(self, args, kwargs, **options)
File "/releases/env/lib/python2.6/site-packages/celery/messaging.py", line 248, in _inner
return fun(*args, **kwargs)
File "/releases/env/lib/python2.6/site-packages/celery/execute/__init__.py", line 101, in apply_async
publisher or publish.close()
File "/releases/env/lib/python2.6/site-packages/carrot/messaging.py", line 766, in close
self.backend.close()
File "/releases/env/lib/python2.6/site-packages/ghettoq/taproot.py", line 188, in close
for consumer_tag in self._t.consumers.keys():
AttributeError: 'thread._local' object has no attribute 'consumers'
We are switching to rabbitMQ, since it "...just works"
A:
Switching to RabbitMQ is probably a good idea. But note that this is a bug fixed in the master
branch of ghettoq.
| Anyone else having trouble keeping ghettoq running in the background with django-celery? | nohup python manage.py celeryd -f queue.log 2>queue.err 1>queue.out &
Handles one request fine, then the client app posting the next job to the queues fails with this traceback.
tasks.spawn_job.delay(details)
File "/releases/env/lib/python2.6/site-packages/celery/task/base.py", line 321, in delay
return self.apply_async(args, kwargs)
File "/releases/env/lib/python2.6/site-packages/celery/task/base.py", line 337, in apply_async
return apply_async(self, args, kwargs, **options)
File "/releases/env/lib/python2.6/site-packages/celery/messaging.py", line 248, in _inner
return fun(*args, **kwargs)
File "/releases/env/lib/python2.6/site-packages/celery/execute/__init__.py", line 101, in apply_async
publisher or publish.close()
File "/releases/env/lib/python2.6/site-packages/carrot/messaging.py", line 766, in close
self.backend.close()
File "/releases/env/lib/python2.6/site-packages/ghettoq/taproot.py", line 188, in close
for consumer_tag in self._t.consumers.keys():
AttributeError: 'thread._local' object has no attribute 'consumers'
We are switching to rabbitMQ, since it "...just works"
| [
"Switching to RabbitMQ is probably a good idea. But note that this is a bug fixed in the master\n branch of ghettoq.\n"
] | [
0
] | [] | [] | [
"celery",
"django",
"python"
] | stackoverflow_0003465568_celery_django_python.txt |
Q:
Java or any other language: Which method/class invoked mine?
I would like to write code internal to my method that prints which method/class invoked it.
(My assumption is that I can't change anything but my method..)
How about other programming languages?
EDIT: Thanks guys, how about JavaScript? python? C++?
A:
This is specific to Java.
You can use Thread.currentThread().getStackTrace(). This will return an array of StackTraceElements.
The 2nd element in the array will be the calling method.
Example:
public void methodThatPrintsCaller() {
    StackTraceElement elem = Thread.currentThread().getStackTrace()[2];
    System.out.println(elem);
    // rest of your code
}
A:
If all you want to do is print out the stack trace and go hunting for the class, use
Thread.dumpStack();
See the API doc.
A:
Justin has the general case down; I wanted to mention two special cases demonstrated by this snippit:
import java.util.Comparator;
public class WhoCalledMe {
public static void main(String[] args) {
((Comparator)(new SomeReifiedGeneric())).compare(null, null);
new WhoCalledMe().new SomeInnerClass().someInnerMethod();
}
public static StackTraceElement getCaller() {
//since it's a library function we use 3 instead of 2 to ignore ourself
return Thread.currentThread().getStackTrace()[3];
}
private void somePrivateMethod() {
System.out.println("somePrivateMethod() called by: " + WhoCalledMe.getCaller());
}
private class SomeInnerClass {
public void someInnerMethod() {
somePrivateMethod();
}
}
}
class SomeReifiedGeneric implements Comparator<SomeReifiedGeneric> {
public int compare(SomeReifiedGeneric o1, SomeReifiedGeneric o2) {
System.out.println("SomeRefiedGeneric.compare() called by: " + WhoCalledMe.getCaller());
return 0;
}
}
This prints:
SomeRefiedGeneric.compare() called by: SomeReifiedGeneric.compare(WhoCalledMe.java:1)
somePrivateMethod() called by: WhoCalledMe.access$0(WhoCalledMe.java:14)
Even though the first is called "directly" from main() and the second from SomeInnerClass.someInnerMethod(). These are two cases where there is a transparent call made in between the two methods.
In the first case, this is because we are calling the bridge method to a generic method, added by the compiler to ensure SomeReifiedGeneric can be used as a raw type.
In the second case, it is because we are calling a private member of WhoCalledMe from an inner class. To accomplish this, the compiler adds a synthetic method as a go-between to override the visibility problems.
A:
The sequence of method calls is stored on the stack. This is how you get the stack: Get current stack trace in Java, then get the previous item.
A:
Since you asked about other languages, Tcl gives you a command (info level) that lets you examine the call stack. For example, [info level -1] returns the caller of the current procedure, as well as the arguments used to call the current procedure.
A:
In Python you use the inspect module.
Getting the function's name and file name is easy, as you see in the example below.
Getting the function itself is more work. I think you could use the __import__ function to import the caller's module. However you must somehow convert the filename to a valid module name.
import inspect
def find_caller():
caller_frame = inspect.currentframe().f_back
print "Called by function:", caller_frame.f_code.co_name
print "In file :", caller_frame.f_code.co_filename
#Alternative, probably more portable way
#print inspect.getframeinfo(caller_frame)
def foo():
find_caller()
foo()
A:
Yes, it is possible.
Have a look at Thread.getStackTrace()
A:
In Python, you should use the traceback or inspect modules. These will modules will shield you from the implementation details of the interpreter, which can differ even today (e.g. IronPython, Jython) and may change even more in the future. The way these modules do it under the standard Python interpreter today, however, is with sys._getframe(). In particular, sys._getframe(1).f_code.co_name provides the information you want.
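To make that concrete, here is a minimal sketch of the _getframe approach (the function names are just for illustration; note again that sys._getframe is a CPython implementation detail, which is exactly why inspect/traceback are the portable choice):

```python
import sys

def calling_function_name():
    # Frame 0 is this function itself, frame 1 is whoever called it
    return sys._getframe(1).f_code.co_name

def outer():
    return calling_function_name()

caller = outer()  # caller == "outer"
```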
| Java or any other language: Which method/class invoked mine? | I would like to write code internal to my method that prints which method/class invoked it.
(My assumption is that I can't change anything but my method..)
How about other programming languages?
EDIT: Thanks guys, how about JavaScript? python? C++?
| [
"This is specific to Java.\nYou can use Thread.currentThread().getStackTrace(). This will return an array of StackTraceElements.\nThe 2nd element in the array will be the calling method.\nExample:\npublic void methodThatPrintsCaller() {\n StackTraceElement elem = Thread.currentThread.getStackTrace()[2];\n Sy... | [
20,
4,
4,
3,
1,
1,
0,
0
] | [] | [] | [
"classloader",
"java",
"javascript",
"programming_languages",
"python"
] | stackoverflow_0003468101_classloader_java_javascript_programming_languages_python.txt |
Q:
django, show fields on admin site that are not in model
Building a Django app.
Class Company(models.Model):
trucks = models.IntegerField()
multiplier = models.IntegerField()
#capacity = models.IntegerField()
The 'capacity' field is actually the sum of (trucks * multiplier). So I don't need a database field for it since I can calculate it.
However, my admin user wants to see this total on the admin screen.
What's the easiest/best way to customize the admin view to allow me to do this?
A:
define a method on company that returns the product of trucks and multiplier
def capacity(self):
return self.trucks * self.multiplier
and in the admin model set
list_display = ('trucks', 'multiplier', 'capacity')
| django, show fields on admin site that are not in model | Building a Django app.
Class Company(models.Model):
trucks = models.IntegerField()
multiplier = models.IntegerField()
#capacity = models.IntegerField()
The 'capacity' field is actually the sum of (trucks * multiplier). So I don't need a database field for it since I can calculate it.
However, my admin user wants to see this total on the admin screen.
What's the easiest/best way to customize the admin view to allow me to do this?
| [
"define a method on company that returns the product of trucks and multiplier\ndef capacity(self):\n return self.trucks * self.multiplier\n\nand in the admin model set\nlist_display = ('trucks', 'multiplier', 'capacity')\n\n"
] | [
3
] | [] | [] | [
"django",
"django_admin",
"python"
] | stackoverflow_0003471035_django_django_admin_python.txt |
Q:
Invalid syntax error in simple Python-3 program
from TurtleWorld import *
import math
bob = Turtle()
print(bob)
draw_circle(turtle, r):
d = r*2
c = d*math.pi
degrees = 360/25
length = c // 25
for i in range(25):
fd(turtle, length)
rt(turtle, degrees)
draw_circle(bob, 25)
wait_for_user()
The problem is on line 7:
draw_circle(turtle, r):
The compiler only tells me that there is a syntax error and highlights the colon at the
end of that line.
I'm sure I'm missing something simple, but the code looks right to me.
A:
in python, we define functions using the def keyword.. like
def draw_circle(turtle, r):
# ...
A:
You need to write:
def draw_circle(turtle, r):
to define a function.
A:
http://docs.python.org/release/3.0.1/tutorial/controlflow.html#defining-functions
You're missing the def part?
A:
I thought, just in case the other three answers weren't obvious enough, I should tell you that you need def first
def draw_circle(turtle, r):
@People duplicating:
Seriously, could we get 1 more answer? I believe 3 (4 if you add me) isn't enough
| Invalid syntax error in simple Python-3 program | from TurtleWorld import *
import math
bob = Turtle()
print(bob)
draw_circle(turtle, r):
d = r*2
c = d*math.pi
degrees = 360/25
length = c // 25
for i in range(25):
fd(turtle, length)
rt(turtle, degrees)
draw_circle(bob, 25)
wait_for_user()
The problem is on line 7:
draw_circle(turtle, r):
The compiler only tells me that there is a syntax error and highlights the colon at the
end of that line.
I'm sure I'm missing something simple, but the code looks right to me.
| [
"in python, we define functions using the def keyword.. like\ndef draw_circle(turtle, r):\n # ...\n\n",
"You need to write:\ndef draw_circle(turtle, r):\n\nto define a function.\n",
"http://docs.python.org/release/3.0.1/tutorial/controlflow.html#defining-functions\nYour missing the def part?\n",
"I thought... | [
2,
1,
1,
0
] | [] | [] | [
"python",
"python_3.x",
"syntax",
"syntax_error"
] | stackoverflow_0003471165_python_python_3.x_syntax_syntax_error.txt |
Q:
Python inline of XML or ASCII string/template?
I am generating complex XML files through Python scripts that require a bunch of conditional statements (example http://repository.azgs.az.gov/uri_gin/azgs/dlio/536/iso19139.xml). I am working with multiple XML or ASCII metadata standards that often have poor schema or are quite vague.
In PHP, I just wrote out the XML and inserted PHP snippets where needed. Is there an easy way to do that in Python? I am trying to avoid having to escape all that XML. The inline method is also very helpful for tweaking the template without much rewrite.
I have looked a bit into Python templating solutions but they appeared either too static or were overkill. Moving the whole XML into an XML object is a lot of work at a cost of flexibility when changing the XML or ASCII template.
Thanks for the newbie support!
A:
wgrunberg,
I use Python's built-in string.Template class like so:
from string import Template
the_template = Template("<div id='$section_id'>First name: $first</div>")
print the_template.substitute(section_id="anID", first="Sarah")
The output of the above is:
<div id='anID'>First name: Sarah</div>
In the above example I showed an XML template, but any "template" that you can describe as a string would work.
To do conditionals you can do something like:
print the_template.substitute(section_id="theID", first="Sarah" if 0==0 else "John")
If your conditionals are complex, instead of expressing them inline as above, consider breaking them out into closures/functions.
A:
Try any modern non-XML-based Python templating engine, e.g. Mako or Jinja2. They are fairly easy to integrate into your project, and then you will be able to write things such as:
<sometag>
%if a > b:
<anothertag />
%endif
</sometag>
You can also use inline python, including assignments.
A:
@Gintautas' suggestion of using a good template engine would also be my first choice (especially Mako, whose "templating language" is basically Python plus some markup;-).
However, alternatives many people prefer to build XML files include writing them from the DOM, e.g. using (to stick with something in the standard library) ElementTree or (a third-party package, but zero trouble to install and very popular) BeautifulSoup (by all means stick with its 3.0.x release unless you're using Python 3!), specifically the BeautifulStoneSoup class (the BeautifulSoup class is for processing HTML, the stone one for XML).
A:
Going for a template is the sensible thing to do. Using the built-in string template over external libraries would be my preferred method because I need to be able to pass on the Python ETL scripts easily to other people. However, I could get away with using templates by putting the XML string inside single quotes and using multi-line strings. Following is an example of this crude method.
for row in metadata_dictionary:
iso_xml = ''
iso_xml += ' *** a bunch of XML *** '
iso_xml += '\
<gmd:contact> \n\
<gmd:CI_ResponsibleParty> \n'
if row['MetadataContactName']:
iso_xml += '\
<gmd:individualName> \n\
<gco:CharacterString>'+row['MetadataContactName'].lower().strip()+'</gco:CharacterString> \n\
</gmd:individualName> \n'
if row['MetadataContactOrganisation']:
iso_xml += '\
<gmd:organisationName> \n\
<gco:CharacterString>'+row['MetadataContactOrganisation'].lower().strip()+'</gco:CharacterString> \n\
</gmd:organisationName> \n'
iso_xml += ' *** more XML *** '
| Python inline of XML or ASCII string/template? | I am generating complex XML files through Python scripts that require a bunch of conditional statements (example http://repository.azgs.az.gov/uri_gin/azgs/dlio/536/iso19139.xml). I am working with multiple XML or ASCII metadata standards that often have poor schema or are quite vague.
In PHP, I just wrote out the XML and inserted PHP snippets where needed. Is there an easy way to do that in Python? I am trying to avoid having to escape all that XML. The inline method is also very helpful for tweaking the template without much rewrite.
I have looked a bit into Python templating solutions but they appeared either too static or were overkill. Moving the whole XML into an XML object is a lot of work at a cost of flexibility when changing the XML or ASCII template.
Thanks for the newbie support!
| [
"wgrunberg,\nI use Python's built-in string.Template class like so:\nfrom string import Template\nthe_template = Template(\"<div id='$section_id'>First name: $first</div>\")\nprint the_template.substitute(section_id=\"anID\", first=\"Sarah\")\n\nThe output of the above is:\n<div id='anID'>First name: Sarah</div>\n\... | [
3,
1,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0003454437_python.txt |
Q:
Weird behaviour with lxml getiterator()
I have the following XML document:
<x>
<a>Some text</a>
<b>Some text 2</b>
<c>Some text 3</c>
</x>
I want to get the text of all the tags, so I decided to use getiterator().
My problem is, it adds up blank lines for a reason I can't understand. Consider this:
>>> for text in document_root.getiterator():
... print text.text
...
Some text
Some text 2
Some text 3
Notice the two extra blank lines before 'Some text'. What is the reason for this? If I pass a tag to the getiterator() method, there are no blank lines, as it should be.
>>> for text in document_root.getiterator('a'):
... print text.text
...
Some text
So my question is, what is causing those extra blank lines in case I pass getiterator() without a tag and how do I remove them?
A:
By default lxml.etree will regard empty text between tags as the textual content for that tag and in your case the whitespace being displayed comes from <x>. If you want a parser that ignores the whitespace you'll want to do something like:
from lxml import etree
parser = etree.XMLParser(remove_blank_text=True)
tree = etree.XML("""\
<x>
<a>Some text</a>
<b>Some text 2</b>
<c>Some text 3</c>
</x>
""", parser)
for node in tree.iter():
if node.text == None: continue
print node.text
Note how node.text will return None if there is no text at all. Also note that the API documentation for lxml states that getiterator() is deprecated in favor of iter().
For more information see The lxml.etree Tutorial: Parser objects.
A:
Although I'm not sure, I would assume it's trying to read the text directly inside <x>.
Anyhow, what's wrong with
for node in document_root.getiterator():
    if node.text is None or not node.text.strip(): continue
    print node.text
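The same whitespace behaviour shows up with the standard library's xml.etree.ElementTree, which shares the iteration API but has no remove_blank_text option, so the filtering has to happen in the loop (a small self-contained sketch):

```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<x>\n  <a>Some text</a>\n  <b>Some text 2</b>\n</x>")

# Elements whose .text is missing or pure whitespace (here <x> itself) are skipped
texts = [node.text for node in root.iter()
         if node.text is not None and node.text.strip()]
# texts == ['Some text', 'Some text 2']
```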
| Weird behaviour with lxml getiterator() | I have the following XML document:
<x>
<a>Some text</a>
<b>Some text 2</b>
<c>Some text 3</c>
</x>
I want to get the text of all the tags, so I decided to use getiterator().
My problem is, it adds up blank lines for a reason I can't understand. Consider this:
>>> for text in document_root.getiterator():
... print text.text
...
Some text
Some text 2
Some text 3
Notice the two extra blank lines before 'Some text'. What is the reason for this? If I pass a tag to the getiterator() method, there are no blank lines, as it should be.
>>> for text in document_root.getiterator('a'):
... print text.text
...
Some text
So my question is, what is causing those extra blank lines in case I pass getiterator() without a tag and how do I remove them?
| [
"By default lxml.etree will regard empty text between tags as the textual content for that tag and in your case the whitespace being displayed comes from <x>. If you want a parser that ignores the whitespace you'll want to do something like:\nfrom lxml import etree\n\nparser = etree.XMLParser(remove_blank_text=True... | [
2,
0
] | [] | [] | [
"lxml",
"python"
] | stackoverflow_0003470929_lxml_python.txt |
Q:
Python Class in module not loading in one computer, but the other
So I have two files:
File 1 has this method in it:
import MyGlobals
global old_function
def init():
import ModuleB
global old_function
MyGlobals.SomeNumber = 0
old_function = ModuleB.someClass.function
ModuleB.someClass.function = someNewFunction
File 2 has a class "someClass" and a class "someOtherClass". That being said.
When I run my code on my computer it works great and it does what I expect it to. When I run this code on my friend's computer, which is the same build of Windows 7 with the same Python version 2.5.4 and with the same code (on a thumb drive for both), it gets an error "Module does not contain someClass"
I hope this makes sense in what I am trying to say. It is work related, therefore code snippets aren't allowed. This one has me extremely stumped on why this would be the case. I even tried "from ModuleB import someClass" to see if someClass would work, but it still said that someClass is not in ModuleB, while someClass is definitely in ModuleB...
Any ideas would greatly be appreciated!
A:
Well, it's fairly clear that you are using different versions of ModuleB. I would hazard a guess that even though you are running the code from a thumb drive, you have put ModuleB.py somewhere else in your PYTHONPATH and it is running that version on your computer, but not on your friend's. This is easy to check:
import ModuleB
print ModuleB.__file__
I'll bet that doesn't print what you're expecting!
On a different note, you don't need the first global declaration in your code snippet -- that's already the global scope.
| Python Class in module not loading in one computer, but the other | So I have two files:
File 1 has this method in it:
import MyGlobals
global old_function
def init():
import ModuleB
global old_function
MyGlobals.SomeNumber = 0
old_function = ModuleB.someClass.function
ModuleB.someClass.function = someNewFunction
File 2 has a class "someClass" and a class "someOtherClass". That being said.
When I run my code on my computer it works great and it does what I expect it to. When I run this code on my friend's computer, which is the same build of Windows 7 with the same Python version 2.5.4 and with the same code (on a thumb drive for both), it gets an error "Module does not contain someClass"
I hope this makes sense in what I am trying to say. It is work related, therefore code snippets aren't allowed. This one has me extremely stumped on why this would be the case. I even tried "from ModuleB import someClass" to see if someClass would work, but it still said that someClass is not in ModuleB, while someClass is definitely in ModuleB...
Any ideas would greatly be appreciated!
| [
"Well, it's fairly clear that you are using different versions of ModuleB. I would hazard a guess that even though you are running the code from a thumb drive, you have put ModuleB.py somewhere else in your PYTHONPATH and it is running that version on your computer, but not on your friend's. This is easy to check:\... | [
2
] | [] | [] | [
"module",
"python",
"python_module"
] | stackoverflow_0003471416_module_python_python_module.txt |
Q:
Sorting a python array
opt=[]
opt=["opt3","opt2","opt7","opt6","opt1"]
for i in range(len(opt)):
print opt[i]
Output for the above is
opt3,opt2,opt7,opt6,opt1
How do I sort the above array in ascending order?
A:
Use .sort() if you want to sort the original list. (opt.sort())
Use sorted() if you want a sorted copy of it.
A:
print sorted(opt)
A:
Depends on whether or not you want a natural sort (which I think you do) or not.
If you use sorted() or .sort() you'll get:
>>> opt = ["opt3", "opt2", "opt7", "opt6", "opt1", "opt10", "opt11"]
>>> print sorted(opt)
['opt1', 'opt10', 'opt11', 'opt2', 'opt3', 'opt6', 'opt7']
While you'll probably want ['opt1', 'opt2', 'opt3', 'opt6', 'opt7', 'opt10', 'opt11'].
If so you'll want to look into natural sorting (there are various variations on the function mentioned in that article).
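As an illustration, a minimal natural-sort key can be built with re.split (the helper name natural_key is just for this sketch; the article linked above describes more complete variations):

```python
import re

def natural_key(s):
    # Split into alternating non-digit/digit runs, converting digit runs
    # to ints so "opt10" sorts after "opt2" rather than after "opt1"
    return [int(part) if part.isdigit() else part
            for part in re.split(r'(\d+)', s)]

opt = ["opt3", "opt2", "opt7", "opt6", "opt1", "opt10", "opt11"]
print(sorted(opt, key=natural_key))
# ['opt1', 'opt2', 'opt3', 'opt6', 'opt7', 'opt10', 'opt11']
```

This works because list comparison is element-wise, so the numeric parts are compared as numbers instead of strings.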
| Sorting a python array | opt=[]
opt=["opt3","opt2","opt7","opt6","opt1"]
for i in range(len(opt)):
print opt[i]
Output for the above is
opt3,opt2,opt7,opt6,opt1
How do I sort the above array in ascending order?
| [
"Use .sort() if you want to sort the original list. (opt.sort())\nUse sorted() if you want a sorted copy of it.\n",
"print sorted(opt)\n",
"Depends on whether or not you want a natural sort (which I think you do) or not.\nIf you use sorted() or .sort() you'll get:\n>>> opt = [\"opt3\", \"opt2\", \"opt7\", \"opt... | [
8,
2,
0
] | [] | [] | [
"python",
"sorting"
] | stackoverflow_0003470436_python_sorting.txt |
Q:
memory size of Python data structure
How do I find out the memory size of a Python data structure? I'm looking for something like:
sizeof({1:'hello', 2:'world'})
It would be great if it counted everything recursively, but even a basic non-recursive result helps. Basically I want to get a sense of various implementation options, like tuple vs. list vs. class, in terms of memory footprint. It matters because I'm planning to have millions of objects instantiated.
My current dev platform is CPython 2.6.
A:
Have a look at the sys.getsizeof function. According to the documentation, it returns the size of an object in bytes, as given by the object's __sizeof__ method.
As Daniel pointed out in a comment, it's not recursive; it only counts bytes occupied by the object itself, not other objects it refers to. This recipe for a recursive computation is linked to by the Python 3 documentation.
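To give a rough idea of what such a recursive computation looks like, here is a simplified sketch (not the linked recipe; the function name total_size is ours, and it only handles a few common container types):

```python
import sys

def total_size(obj, seen=None):
    """Rough recursive size estimate in bytes; handles dicts and sequences."""
    if seen is None:
        seen = set()
    if id(obj) in seen:          # avoid double-counting shared objects
        return 0
    seen.add(id(obj))
    size = sys.getsizeof(obj)
    if isinstance(obj, dict):
        size += sum(total_size(k, seen) + total_size(v, seen)
                    for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set, frozenset)):
        size += sum(total_size(item, seen) for item in obj)
    return size

print(total_size({1: 'hello', 2: 'world'}))
```

Note that the result is always larger than plain sys.getsizeof on the same container, since it adds the sizes of the contained keys and values.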
| memory size of Python data structure | How do I find out the memory size of a Python data structure? I'm looking for something like:
sizeof({1:'hello', 2:'world'})
It would be great if it counted everything recursively, but even a basic non-recursive result helps. Basically I want to get a sense of various implementation options, like tuple vs. list vs. class, in terms of memory footprint. It matters because I'm planning to have millions of objects instantiated.
My current dev platform is CPython 2.6.
| [
"Have a look at the sys.getsizeof function. According to the documentation, it returns the size of an object in bytes, as given by the object's __sizeof__ method.\nAs Daniel pointed out in a comment, it's not recursive; it only counts bytes occupied by the object itself, not other objects it refers to. This recipe ... | [
26
] | [] | [] | [
"data_structures",
"memory",
"memory_management",
"python"
] | stackoverflow_0003471559_data_structures_memory_memory_management_python.txt |
Q:
Python project deployment design
Here is the situation: the company that I'm working in right now gave me the freedom to work with either java or python to develop my applications. The company has mainly experience in java.
I have decided to go with Python, so they were very happy to ask me to maintain all the Python projects/scripts related to the database maintenance that they have.
It's not that bad to handle all that stuff, and it's kind of fun to see how much free time I have compared to the Java programmers. There is just one but: the projects' layout is a mess.
There are many scripts that simply sit in virtual machines all over the company. Some of them have complex functionality that is spread across a few modules (4 at maximum).
While thinking about it, I realized that I don't know how to address that, so here are 3 questions.
Where do I put standalone scripts? We use git as our versioning system.
How do I structure the project's layout in a way that the user does not need to dig deep into the folders to run the programs? (In Java I created a jar, or a jar and a shell script, to handle some bootstrap operations.)
What is a standard way to create modules that allow easy reusability (mycompany.myapp.mymodule)?
A:
A package is a way of creating a module hierarchy: if you make a file called __init__.py in a directory, Python will treat that directory as a package and allow you to import its contents using dotted imports:
spam \
__init__.py
ham.py
eggs.py
import spam.ham
The modules inside a package can reference each other -- see the docs.
If these are all DB maintenance scripts, I would make a package called DB or something, and place them all in it. You can have subpackages for the more complicated ones. So if you had a script for, I don't know, cleaning up the transaction logs, you could put it in ourDB.clean and do
import ourDB.clean
ourDB.clean.transaction_logs( )
A:
Where do I put standalone scripts?
You organize them "functionally" -- based on what they do and why people use them.
The language (Python vs. Java) is irrelevant.
You have to think of scripts as small applications focused on some need and create appropriate directory structures for that application.
We use /opt/thisapp and /opt/thatapp. If you want a shared mount-point, you might use a different path.
How do structure the project's layout in a way that the user do not need to dig deep into the folders to run the programs
You organize them "functionally" -- based on what they do and why people use them. At the top level of a /opt/thisapp directory, you might have an __init__.py (because it's a package) and perhaps a main.py script which starts the real work.
In Python 2.7 and Python 3, you have the runpy module. With this you would name your
top-level main script __main__.py
http://docs.python.org/library/runpy.html#module-runpy
What is a standard way to create modules that allow easy reusability(mycompany.myapp.mymodule?)
Read about packages. http://docs.python.org/tutorial/modules.html#packages
| Python project deployment design | Here is the situation: the company that I'm working in right now gave me the freedom to work with either java or python to develop my applications. The company has mainly experience in java.
I have decided to go with Python, so they were very happy to ask me to maintain all the Python projects/scripts related to the database maintenance that they have.
It's not that bad to handle all that stuff, and it's kind of fun to see how much free time I have compared to the Java programmers. There is just one but: the projects' layout is a mess.
There are many scripts that simply sit in virtual machines all over the company. Some of them have complex functionality that is spread across a few modules (4 at maximum).
While thinking about it, I realized that I don't know how to address that, so here are 3 questions.
Where do I put standalone scripts? We use git as our versioning system.
How do I structure the project's layout in a way that the user does not need to dig deep into the folders to run the programs? (In Java I created a jar, or a jar and a shell script, to handle some bootstrap operations.)
What is a standard way to create modules that allow easy reusability (mycompany.myapp.mymodule)?
| [
"A package is a way of creating a module hierarchy: if you make a file called __init__.py in a directory, Python will treat that directory as a package and allow you to import its contents using dotted imports:\nspam \\\n __init__.py\n ham.py\n eggs.py\n\nimport spam.ham\n\nThe modules inside a pa... | [
2,
2
] | [] | [] | [
"project_layout",
"python"
] | stackoverflow_0003471413_project_layout_python.txt |
Q:
Auto Increment While Building list in Python
Here is what I have so far. Is there anyway, I can auto increment the list while it is being built? So instead of having all ones, I'd have 1,2,3,4....
possible = []
possible = [1] * 100
print possible
Thanks,
Noah
A:
possible = range(1, 101)
Note that the end point (101 in this case) is not part of the resulting list.
A:
Something like this?
start=1
count= 100
possible = [num for num in range(start,start+count)]
print possible
| Auto Increment While Building list in Python | Here is what I have so far. Is there anyway, I can auto increment the list while it is being built? So instead of having all ones, I'd have 1,2,3,4....
possible = []
possible = [1] * 100
print possible
Thanks,
Noah
| [
"possible = range(1, 101)\n\nNote that the end point (101 in this case) is not part of the resulting list.\n",
"Something like this?\nstart=1\ncount= 100\npossible = [num for num in range(start,start+count)]\nprint possible\n\n"
] | [
13,
0
] | [] | [] | [
"list",
"python"
] | stackoverflow_0003471151_list_python.txt |
Q:
Python's timedelta: can't I just get in whatever time unit I want the value of the entire difference?
I am trying to have some clever dates since a post has been made on my site ("seconds since, hours since, weeks since, etc.") and I'm using the datetime.timedelta difference between utcnow and the UTC date stored in the database for a post.
Looks like, according to the docs, I have to use the days attribute AND the seconds attribute, to get the fancy date strings I want.
Can't I just get the value of the entire difference in whatever time unit I want? Am I missing something?
It would be perfect if I could just get the entire difference in seconds.
A:
It seems that Python 2.7 has introduced a total_seconds() method, which is what you were looking for, I believe!
A:
You can compute the difference in seconds.
total_seconds = delta.days * 86400 + delta.seconds
No, you're not "missing something". It doesn't provide deltas in seconds.
A:
It would be perfect if I could just get the entire difference in seconds.
Then plain-old-unix-timestamp as provided by the 'time' module may be more to your taste.
I personally have yet to be convinced by a lot of what's in 'datetime'.
A:
Like bobince said, you could use timestamps, like this:
# assuming ts1 and ts2 are the two datetime objects
from time import mktime
mktime(ts1.timetuple()) - mktime(ts2.timetuple())
Although I would think this is even uglier than just calculating the seconds from the timedelta object...
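For reference, the 2.7 total_seconds() method and the manual days/seconds computation mentioned above agree on the same delta (a small sketch; the manual form ignores microseconds):

```python
from datetime import timedelta

delta = timedelta(days=2, hours=3, seconds=30)

# Python 2.7+ / 3.x: built-in
print(delta.total_seconds())           # 183630.0

# Manual equivalent usable on 2.5/2.6 (drops microseconds)
manual = delta.days * 86400 + delta.seconds
print(manual)                          # 183630
```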
| Python's timedelta: can't I just get in whatever time unit I want the value of the entire difference? | I am trying to have some clever dates since a post has been made on my site ("seconds since, hours since, weeks since, etc.") and I'm using the datetime.timedelta difference between utcnow and the UTC date stored in the database for a post.
Looks like, according to the docs, I have to use the days attribute AND the seconds attribute, to get the fancy date strings I want.
Can't I just get the value of the entire difference in whatever time unit I want? Am I missing something?
It would be perfect if I could just get the entire difference in seconds.
| [
"It seems that Python 2.7 has introduced a total_seconds() method, which is what you were looking for, I believe!\n",
"You can compute the difference in seconds.\ntotal_seconds = delta.days * 86400 + delta.seconds\n\nNo, you're not \"missing something\". It doesn't provide deltas in seconds. \n",
"\nIt would b... | [
21,
15,
5,
5
] | [] | [] | [
"datetime",
"python",
"timedelta"
] | stackoverflow_0000500168_datetime_python_timedelta.txt |
Q:
how to get a descendant from parent entity
I can't seem to find a quick answer on how to get Datastore descendants given a reference to the parent entity. Here's a quick example:
# a person who has pets
john=Person(**kwargs)
# pets
fluffy=Pet(parent=john, ...)
rover=Pet(parent=john, ...)
# lengthy details about john that are accessed infrequently
facts=Details(parent=john, ...)
How can I get the keys for fluffy, rover and facts using john or the key for john?
With john (or key) can I get key/entity for just facts (not rover, fluffy)?
A:
You want to use an ancestor filter:
kid_keys = db.Query(keys_only=True).ancestor(john).fetch(1000)
And you can get just facts by specifying the type of facts:
facts_key = db.Query(Details, keys_only=True).ancestor(john).get()
Using get() instead of fetch() assumes that john will have only one Details child. If that is not true, call fetch to get them all.
A:
Expanding on Adam's answer: If you don't know the key name or key ID for fluffy, rover or facts, you can't construct the keys ahead of time based just on the entity or key for john. You'll have to use ancestor queries, as in Adam's answer.
If, on the other hand, you do know the name or ID for those child entities (say, in a request handler where the ID comes in as part of the URL or something), you can construct their keys based on the john entity or its key. E.g.:
# Build a key for the parent
john_key = db.Key.from_path('Person', 1)
# Build a key for the children
fluffy_key = db.Key.from_path('Pet', 1, parent=john_key)
rover_key = db.Key.from_path('Pet', 2, parent=john_key)
# Get all three at once
john, fluffy, rover = db.get([john_key, fluffy_key, rover_key])
These examples assume that all the entity keys have IDs, but if they have names you can just substitute the string names in place of the int IDs.
| how to get a descendant from parent entity | I can't seem to find a quick answer on how to get Datastore descendants given a reference to the parent entity. Here's a quick example:
# a person who has pets
john=Person(**kwargs)
# pets
fluffy=Pet(parent=john, ...)
rover=Pet(parent=john, ...)
# lengthy details about john that are accessed infrequently
facts=Details(parent=john, ...)
How can I get the keys for fluffy, rover and facts using john or the key for john?
With john (or key) can I get key/entity for just facts (not rover, fluffy)?
| [
"You want to use an ancestor filter:\nkid_keys = db.Query(keys_only=True).ancestor(john).fetch(1000)\n\nAnd you can get just facts by specifying the type of facts:\nfacts_key = db.Query(Details, keys_only=True).ancestor(john).get()\n\nUsing get() instead of fetch() assumes that john will have only one Details child... | [
1,
0
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0003471054_google_app_engine_python.txt |
Q:
Be my human compiler: What is wrong with this Python 2.5 code?
My framework is raising a syntax error when I try to execute this code:
from django.template import Template, TemplateSyntaxError
try:
Template(value)
except TemplateSyntaxError as error:
raise forms.ValidationError(error)
return value
And here's the error:
from template_field import TemplateTextField, TemplateCharField
File "C:\django\internal\..\internal\cmsplugins\form_designer\template_field.py", line 14
except TemplateSyntaxError as error:
^
SyntaxError: invalid syntax
What's going on?
A:
The alternate syntax except SomeException as err is new in 2.6. You should use except SomeException, err in 2.5.
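If you need a single spelling that runs on both 2.5 and later versions, one option is to leave the exception unnamed in the except clause and fetch it from sys.exc_info() instead (a portability sketch, not the idiomatic form for any one version):

```python
import sys

try:
    int("not a number")
except ValueError:
    # sys.exc_info()[1] is the exception instance currently being handled;
    # this spelling is valid on Python 2.5, 2.6+ and 3.x alike
    error = sys.exc_info()[1]
    print(error)
```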
A:
You can't have an empty try block like that in Python. If you just want to do nothing in the block (for prototyping code, say), use the pass keyword:
from django.template import Template, TemplateSyntaxError
try:
pass
except TemplateSyntaxError as error:
Template(value)
raise forms.ValidationError(error)
return value
Edit: This answers the original version of the question. I'll leave it up for posterity, but the question has now been edited, and @jleedev has the correct answer to the revised question.
A:
You can't try nothing. If you really have nothing to try, use the pass keyword:
try:
pass
except TemplateSyntaxError as error:
Template(value)
raise forms.ValidationError(error)
return value
But based on my (limited) knowledge of Django, I'd guess you want something like this instead:
try:
return Template(value)
except TemplateSyntaxError as error:
raise forms.ValidationError(error)
A:
You've got nothing inside your try block.
A try/except block looks like:
try:
do_something()
except SomeException as err:
handle_exception()
A:
Every block in Python must contain at least one statement; if you don't want to do anything, use the pass statement!
| Be my human compiler: What is wrong with this Python 2.5 code? | My framework is raising a syntax error when I try to execute this code:
from django.template import Template, TemplateSyntaxError
try:
Template(value)
except TemplateSyntaxError as error:
raise forms.ValidationError(error)
return value
And here's the error:
from template_field import TemplateTextField, TemplateCharField
File "C:\django\internal\..\internal\cmsplugins\form_designer\template_field.py", line 14
except TemplateSyntaxError as error:
^
SyntaxError: invalid syntax
What's going on?
| [
"The alternate syntax except SomeException as err is new in 2.6. You should use except SomeException, err in 2.5.\n",
"You can't have an empty try block like that in Python. If you just want to do nothing in the block (for prototyping code, say), use the pass keyword:\nfrom django.template import Template, Templa... | [
17,
6,
4,
3,
1
] | [] | [] | [
"django",
"python",
"syntax"
] | stackoverflow_0003471295_django_python_syntax.txt |
Q:
How to pass object_id to generic view object_detail in Django
I'm using django.views.generic.list_detail.object_detail.
According to the documentation the view takes the variable object_id. To do this I added the following to my urlconf:
(r'^(?P<object_id>\d+)$', list_detail.object_detail, article_info),
The above line is in a separate urlconf that is included in the main urlconf.
If I leave the '^' character at the beginning of the pattern and then attempt to go to the address:
.../?object_id=1
It does not work. If I remove the '^' character the address:
.../?object_id=1
Still does not work. However if I use:
.../object_id=1 (without the question mark)
The view accepts the object_id variable and works without a problem. I have two questions about this.
First: Can the '^' character in an included urlconf be used to restrict the pattern to only match the base url pattern plus the exact string bettween a ^$ in the included urlconf?
Second: Why does the question mark character stop the view from receiving the 'object_id' variable? I thought the '?' was used to designate GET variables in a URL.
thanks
A:
I'll tackle your second question first. The ? character in this context is used to denote a named group in the regular expression. This is a custom extension to regular expressions provided by Python. (See the howto for examples)
To pass an object_id append it to the URL (in your case). Like this: ../foo/app/3 where 3 is the object_id.
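A quick demonstration of the named-group capture, independent of Django (the pattern is the same one used in the urlconf above):

```python
import re

# (?P<name>...) captures the matched text under the given name
match = re.match(r'^(?P<object_id>\d+)$', '123')
print(match.group('object_id'))   # prints 123
print(match.groupdict())          # prints {'object_id': '123'}
```

Django does exactly this with your URL pattern, then passes the named captures as keyword arguments to the view.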
A:
That urlconf tells Django to map URLs of the kind .../1, .../123 to the given view (... being the prefix of that urlconf). (?P<object_id>\d+) tells django to assign the value captured by \d+ to the variable object_id. See also Python's documentation on regular expressions and django's documentation on its URL dispatcher.
A:
First, the "r" before a string makes it a raw string literal, which is commonly used for regular expressions; ^ matches the start of the string and $ matches the end. In Python, when you write (?P<something>a_regular_expression), the regex engine captures the text matched by a_regular_expression and makes it available under the name something. Here \d+ matches a number, and Django passes that number to the view you assigned there (list_detail.object_detail) under the name object_id.
Second, you shouldn't worry about GET parameters in your URL patterns: you just define the base URL and Django will handle the GET variables itself. For example, if you have (r'^post/$', 'my_app.views.show_post') in your URL patterns and you send the GET request ../post/?id=10, Django will use your my_app.views.show_post function, and you can access the GET variables through request.GET; to get id you would use request.GET['id'].
| How to pass object_id to generic view object_detail in Django | I'm using django.views.generic.list_detail.object_detail.
According to the documentation the view takes the variable object_id. To do this I added the following to my urlconf:
(r'^(?P<object_id>\d+)$', list_detail.object_detail, article_info),
The above line is in a separate urlconf that is included in the main urlconf.
If I leave the '^' character at the beginning of the pattern and then attempt to go to the address:
.../?object_id=1
It does not work. If I remove the '^' character the address:
.../?object_id=1
Still does not work. However if I use:
.../object_id=1 (without the question mark)
The view accepts the object_id variable and works without a problem. I have two questions about this.
First: Can the '^' character in an included urlconf be used to restrict the pattern to only match the base url pattern plus the exact string bettween a ^$ in the included urlconf?
Second: Why does the question mark character stop the view from receiving the 'object_id' variable? I thought the '?' was used to designate GET variables in a URL.
thanks
| [
"I'll tackle your second question first. The ? character in this context is used to denote a named group in the regular expression. This is a custom extension to regular expressions provided by Python. (See the howto for examples)\nTo pass an object_id append it to the URL (in your case). Like this: ../foo/app/3 wh... | [
3,
2,
1
] | [] | [] | [
"django",
"django_urls",
"python",
"regex"
] | stackoverflow_0003467121_django_django_urls_python_regex.txt |
Q:
Python lazy dictionary evaluation
Python evangelists will say the reason Python doesn't have a switch statement is because it has dictionaries. So... how can I use a dictionary to solve this problem here?
The problem is that all the values are evaluated eagerly, and some raise exceptions depending on the input.
This is just a dumb example of a class that stores a number or a list of numbers and provides multiplication.
class MyClass(object):
def __init__(self, value):
self._value = value
def __mul__(self, other):
return {
(False, False): self._value * other._value ,
(False, True ): [self._value * o for o in other._value] ,
(True , False): [v * other._value for v in self._value] ,
(True , True ): [v * o for v, o in zip(self._value, other._value)],
}[(isinstance(self._value, (tuple, list)), isinstance(other._value, (tuple, list)))]
def __str__(self):
return repr(self._value)
__repr__ = __str__
>>> x = MyClass(2.0)
>>> y = MyClass([3.0, 4.0, 5.0])
>>> print x
2.0
>>> print y
[3.0, 4.0, 5.0]
>>> print x * y
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 8, in __mul__
TypeError: can't multiply sequence by non-int of type 'float'
One way that I could solve it would be to prefix each value with "lambda : " and at after the dictionary lookup call the lambda function .... "}(isinsta ...)"
Is there a better way?
A:
Yes, define small lambdas for these different options:
def __mul__(self, other):
scalar_times_scalar = lambda x,y: x*y
scalar_times_seq = lambda x,y: [x*y_i for y_i in y]
seq_times_scalar = lambda x,y: scalar_times_seq(y,x)
seq_times_seq = lambda x,y: [x_i*y_i for x_i,y_i in zip(x,y)]
self_is_seq, other_is_seq = (isinstance(ob._value,(tuple, list))
for ob in (self, other))
fn = {
(False, False): scalar_times_scalar,
(False, True ): scalar_times_seq,
(True , False): seq_times_scalar,
(True , True ): seq_times_seq,
}[(self_is_seq, other_is_seq)]
return fn(self._value, other._value)
Ideally, of course, you would define these lambdas only once at class or module scope. I've just shown them in the __mul__ method here for ease of reference.
A:
I can think of two approaches here:
Some if statements. For just four combinations of True and False, it's not that bad. Sequences of if ... elif ... elif ... clauses are, from what I've seen, not uncommon in Python code.
Creating the dict once (as a class field, rather than an instance field), and storing (lambda) functions inside it. This scales better than the previous approach and is faster for many options (although I don't know the value of "many").
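A sketch of the second approach, with the dispatch dict built once at class scope (the class name Multiplier and the table name _ops are illustrative, not from the question):

```python
class Multiplier(object):
    # Dispatch table created once when the class is defined, keyed by
    # whether (self, other) hold sequences; values are plain lambdas
    _ops = {
        (False, False): lambda x, y: x * y,
        (False, True):  lambda x, y: [x * yi for yi in y],
        (True,  False): lambda x, y: [xi * y for xi in x],
        (True,  True):  lambda x, y: [xi * yi for xi, yi in zip(x, y)],
    }

    def __init__(self, value):
        self._value = value

    def __mul__(self, other):
        key = (isinstance(self._value, (tuple, list)),
               isinstance(other._value, (tuple, list)))
        # Looking the lambda up in the dict defers evaluation: only the
        # selected branch ever runs, so no spurious TypeError is raised
        return self._ops[key](self._value, other._value)

print(Multiplier(2.0) * Multiplier([3.0, 4.0, 5.0]))   # [6.0, 8.0, 10.0]
```

Because the lambdas live in a dict, attribute lookup does not turn them into bound methods, and the table is shared by all instances instead of being rebuilt on every call.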
A:
I think the main point here is readability.
A dictionary lookup like the one you showed is definitely difficult to read, and therefore to maintain.
In my opinion, the main goal while writing software should be readability; for this reason, I would go for a set of if/elif branches explicitly comparing the two values (instead of having the mapping for the types); then, if measurements show performance concerns, other solutions (like a dictionary lookup with functions) could be explored.
| Python lazy dictionary evaluation | Python evangelists will say the reason Python doesn't have a switch statement is because it has dictionaries. So... how can I use a dictionary to solve this problem here?
The problem is that all the values are evaluated eagerly, and some raise exceptions depending on the input.
This is just a dumb example of a class that stores a number or a list of numbers and provides multiplication.
class MyClass(object):
def __init__(self, value):
self._value = value
def __mul__(self, other):
return {
(False, False): self._value * other._value ,
(False, True ): [self._value * o for o in other._value] ,
(True , False): [v * other._value for v in self._value] ,
(True , True ): [v * o for v, o in zip(self._value, other._value)],
}[(isinstance(self._value, (tuple, list)), isinstance(other._value, (tuple, list)))]
def __str__(self):
return repr(self._value)
__repr__ = __str__
>>> x = MyClass(2.0)
>>> y = MyClass([3.0, 4.0, 5.0])
>>> print x
2.0
>>> print y
[3.0, 4.0, 5.0]
>>> print x * y
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 8, in __mul__
TypeError: can't multiply sequence by non-int of type 'float'
One way that I could solve it would be to prefix each value with "lambda : " and at after the dictionary lookup call the lambda function .... "}(isinsta ...)"
Is there a better way?
| [
"Yes, define small lambdas for these different options:\n def __mul__(self, other): \n scalar_times_scalar = lambda x,y: x*y\n scalar_times_seq = lambda x,y: [x*y_i for y_i in y]\n seq_times_scalar = lambda x,y: scalar_times_seq(y,x)\n seq_times_seq = lambda x,y: [x_i*y_i ... | [
4,
1,
1
] | [] | [] | [
"dictionary",
"lazy_evaluation",
"python",
"switch_statement"
] | stackoverflow_0003471024_dictionary_lazy_evaluation_python_switch_statement.txt |
Q:
Create x lists in python dynamically
(First, I chose to do this in Python because I never programmed in it and it would be good practice.)
Someone asked me to implement a little "combination" program that basically outputs all possible combinations of a set of group of numbers. Example, if you have:
(1,2,3) as the first set,
(4,5,6) as the second, and
(7,8,9) as the third, then one combination would be (1,4,7) and so on, with a total of 27 possible combinations.
This person just wants to do a 6rows x 6cols matrix or a 5rows x 6cols matrix. However, I want to make my little program as flexible as possible.
The next requirement is to only output combinations with X even numbers. If he wants 0 even numbers, then a possible combination would be (1,5,7). You get the idea.
For the permutation part, I used itertools.product(), which works perfectly.
It would be easy if I just assume that the number of numbers in each set (cols) is fixed as 6.
In that case, I could manually create 6 lists and append each combination to the right list.
However and again, I want this to work with N number of cols.
I'm thinking of 2 ways I might be able to do this, but tried with no luck.
So my question is:
How can I create the following lists dynamically?
li_1 = []
li_2 = []
...
li_x = []
Here is one way I tried, using "lists of lists":
for combination in itertools.product(*li):
total_combinations = total_combinations + 1
#Counts number of even numbers in a single combination
for x in range(numberInRows):
if combination[x] % 2 == 0:
even_counter = even_counter + 1
print "Even counter:",even_counter
num_evens[even_counter].append(combination)
print "Single set:",num_evens
even_counter = 0
print combination
print "Num_evens:",num_evens
print '\nTotal combinations:', total_combinations
A:
Ls = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
import collections
import itertools
def products_by_even_count(seq):
ret = collections.defaultdict(set)
for p in itertools.product(*seq):
n_even = sum(1 for n in p if n % 2 == 0)
ret[n_even].add(p)
return ret
import pprint
# Calling dict() is only necessary for pretty pprint output.
pprint.pprint(dict(products_by_even_count(Ls)))
Output:
{0: set([(1, 5, 7), (1, 5, 9), (3, 5, 7), (3, 5, 9)]),
1: set([(1, 4, 7),
(1, 4, 9),
(1, 5, 8),
(1, 6, 7),
(1, 6, 9),
(2, 5, 7),
(2, 5, 9),
(3, 4, 7),
(3, 4, 9),
(3, 5, 8),
(3, 6, 7),
(3, 6, 9)]),
2: set([(1, 4, 8),
(1, 6, 8),
(2, 4, 7),
(2, 4, 9),
(2, 5, 8),
(2, 6, 7),
(2, 6, 9),
(3, 4, 8),
(3, 6, 8)]),
3: set([(2, 4, 8), (2, 6, 8)])}
A:
num_evens = {}
for combination in itertools.product(*li):
even_counter = len([ y for y in combination if y & 1 == 0 ])
num_evens.setdefault(even_counter,[]).append(combination)
import pprint
pprint.pprint(num_evens)
A:
from itertools import product
from collections import defaultdict
num_evens = defaultdict(list)
for comb in product(*li):
num_evens[sum(y%2==0 for y in comb)].append(comb)
import pprint
pprint.pprint(num_evens)
| Create x lists in python dynamically | (First, I chose to do this in Python because I never programmed in it and it would be good practice.)
Someone asked me to implement a little "combination" program that basically outputs all possible combinations of a set of group of numbers. Example, if you have:
(1,2,3) as the first set,
(4,5,6) as the second, and
(7,8,9) as the third, then one combination would be (1,4,7) and so on, with a total of 27 possible combinations.
This person just wants to do a 6rows x 6cols matrix or a 5rows x 6cols matrix. However, I want to make my little program as flexible as possible.
The next requirement is to only output combinations with X even numbers. If he wants 0 even numbers, then a possible combination would be (1,5,7). You get the idea.
For the permutation part, I used itertools.product(), which works perfectly.
It would be easy if I just assume that the number of numbers in each set (cols) is fixed as 6.
In that case, I could manually create 6 lists and append each combination to the right list.
However and again, I want this to work with N number of cols.
I'm thinking of two ways I might be able to do this, but I have tried them with no luck.
So my question is:
How can I create the following lists dynamically?
li_1 = []
li_2 = []
...
li_x = []
One way I tried was using "lists of lists":
for combination in itertools.product(*li):
total_combinations = total_combinations + 1
#Counts number of even numbers in a single combination
for x in range(numberInRows):
if combination[x] % 2 == 0:
even_counter = even_counter + 1
print "Even counter:",even_counter
num_evens[even_counter].append(combination)
print "Single set:",num_evens
even_counter = 0
print combination
print "Num_evens:",num_evens
print '\nTotal combinations:', total_combinations
| [
"Ls = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]\nimport collections\nimport itertools\n\ndef products_by_even_count(seq):\n ret = collections.defaultdict(set)\n for p in itertools.product(*seq):\n n_even = sum(1 for n in p if n % 2 == 0)\n ret[n_even].add(p)\n return ret\n\nimport pprint\n# Calling d... | [
1,
1,
1
] | [] | [] | [
"list",
"python"
] | stackoverflow_0003472048_list_python.txt |
Q:
Number Guesser: Please Review and Make it More Pythonic
I'm working on learning Python; here is a simple program I wrote:
def guesser(var, num1,possible):
if var == 'n':
cutoff = len(possible)/2
possible = possible[0:cutoff]
cutoff = possible[len(possible)/2]
#print possible
if (len(possible) == 1):
print "Your Number is:", possible
else:
var = raw_input("Is Your Number Bigger Than %s? (y/n): " %cutoff)
guesser(var, cutoff,possible)
elif var == 'y':
cutoff = len(possible)/2
possible = possible[cutoff:len(possible)]
cutoff = possible[len(possible)/2]
#print possible
#print cutoff
if (len(possible) == 1):
print "Your Number is:", possible
else:
var = raw_input("Is Your Number Bigger Than %s? (y/n): " %cutoff)
guesser(var, cutoff,possible)
else:
var = raw_input("Is Your Number Bigger Than 50? (y/n): ")
guesser(var, 50, possible)
possible = []
possible = range(1,101)
guesser('a', 50, possible)
A:
Normally I would try to help fix your code, but you have made it so much more complicated than necessary that I think it would be easier for you to look at some working code.
def guesser( bounds ):
a, b = bounds
mid = ( a + b ) // 2
if a == b: return a
if input( "over {0}? ".format( mid ) ) == "y":
new_bounds = ( mid, b )
else:
new_bounds = ( a, mid )
return guesser( new_bounds )
You should think about how your algorithm will work in abstract terms before diving in.
EDIT: Simplified the code at the expense of brevity.
A:
Before making it more Pythonic I would first make it simpler... the algorithm is much more complex than necessary. There is no need for a list when two ints are enough.
def guesser(low = 0, up = 100):
print("Choose a number between %d and %d" % (low, up-1))
while low < up - 1:
mid = (low+up)//2
yn = raw_input("Is Your Number Smaller Than %s? (y/n): " % mid)
if yn not in ['y', 'n']: continue
low, up = (low, mid) if yn == 'y' else (mid, up)
print "Your Number is:", low
guesser()
A:
More pythonic to use the bisect module - and a class of course :)
import bisect
hival= 50
class Guesser(list):
def __getitem__(self, idx):
return 0 if raw_input("Is your number bigger than %s? (y/n)"%idx)=='y' else hival
g=Guesser()
print "Think of a number between 0 and %s"%hival
print "Your number is: %s"%bisect.bisect(g,0,hi=hival)
Here is the definition of bisect.bisect from the python library. As you can see, most of the algorithm is implemented here for you
def bisect_right(a, x, lo=0, hi=None):
"""Return the index where to insert item x in list a, assuming a is sorted.
The return value i is such that all e in a[:i] have e <= x, and all e in
a[i:] have e > x. So if x already appears in the list, a.insert(x) will
insert just after the rightmost x already there.
Optional args lo (default 0) and hi (default len(a)) bound the
slice of a to be searched.
"""
if lo < 0:
raise ValueError('lo must be non-negative')
if hi is None:
hi = len(a)
while lo < hi:
mid = (lo+hi)//2
if x < a[mid]: hi = mid
else: lo = mid+1
return lo
bisect = bisect_right # backward compatibility
A:
This is not as elegant as katrielalex's recursion, but it illustrates a basic class.
class guesser:
def __init__(self, l_bound, u_bound):
self.u_bound = u_bound
self.l_bound = l_bound
self.nextguess()
def nextguess(self):
self.guess = int((self.u_bound + self.l_bound)/2)
print 'Higher or lower than %i?' % self.guess
def mynumberishigher(self):
self.l_bound = self.guess
self.nextguess()
def mynumberislower(self):
self.u_bound = self.guess
self.nextguess()
| Number Guesser: Please Review and Make it More Pythonic | I'm working on learning Python; here is a simple program I wrote:
def guesser(var, num1,possible):
if var == 'n':
cutoff = len(possible)/2
possible = possible[0:cutoff]
cutoff = possible[len(possible)/2]
#print possible
if (len(possible) == 1):
print "Your Number is:", possible
else:
var = raw_input("Is Your Number Bigger Than %s? (y/n): " %cutoff)
guesser(var, cutoff,possible)
elif var == 'y':
cutoff = len(possible)/2
possible = possible[cutoff:len(possible)]
cutoff = possible[len(possible)/2]
#print possible
#print cutoff
if (len(possible) == 1):
print "Your Number is:", possible
else:
var = raw_input("Is Your Number Bigger Than %s? (y/n): " %cutoff)
guesser(var, cutoff,possible)
else:
var = raw_input("Is Your Number Bigger Than 50? (y/n): ")
guesser(var, 50, possible)
possible = []
possible = range(1,101)
guesser('a', 50, possible)
| [
"Normally I would try to help with your code, but you have made it so way much too complicated that I think it would be easier for you to look at some code.\ndef guesser( bounds ):\n a, b = bounds\n mid = ( a + b ) // 2\n\n if a == b: return a\n\n if input( \"over {0}? \".format( mid ) ) == \"y\":\n ... | [
4,
2,
2,
0
] | [] | [] | [
"python"
] | stackoverflow_0003472124_python.txt |
Q:
What are the differences between the two Python 2.7 Mac OS X disk image installers?
Python 2.7 has two different disk image installers for Mac OS X. My questions are:
What are the differences between the two Python 2.7 disk image installers?
Python 2.7 32-bit Mac OS X Installer Disk Image for Mac OS X 10.3 through 10.6
Python 2.7 PPC/i386/x86-64 Mac OS X Installer Disk Image for Mac OS X 10.5 or later
If running Mac OS X 10.6 Snow Leopard without the 64-bit kernel and extensions, which is the more appropriate version of Python 2.7 to install?
Why are there two different Mac OS X disk image installers for Python 2.7 when Python 2.6.5 and Python 3.2 each only have one?
Does the first listed installer support PPC? Strange that it wouldn't if it supports back to Mac OS X 10.3, but unlike the second installer, PPC isn't listed.
A:
As others have pointed out, the second (64-bit) installer variant is new on python.org starting with 2.7 and future releases of 2.7 and 3.2 will have both 32-bit-only and a 32-/64-bit variants. The newer variant is an attempt to add out-of-the-box support from python.org for Intel 64-bit (x86_64) processes which is the default for new applications in OS X 10.6.
However, the python.org installer goes a bit further and tries to support x86_64 on OS X 10.5 as well and that has caused some serious problems. In particular, the installer was linked with Tk 8.4 for which Apple does not supply a native 64-bit version on either 10.5 or 10.6. This means that IDLE and any other Python program that uses Tkinter fails on 10.6 in the default 64-bit mode (and for various reasons it is not straightforward to run IDLE in 32-bit mode on 10.6). And, of course, they will fail on 10.5 if 64-bit mode is forced. Apple does supply a 64-bit version of Tk 8.5 but only on OS X 10.6. For this and other reasons, the current plan is to change the 32-bit/64-bit variant in future releases to only support 10.6 or higher and only include 32-bit (i386) and 64-bit (x86_64) support, no PPC.
So if you anticipate needing IDLE or Tkinter on 10.6, you should consider sticking to the traditional 32-bit-only 2.7 installer for now until a newer 10.6-only installer is available (which might not be until the next maintenance release of 2.7).
As to question 4, at the moment, both installers support PPC 32-bit: the first on 10.3 through 10.6, the second on 10.5 & 10.6. But the second will disappear in the future. And, although OS X 10.6 will not boot on PPC machines, it is possible to run Python (and most other programs) in PPC mode if the Rosetta emulation package is installed in OS X.
A:
Looks like all the other versions only have a 32 bit port? So a "new feature" of 2.7 is a 64 bit port. If you aren't running a 64 bit OS and don't need programs that can use > 4 GB of ram, you can stick with the 32 bit.
A:
1) You almost certainly want "Python 2.7 PPC/i386/x86-64 Mac OS X Installer Disk Image". It's also a close analogue of the 2.6.x version that comes with 10.6 by default.
2) Unless you know you need 32-bit versions for some reason, default to 64-bit for everything on Snow Leopard. It's what will most closely match the rest of the apps/libraries/userland. The kernel is irrelevant in this regard. The 32-bit OS X kernel can and will still run 64-bit userland.
3) 64-bit versions weren't available before 10.6.
A:
Python Issue 7473 appears to shed light on why there are two installers and the differences.
| What are the differences between the two Python 2.7 Mac OS X disk image installers? | Python 2.7 has two different disk image installers for Mac OS X. My questions are:
What are the differences between the two Python 2.7 disk image installers?
Python 2.7 32-bit Mac OS X Installer Disk Image for Mac OS X 10.3 through 10.6
Python 2.7 PPC/i386/x86-64 Mac OS X Installer Disk Image for Mac OS X 10.5 or later
If running Mac OS X 10.6 Snow Leopard without the 64-bit kernel and extensions, which is the more appropriate version of Python 2.7 to install?
Why are there two different Mac OS X disk image installers for Python 2.7 when Python 2.6.5 and Python 3.2 each only have one?
Does the first listed installer support PPC? Strange that it wouldn't if it supports back to Mac OS X 10.3, but unlike the second installer, PPC isn't listed.
| [
"As others have pointed out, the second (64-bit) installer variant is new on python.org starting with 2.7 and future releases of 2.7 and 3.2 will have both 32-bit-only and a 32-/64-bit variants. The newer variant is an attempt to add out-of-the-box support from python.org for Intel 64-bit (x86_64) processes which ... | [
3,
0,
0,
0
] | [] | [] | [
"diskimage",
"installation",
"python"
] | stackoverflow_0003472349_diskimage_installation_python.txt |
Q:
How to get stdout into a string (Python)
I need to capture the stdout of a process I execute via subprocess into a string to then put it inside a TextCtrl of a wx application I'm creating. How do I do that?
EDIT: I'd also like to know how to determine when a process terminates
A:
From the subprocess documentation:
from subprocess import *
output = Popen(["mycmd", "myarg"], stdout=PIPE).communicate()[0]
A:
Take a look at the subprocess module.
http://docs.python.org/library/subprocess.html
It allows you to do a lot of the same input and output redirection that you can do in the shell.
If you're trying to redirect the stdout of the currently executing script, that's just a matter of getting a hold of the correct file handle. Off the top of my head, stdin is 0, stdout is 1, and stderr is 2, but double check. I could be wrong on that point.
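To address the EDIT as well: communicate() blocks until the process terminates, after which returncode holds the exit status (poll() lets you check without blocking). A minimal sketch — the child command here is just an arbitrary example, using sys.executable to keep it portable:

```python
import subprocess
import sys

# Run a small child process that writes to stdout.
proc = subprocess.Popen([sys.executable, "-c", "print('hello')"],
                        stdout=subprocess.PIPE)

# communicate() reads all of stdout and waits for the process to exit.
output = proc.communicate()[0]

# While the process is still running, proc.poll() returns None; after
# it terminates, proc.returncode is set to its exit status.
print(output.strip())
print(proc.returncode)
```

The captured output is what you would then hand to your TextCtrl.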
| How to get stdout into a string (Python) | I need to capture the stdout of a process I execute via subprocess into a string to then put it inside a TextCtrl of a wx application I'm creating. How do I do that?
EDIT: I'd also like to know how to determine when a process terminates
| [
"From the subprocess documentation:\nfrom subprocess import *\noutput = Popen([\"mycmd\", \"myarg\"], stdout=PIPE).communicate()[0]\n\n",
"Take a look at the subprocess module.\nhttp://docs.python.org/library/subprocess.html\nIt allows you to do a lot of the same input and output redirection that you can do in th... | [
10,
2
] | [] | [] | [
"python",
"stdout"
] | stackoverflow_0003472760_python_stdout.txt |
Q:
python generate SQL statement keywords by given lists
I have the variables below and am trying to auto-generate SQL statements in Python:
_SQL_fields = ("ID", "NAME", "BIRTH", "SEX", "AGE")
_add_SQL_desc_type = ("PRIMARY KEY AUTOINCREASEMENT",)
_SQL_type = ("INT",'TEXT')
_Value = (1,'TOM',19700101,'M',40)
bop = ["AND"] * (len(_SQL_fields) - 1)
mop = ["="] * 5
comma = [","] * 4
exception_field = ["NAME", "BIRTH"]
To successfully build a SQL statement I need the following strings, generated from the variables above:
'(ID, NAME, BIRTH, SEX, AGE)'
'(ID PRIMARY KEY AUTOINCREASEMENT INT, NAME TEXT, BIRTH, SEX, AGE)'
'ID = 1 AND NAME = "TOM" AND BIRTH = "19700101" AND SEX = "M" AND AGE = '40''
and with the exception_field entries removed, the strings would look like
'(ID, SEX, AGE)'
'(ID PRIMARY KEY AUTOINCREASEMENT INT, SEX, AGE)'
'ID = 1 AND SEX = "M" AND AGE = '40''
==========
so far I use something like
str(zip(_SQL_fields,_add_SQL_desc_type,_SQL_type,comma))
or using
", ".join(_SQL_fields)
to get one.
but it gets messy, especially when handling the commas and quotes.
'(ID, NAME, BIRTH, SEX, AGE)' <===== this is OK for SQL
'(ID, NAME, BIRTH, SEX, AGE,)' <===== this is NOT
and this
'ID = 1 AND NAME = "TOM" AND BIRTH = "19700101" AND SEX = "M" AND AGE = '40'
    ^- some values need quotes and some do not.
So can anyone show me how to write clean code to generate the strings above?
Thanks for your time!
KC
A:
To obtain
'(ID PRIMARY KEY AUTOINCREASEMENT INT, NAME TEXT, BIRTH, SEX, AGE)'
you can use izip_longest from the itertools module.
Then, to remove the exception_field entries from the resulting list, you can use a set or a list comprehension.
l1 = [1, 2, 3, 4]
l2 = [2, 3]
l3 = list(set(l1) - set(l2))
print l3
--> [1, 4]
l3 = [val for val in l1 if val not in l2]
print l3
--> [1, 4]
I recommend the list comprehension.
", ".join(_SQL_fields)
is perfect to obtain
'ID, NAME, BIRTH, SEX, AGE'
so I don't really understand your problem.
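One way to sidestep the comma and quoting problems entirely is to build only the field names and placeholders yourself, and let the database driver quote the values. A sketch using sqlite3-style ? placeholders (the table name student is made up here):

```python
fields = ("ID", "NAME", "BIRTH", "SEX", "AGE")
values = (1, "TOM", 19700101, "M", 40)
exception_field = ("NAME", "BIRTH")

# Keep only the (field, value) pairs that are not excluded.
pairs = [(f, v) for f, v in zip(fields, values) if f not in exception_field]

# ", ".join() over N items yields N-1 commas, so no trailing comma appears.
column_list = "(%s)" % ", ".join(f for f, _ in pairs)

# '?' placeholders mean the driver handles quoting, not your string code.
where = " AND ".join("%s = ?" % f for f, _ in pairs)
params = tuple(v for _, v in pairs)

sql = "SELECT * FROM student WHERE " + where
print(column_list)  # (ID, SEX, AGE)
print(sql)          # SELECT * FROM student WHERE ID = ? AND SEX = ? AND AGE = ?
# cursor.execute(sql, params) would then fill in 1, 'M', 40 safely.
```

This also protects against SQL injection, which manual quoting does not.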
| python generate SQL statement keywords by given lists | I have the variables below and am trying to auto-generate SQL statements in Python:
_SQL_fields = ("ID", "NAME", "BIRTH", "SEX", "AGE")
_add_SQL_desc_type = ("PRIMARY KEY AUTOINCREASEMENT",)
_SQL_type = ("INT",'TEXT')
_Value = (1,'TOM',19700101,'M',40)
bop = ["AND"] * (len(_SQL_fields) - 1)
mop = ["="] * 5
comma = [","] * 4
exception_field = ["NAME", "BIRTH"]
To successfully build a SQL statement I need the following strings, generated from the variables above:
'(ID, NAME, BIRTH, SEX, AGE)'
'(ID PRIMARY KEY AUTOINCREASEMENT INT, NAME TEXT, BIRTH, SEX, AGE)'
'ID = 1 AND NAME = "TOM" AND BIRTH = "19700101" AND SEX = "M" AND AGE = '40''
and with the exception_field entries removed, the strings would look like
'(ID, SEX, AGE)'
'(ID PRIMARY KEY AUTOINCREASEMENT INT, SEX, AGE)'
'ID = 1 AND SEX = "M" AND AGE = '40''
==========
so far I use something like
str(zip(_SQL_fields,_add_SQL_desc_type,_SQL_type,comma))
or using
", ".join(_SQL_fields)
to get one.
but it gets messy, especially when handling the commas and quotes.
'(ID, NAME, BIRTH, SEX, AGE)' <===== this is OK for SQL
'(ID, NAME, BIRTH, SEX, AGE,)' <===== this is NOT
and this
'ID = 1 AND NAME = "TOM" AND BIRTH = "19700101" AND SEX = "M" AND AGE = '40'
    ^- some values need quotes and some do not.
So can anyone show me how to write clean code to generate the strings above?
Thanks for your time!
KC
| [
"To obtain\n'(ID PRIMARY KEY AUTOINCREASEMENT INT, NAME TEXT, BIRTH, SEX, AGE)'\n\nyou can use izip_longest from itertools module in python.\nThen, to remove exception_fields in the obtain list, you can use set or list comprehension. \nl1 = [1, 2, 3, 4]\nl2 = [2, 3]\nl3 = list(set(l1) - set(l2))\nprint l3\n--> [1, ... | [
0
] | [] | [] | [
"keyword",
"list",
"python",
"sql"
] | stackoverflow_0003466617_keyword_list_python_sql.txt |
Q:
PyAMF / Django - Flex class mapping errors
I'm using PyAmf to communicate with a Flex app. But I keep getting errors.
My model:
from django.contrib.auth.models import User
class Talent(User):
street = models.CharField(max_length=100)
street_nr = models.CharField(max_length=100)
postal_code = models.PositiveIntegerField()
city = models.CharField(max_length=100)
description = models.CharField(max_length=100)
My gateway file:
from pyamf.remoting.gateway.django import DjangoGateway
from addestino.bot.services import user
from addestino.bot.models import *
from django.contrib.auth.models import User
import pyamf
pyamf.register_class(User, 'django.contrib.auth.models.User')
pyamf.register_class(Talent, 'addestino.bot.models.Talent')
services = {
'user.register': user.register,
'user.login': user.login,
'user.logout': user.logout,
}
gateway = DjangoGateway(services, expose_request=True)
The Flex Talent object:
package be.addestino.battleoftalents.model
{
[Bindable]
public class Investor
{
public static var ALIAS : String = 'be.addestino.battleoftalents.model.Investor';
public var id:Object;
public var street:String;
public var street_nr:String;
public var postal_code:uint;
public var city:String;
public var cash:Number;
public var date_created:Date;
public var date_modified:Date;
public var username:String;
public var password:String;
public var email:String;
public function Investor()
{
}
}
}
If Flex calls my register servicemethod (a method that sends a flex Investor to python), I get an error 'KeyError: first_name'. Then when we add a first_name field to our Flex VO, we get a last_name error. And so on.
This error means that our flex VO has to have exactly the same fields as our django models. With simple objects this wouldn't be a problem. But we use subclasses of the django User object. And that means our Investor also needs a user_ptr field for example.
Note: I get all errors before the servicemethod.
Is there an easier way? Ideally we would have a Flex Investor VO with only the fields that we use (whether they're from the Django User or our django Investor that extends from User). But right now the Flex objects would have to be modeled EXACTLY after our Django objects. I don't even know exactly what the Django User object looks like (which I shouldn't).
I could really use some help. Thanks a lot in advance :-)
A:
This is done using the IExternalizable interface.
PyAMF Docs
Adobe Docs
It lets you explicitly write and read objects. If this is similar to Java implicit serialization, it's not going to let you limit what is sent by default. I was unable to find any examples of this with PyAMF.
Best post on Serialization I've found.
A:
With the release of PyAMF 0.6b2 I can finally answer this question.
0.5.1 was pretty strict in how it handled inheritance when it came to encoding Django models. It ensured that all properties were guaranteed to be encoded on each object - and expected that all properties were available when decoding the request.
This sucked and the behaviour has now changed with the release of the new version. PyAMF is a lot more forgiving about what you hand it from Flex. You shouldn't get the KeyError error any more (and if you do, its considered a bug).
| PyAMF / Django - Flex class mapping errors | I'm using PyAmf to communicate with a Flex app. But I keep getting errors.
My model:
from django.contrib.auth.models import User
class Talent(User):
street = models.CharField(max_length=100)
street_nr = models.CharField(max_length=100)
postal_code = models.PositiveIntegerField()
city = models.CharField(max_length=100)
description = models.CharField(max_length=100)
My gateway file:
from pyamf.remoting.gateway.django import DjangoGateway
from addestino.bot.services import user
from addestino.bot.models import *
from django.contrib.auth.models import User
import pyamf
pyamf.register_class(User, 'django.contrib.auth.models.User')
pyamf.register_class(Talent, 'addestino.bot.models.Talent')
services = {
'user.register': user.register,
'user.login': user.login,
'user.logout': user.logout,
}
gateway = DjangoGateway(services, expose_request=True)
The Flex Talent object:
package be.addestino.battleoftalents.model
{
[Bindable]
public class Investor
{
public static var ALIAS : String = 'be.addestino.battleoftalents.model.Investor';
public var id:Object;
public var street:String;
public var street_nr:String;
public var postal_code:uint;
public var city:String;
public var cash:Number;
public var date_created:Date;
public var date_modified:Date;
public var username:String;
public var password:String;
public var email:String;
public function Investor()
{
}
}
}
If Flex calls my register servicemethod (a method that sends a flex Investor to python), I get an error 'KeyError: first_name'. Then when we add a first_name field to our Flex VO, we get a last_name error. And so on.
This error means that our flex VO has to have exactly the same fields as our django models. With simple objects this wouldn't be a problem. But we use subclasses of the django User object. And that means our Investor also needs a user_ptr field for example.
Note: I get all errors before the servicemethod.
Is there an easier way? Ideally we would have a Flex Investor VO with only the fields that we use (whether they're from the Django User or our django Investor that extends from User). But right now the Flex objects would have to be modeled EXACTLY after our Django objects. I don't even know exactly what the Django User object looks like (which I shouldn't).
I could really use some help. Thanks a lot in advance :-)
| [
"This is done using the IExternalizable interface.\n\nPyAMF Docs\nAdobe Docs\n\nIt lets you explicitly write and read objects. If this is similar to Java implicit serialization, it's not going to let you limit what is sent by default. I was unable to find any examples of this with PyAMF.\nBest post on Serialization... | [
3,
1
] | [] | [] | [
"apache_flex",
"django",
"pyamf",
"python"
] | stackoverflow_0000856846_apache_flex_django_pyamf_python.txt |
Q:
Popen gives "File not found" Error (windows/python)
I'm trying to run console commands via subprocess.Popen, and whenever I run it I get the windows "File not found" error, even when running the echo command.
I am also using Popen inside a thread made with the thread module. Is that the problem?
A:
Instead of D:\Program Files\Steam\steamapps\terabytest\sourcesdk\bin\orangebox\bin\vbsp.exe, use D:/Program Files/Steam/steamapps/terabytest/sourcesdk/bin/orangebox/bin/vbsp.exe
This eliminates any complications with backslashes inside quotes.
A:
echo is not an executable, it's an internal command inside cmd.exe. If you want to use Popen with internal commands, add a keyword parameter shell=True
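For example — a sketch; with shell=True the command is passed as a single string and the shell resolves the builtin:

```python
import subprocess

# 'echo' is a builtin of the shell (cmd.exe on Windows), not an .exe on
# disk, so Popen can only reach it through shell=True.
proc = subprocess.Popen("echo hello", shell=True, stdout=subprocess.PIPE)
out = proc.communicate()[0]
print(out.strip())
```

For real executables, prefer shell=False with the full path as an argument list.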
| Popen gives "File not found" Error (windows/python) | I'm trying to run console commands via subprocess.Popen, and whenever I run it I get the windows "File not found" error, even when running the echo command.
I am also using Popen inside a thread made with the thread module. Is that the problem?
| [
"Instead ofD:\\Program Files\\Steam\\steamapps\\terabytest\\sourcesdk\\bin\\orangebox\\bin\\vbsp.exe, useD:/Program Files/Steam/steamapps/terabytest/sourcesdk/bin/orangebox/bin/vbsp.exe\nThis eliminates any complications with backslashes inside quotes.\n",
"echo is not an executable, it's an internal command insi... | [
4,
3
] | [] | [] | [
"popen",
"python",
"windows"
] | stackoverflow_0003472862_popen_python_windows.txt |
Q:
Reusable model members in Django
I have a Django model like this:
class Something(models.Model):
title = models.CharField(max_length=200, default=u'')
text = models.CharField(max_length=250, default=u'', blank=True)
photo = models.ImageField(upload_to=u'something')
def photo_thumb(self):
if self.photo:
return u'<img src="%s" />' % (settings.MEDIA_URL + '/thumbs/?h=64&w=80&c=50x0&p=' + self.photo.name)
else:
return u'(no photo)'
photo_thumb.short_description = u'Photo'
photo_thumb.allow_tags = True
photo_thumb.admin_order_field = 'photo'
def __unicode__(self):
return self.title;
class SomethingElse(models.Model):
name = models.CharField(max_length=200, default=u'')
foo = models.CharField(max_length=250, default=u'', blank=True)
photo = models.ImageField(upload_to=u'something_else')
def photo_thumb(self):
if self.photo:
return u'<img src="%s" />' % (settings.MEDIA_URL + '/thumbs/?h=64&w=80&c=50x0&p=' + self.photo.name)
else:
return u'(no photo)'
photo_thumb.short_description = u'Photo'
photo_thumb.allow_tags = True
photo_thumb.admin_order_field = 'photo'
def __unicode__(self):
return self.title;
I feel like this violates DRY, for obvious reasons. My question is, can I stick this somewhere else:
# ...
def photo_thumb(self):
if self.photo:
return u'<img src="%s" />' % (settings.MEDIA_URL + '/thumbs/?h=64&w=80&c=50x0&p=' + self.photo.name)
else:
return u'(no photo)'
photo_thumb.short_description = u'Photo'
photo_thumb.allow_tags = True
photo_thumb.admin_order_field = 'photo'
# ...
And then include it in relevant model classes with a single line of code? Or can photo_thumb be dynamically added to the appropriate classes somehow? I've tried classical and parasitic inheritance, but I may not be doing it right... I'm new to Django and fairly new to python also. Any help is appreciated.
A:
I agree with @Gintautas. The general rule of thumb is to create an abstract model class if you need to reuse model fields and meta options; use a simple class if you only need to reuse other properties and methods.
In your case I'd go with the abstract class (because of the photo model field):
class PhotoModels(models.Model):
photo = models.ImageField(upload_to=u'something')
def photo_thumb(self):
if self.photo:
return u'<img src="%s" />' % (settings.MEDIA_URL +
'/thumbs/?h=64&w=80&c=50x0&p=' + self.photo.name)
else:
return u'(no photo)'
photo_thumb.short_description = u'Photo'
photo_thumb.allow_tags = True
photo_thumb.admin_order_field = 'photo'
    class Meta:
        abstract = True
class Something(PhotoModels):
title = models.CharField(max_length=200, default=u'')
text = models.CharField(max_length=250, default=u'', blank=True)
class SomethingElse(PhotoModels):
name = models.CharField(max_length=200, default=u'')
foo = models.CharField(max_length=250, default=u'', blank=True)
photo.upload_to = u'something_else'
def __unicode__(self):
return self.title;
... although this would be legal just as well:
class PhotoModels:
def photo_thumb(self):
if self.photo:
return u'<img src="%s" />' % (settings.MEDIA_URL +
'/thumbs/?h=64&w=80&c=50x0&p=' + self.photo.name)
else:
return u'(no photo)'
photo_thumb.short_description = u'Photo'
photo_thumb.allow_tags = True
photo_thumb.admin_order_field = 'photo'
class Something(models.Model, PhotoModels):
title = models.CharField(max_length=200, default=u'')
text = models.CharField(max_length=250, default=u'', blank=True)
photo = models.ImageField(upload_to=u'something')
class SomethingElse(models.Model, PhotoModels):
name = models.CharField(max_length=200, default=u'')
foo = models.CharField(max_length=250, default=u'', blank=True)
photo = models.ImageField(upload_to=u'something_else')
def __unicode__(self):
return self.title;
A:
Sure you can reuse the code. Just factor it out into a base class, and make both your classes inherit from that base class. That should work just fine. Just don't forget that the base class either needs to inherit from models.Model itself (then I would suggest making it abstract), or you can put the reusable code in a mixin; that means that your both classes will be inheriting from both models.Model and the new mixin base class.
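A plain (non-model) sketch of the mixin idea — the class and attribute names here are illustrative, and in real Django code Something would also inherit models.Model:

```python
class ThumbMixin(object):
    """Reusable behaviour: anything with a .photo attribute gets a thumb."""
    def photo_thumb(self):
        if getattr(self, "photo", None):
            return "<img src='%s' />" % self.photo
        return "(no photo)"

class Something(ThumbMixin):
    def __init__(self, photo=None):
        self.photo = photo

print(Something("me.jpg").photo_thumb())  # <img src='me.jpg' />
print(Something().photo_thumb())          # (no photo)
```

The mixin carries only behaviour; the fields themselves still live on the model classes.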
A:
Another solution may be to create a subclass of ImageField and override the contribute_to_class method:
class ImageWithThumbnailField(ImageField):
def contribute_to_class(self, cls, name):
super(ImageWithThumbnailField, self).contribute_to_class(cls, name)
def photo_thumb(self):
photo = getattr(self, name, None)
if photo:
return u'<img src="%s" />' % (settings.MEDIA_URL + '/thumbs/?h=64&w=80&c=50x0&p=' + photo.name)
else:
return u'(no photo)'
photo_thumb.short_description = u'Photo'
photo_thumb.allow_tags = True
photo_thumb.admin_order_field = 'photo'
setattr(cls, 'photo_thumb', photo_thumb);
I think this is better because when calling the photo_thumb method you are expecting self.photo to exist, which is not guaranteed if you are using the other solution that uses an abstract model.
EDIT: Note that you can use getattr(self, name) to dynamically access the field. So yes, it is guaranteed that we have some photo field.
A:
Maybe I asked too soon... I think abstract base classes may be the answer.
http://docs.djangoproject.com/en/dev/topics/db/models/#abstract-base-classes
I'll check it out and confirm.
| Reusable model members in Django | I have a Django model like this:
class Something(models.Model):
title = models.CharField(max_length=200, default=u'')
text = models.CharField(max_length=250, default=u'', blank=True)
photo = models.ImageField(upload_to=u'something')
def photo_thumb(self):
if self.photo:
return u'<img src="%s" />' % (settings.MEDIA_URL + '/thumbs/?h=64&w=80&c=50x0&p=' + self.photo.name)
else:
return u'(no photo)'
photo_thumb.short_description = u'Photo'
photo_thumb.allow_tags = True
photo_thumb.admin_order_field = 'photo'
def __unicode__(self):
return self.title;
class SomethingElse(models.Model):
name = models.CharField(max_length=200, default=u'')
foo = models.CharField(max_length=250, default=u'', blank=True)
photo = models.ImageField(upload_to=u'something_else')
def photo_thumb(self):
if self.photo:
return u'<img src="%s" />' % (settings.MEDIA_URL + '/thumbs/?h=64&w=80&c=50x0&p=' + self.photo.name)
else:
return u'(no photo)'
photo_thumb.short_description = u'Photo'
photo_thumb.allow_tags = True
photo_thumb.admin_order_field = 'photo'
def __unicode__(self):
return self.title;
I feel like this violates DRY, for obvious reasons. My question is, can I stick this somewhere else:
# ...
def photo_thumb(self):
if self.photo:
return u'<img src="%s" />' % (settings.MEDIA_URL + '/thumbs/?h=64&w=80&c=50x0&p=' + self.photo.name)
else:
return u'(no photo)'
photo_thumb.short_description = u'Photo'
photo_thumb.allow_tags = True
photo_thumb.admin_order_field = 'photo'
# ...
And then include it in relevant model classes with a single line of code? Or can photo_thumb be dynamically added to the appropriate classes somehow? I've tried classical and parasitic inheritance, but I may not be doing it right... I'm new to Django and fairly new to python also. Any help is appreciated.
| [
"I agree with @Gintautas. The general rule of thumb is to create an abstract model class if you need to reuse model fields and meta options; use a simple class if you only need to reuse other properties and methods.\nIn your case I'd go with the abstract class (because of the photo model field):\nclass PhotoModels(... | [
3,
1,
1,
0
] | [] | [] | [
"django",
"django_models",
"dry",
"inheritance",
"python"
] | stackoverflow_0003472811_django_django_models_dry_inheritance_python.txt |
Q:
How can I display native accents to languages in console in windows?
print "Español\nPortuguês\nItaliano".encode('utf-8')
Errors:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
print "Español\nPortuguês\nItaliano".encode('utf-8')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xf1 in position 4: ordinal not in range(128)
I'm trying to make a multilingual console program in Windows. Is this possible?
I've saved the file in UTF-8 encoding as well, but I get the same error.
*EDIT
I'm just outputting text in this program. I changed to Lucida fonts, and I keep getting this:
alt text http://img826.imageshack.us/img826/7312/foreignlangwindowsconso.png
I'm just looking for a portable way to correctly display foreign languages in the console in windows. If it can do it cross platform, even better. I thought utf-8 was the answer, but all of you are telling me fonts, etc.. also plays a part. So anyone have a definitive answer?
A:
Short answer:
# -*- coding: utf-8 -*-
print u"Español\nPortuguês\nItaliano".encode('utf-8')
The first line tells Python that your file is encoded in UTF-8 (your editor must use the same settings) and this line should always be on the beginning of your file.
Another thing is that Python 2 knows two different basestring objects - str and unicode. The u prefix will create such a unicode object instead of the default str object, which you can then encode as UTF-8 (but printing unicode objects directly should also work).
A:
First of all, in Python 2.x you can't encode a str that has non-ASCII characters. You have to write
print u"Español\nPortuguês\nItaliano".encode('utf-8')
Using UTF-8 at the Windows console is difficult.
You have to set the Command Prompt font to a Unicode font (of which the only one available by default is Lucida Console), or else you get IBM437 encoding anyway.
chcp 65001
Modify encodings._aliases to treat "cp65001" as an alias of UTF-8.
And even then, it doesn't seem to work right.
A:
This works for me:
# coding=utf-8
print "Español\nPortuguês\nItaliano"
You might want to try running it using chcp 65001 && your_program.py As well, try changing the command prompt font to Lucida Console.
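For what it's worth, Python 3 removes this particular class of error, because str is already Unicode and encoding is always explicit. A small sketch of the round trip (runs under Python 3; the fonts/codepage issues in the console remain a separate concern):

```python
text = "Español\nPortuguês\nItaliano"

encoded = text.encode("utf-8")          # bytes suitable for a UTF-8 console
assert isinstance(encoded, bytes)
assert encoded.decode("utf-8") == text  # lossless round trip

# "ñ" is the character at index 4 that broke Python 2's ASCII codec;
# in UTF-8 it occupies two bytes:
assert "ñ".encode("utf-8") == b"\xc3\xb1"

print(text)
```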
| How can I display native accents to languages in console in windows? | print "Español\nPortuguês\nItaliano".encode('utf-8')
Errors:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
print "Español\nPortuguês\nItaliano".encode('utf-8')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xf1 in position 4: ordinal not in range(128)
I'm trying to make a multilingual console program in Windows. Is this possible?
I've saved the file in UTF-8 encoding as well, but I get the same error.
*EDIT
I'm just outputting text in this program. I changed to Lucida fonts, and I keep getting this:
alt text http://img826.imageshack.us/img826/7312/foreignlangwindowsconso.png
I'm just looking for a portable way to correctly display foreign languages in the console in windows. If it can do it cross platform, even better. I thought utf-8 was the answer, but all of you are telling me fonts, etc.. also plays a part. So anyone have a definitive answer?
| [
"Short answer:\n# -*- coding: utf-8 -*-\nprint u\"Español\\nPortuguês\\nItaliano\".encode('utf-8')\n\nThe first line tells Python that your file is encoded in UTF-8 (your editor must use the same settings) and this line should always be on the beginning of your file.\nAnother thing is that Python 2 knows two differ... | [
3,
2,
1
] | [] | [] | [
"python",
"unicode"
] | stackoverflow_0003473166_python_unicode.txt |
Q:
Python - Sum of numbers
I am trying to sum all the numbers up to a range, with all the numbers up to the same range.
I am using python:
limit = 10
sums = []
for x in range(1,limit+1):
for y in range(1,limit+1):
sums.append(x+y)
This works just fine, however, because of the nested loops, if the limit is too big it will take a lot of time to compute the sums.
Is there any way of doing this without a nested loop?
(This is just a simplification of something that I need to do to solve a ProjectEuler problem. It involves obtaining the sum of all abundant numbers.)
A:
[x + y for x in xrange(limit + 1) for y in xrange(x + 1)]
This still performs just as many calculations but will do it about twice as fast as a for loop.
from itertools import combinations
(a + b for a, b in combinations(xrange(n + 1), 2))
This avoids a lot of duplicate sums. I don't know if you want to keep track of those or not.
If you just want every sum with no representation of how you got it, then xrange(2, 2*n + 1)
gives you the distinct sums with no duplicates or looping at all.
In response to question:
[x + y for x in set1 for y in set2]
A:
I am trying to sum all the numbers up
to a range, with all the numbers up to
the same range.
So you want to compute limit**2 sums.
because of the nested loops, if the
limit is too big it will take a lot of
time to compute the sums.
Wrong: it's not "because of the nested loops" -- it's because you're computing a quadratic number of sums, and therefore doing a quadratic amount of work.
Is there any way of doing this without
a nested loop?
You can mask the nesting, as in @aaron's answer, and you can halve the number of sums you compute due to the problem's symmetry (though that doesn't do the same thing as your code), but, to prepare a list with a quadratic number of items, there's absolutely no way to avoid doing a quadratic amount of work.
However, for your stated purpose
obtaining the sum of all abundant
numbers.
you'd need an infinite amount of work, since there's an infinity of abundant numbers;-).
I think you have in mind problem 23, which is actually very different: it asks for the sum of all numbers that cannot be expressed as the sum of two abundant numbers. How the summation you're asking about would help you move closer to that solution really escapes me.
A:
I'm not sure if there is a good way not using nested loops.
If I put on your shoes, I'll write as following:
[x+y for x in range(1,limit+1) for y in range(1,limit+1)]
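As a quick sanity check of the points above (Python 3 spelling, where range plays the role of xrange): the distinct sums are exactly 2 through 2*limit, and exploiting the symmetry x <= y roughly halves the number of additions without changing that set.

```python
from itertools import combinations_with_replacement

limit = 10
sums = [x + y for x in range(1, limit + 1) for y in range(1, limit + 1)]
assert len(sums) == limit * limit  # quadratic amount of work

# The distinct sums form a contiguous range:
assert sorted(set(sums)) == list(range(2, 2 * limit + 1))

# Computing each unordered pair once (x <= y) yields the same distinct sums:
half = [x + y for x, y in combinations_with_replacement(range(1, limit + 1), 2)]
assert len(half) == limit * (limit + 1) // 2  # 55 additions instead of 100
assert set(half) == set(sums)
```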
| Python - Sum of numbers | I am trying to sum all the numbers up to a range, with all the numbers up to the same range.
I am using python:
limit = 10
sums = []
for x in range(1,limit+1):
for y in range(1,limit+1):
sums.append(x+y)
This works just fine, however, because of the nested loops, if the limit is too big it will take a lot of time to compute the sums.
Is there any way of doing this without a nested loop?
(This is just a simplification of something that I need to do to solve a ProjectEuler problem. It involves obtaining the sum of all abundant numbers.)
| [
"[x + y for x in xrange(limit + 1) for y in xrange(x + 1)]\n\nThis still performs just as many calculations but will do it about twice as fast as a for loop.\nfrom itertools import combinations\n\n(a + b for a, b in combinations(xrange(n + 1, 2)))\n\nThis avoids a lot of duplicate sums. I don't know if you want to ... | [
2,
1,
0
] | [] | [] | [
"for_loop",
"loops",
"python"
] | stackoverflow_0003473413_for_loop_loops_python.txt |
Q:
Python design question (Can/should decorators be used in this case?)
I have a problem that can be simplified as follows: I have a particular set of objects that I want to modify in a particular way. So, it's possible for me to write a function that modifies a single object and then create a decorator that applies that function to all of the objects in the set.
So, let's suppose I have something like this:
def modify_all(f):
    def fun(objs):
        for o in objs:
            f(o)
    return fun
@modify_all
def modify(obj):
# Modify obj in some way.
modify(all_my_objs)
However, there may also be times when I just want to operate on one object by itself.
Is there a way to "undecorate" the modify function programmatically to get the original (single object) function back again? (Without just removing the decorator, I mean.) Or is there another approach that would be better to use in such a case?
Just for clarity and completeness, there's quite a bit more going on in the actual modify_all decorator than is illustrated here. Both modify and modify_all have a certain amount of work to carry out, which is why I thought the decorator might be nice. Additionally, I have other variants of modify that can directly benefit from modify_all, which makes it even more useful. But I do sometimes need to do things using the original modify function. I know that I can always pass in a one-element set to "trick" it into working as-is, but I'm wondering if there's a better way or a better design for the situation.
A:
Apply the decorator manually to the function once and save the result in a new name.
def modify_one(obj):
...
modify_some = modify_all(modify_one)
modify_some([a, b, c])
modify_one(d)
A:
Decorators are meant for when you apply a higher-order function (HOF) in the specific form
def f ...
f = HOF(f)
In this case, the syntax (with identical semantics)
@HOF
def f ...
is more concise, and immediately warns the reader that f will never be used "bare".
For your use case, where you need both "bare f" and "decorated f", they'll have to have two distinct names, so the decorator syntax is not immediately applicable -- but neither is it at all necessary! An excellent alternative might be to use as the decorated name an attribute of the decorated function, for example:
def modify(obj): ...
modify.all = modify_all(modify)
Now, you can call modify(justone) and modify.all(allofthem) and code happily ever after.
A:
Alex had already posted something similar but it was my first thought too so I'll go ahead and post it because it does sidestep Mike Grahams objection to it. (even if I don't really agree with his objection: the fact that people don't know about something is not a reason to not use it. This is how people continue to not know about it)
def modify_all(f):
def fun(objs):
for o in objs:
f(o)
fun.unmodified = f
return fun
def undecorate(f):
return f.unmodified
@modify_all
def modify(obj):
# Modify obj in some way.
modify(all_my_objs)
undecorate(modify)(one_obj)
You could just call undecorate to access the decorated function.
another alternative would be to handle it like this:
def modify_one(modify, one):
return modify.unmodified(one)
A:
You can use a decorator to apply Alex's idea
>>> def forall(f):
... def fun(objs):
... for o in objs:
... f(o)
... f.forall=fun
... return f
...
>>> @forall
... def modify(obj):
... print "modified ", obj
...
>>> modify("foobar")
modified foobar
>>> modify.forall("foobar")
modified f
modified o
modified o
modified b
modified a
modified r
Perhaps foreach is a better name
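A runnable Python 3 rendering of this attribute trick, returning values instead of printing so the behaviour is easy to check; the names forall and modify are just illustrative:

```python
def forall(f):
    """Return f unchanged, but attach a broadcasting variant as f.forall."""
    def fun(objs):
        return [f(o) for o in objs]
    f.forall = fun
    return f


@forall
def modify(obj):
    return obj.upper()


assert modify("abc") == "ABC"                   # bare, single-object call
assert modify.forall(["a", "b"]) == ["A", "B"]  # broadcast over a collection
```

The decorated name stays callable on a single object, and the collection variant rides along as an attribute, which was the design goal in the question.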
| Python design question (Can/should decorators be used in this case?) | I have a problem that can be simplified as follows: I have a particular set of objects that I want to modify in a particular way. So, it's possible for me to write a function that modifies a single object and then create a decorator that applies that function to all of the objects in the set.
So, let's suppose I have something like this:
def modify_all(f):
    def fun(objs):
        for o in objs:
            f(o)
    return fun
@modify_all
def modify(obj):
# Modify obj in some way.
modify(all_my_objs)
However, there may also be times when I just want to operate on one object by itself.
Is there a way to "undecorate" the modify function programmatically to get the original (single object) function back again? (Without just removing the decorator, I mean.) Or is there another approach that would be better to use in such a case?
Just for clarity and completeness, there's quite a bit more going on in the actual modify_all decorator than is illustrated here. Both modify and modify_all have a certain amount of work to carry out, which is why I thought the decorator might be nice. Additionally, I have other variants of modify that can directly benefit from modify_all, which makes it even more useful. But I do sometimes need to do things using the original modify function. I know that I can always pass in a one-element set to "trick" it into working as-is, but I'm wondering if there's a better way or a better design for the situation.
| [
"Apply the decorator manually to the function once and save the result in a new name.\ndef modify_one(obj):\n ...\n\nmodify_some = modify_all(modify_one)\nmodify_some([a, b, c])\nmodify_one(d)\n\n",
"Decorators are meant for when you apply a higher-order function (HOF) in the specific form\ndef f ...\n\nf = HO... | [
2,
2,
1,
0
] | [
"In your particular case I'd consider using argument unpacking. This can be done with only a slight modification of your existing code:\ndef broadcast(f):\n def fun(*objs):\n for o in objs:\n f(o)\n return fun\n\n@broadcast\ndef modify(obj):\n # Modify obj in some way.\n\nmodify(*all_my_o... | [
-1
] | [
"decorator",
"python"
] | stackoverflow_0003473746_decorator_python.txt |
Q:
Which Exception for notifying that subclass should implement a method?
Suppose I want to create an abstract class in Python with some methods to be implemented by subclasses, for example:
class Base():
def f(self):
print "Hello."
self.g()
print "Bye!"
class A(Base):
def g(self):
print "I am A"
class B(Base):
def g(self):
print "I am B"
I'd like that if the base class is instantiated and its f() method called, when self.g() is called, that throws an exception telling you that a subclass should have implemented method g().
What's the usual thing to do here? Should I raise a NotImplementedError? or is there a more specific way of doing it?
A:
In Python 2.6 and better, you can use the abc module to make Base an "actually" abstract base class:
import abc
class Base:
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def g(self):
pass
def f(self): # &c
this guarantees that Base cannot be instantiated -- and neither can any subclass which fails to override g -- while meeting @Aaron's target of allowing subclasses to use super in their g implementations. Overall, a much better solution than what we used to have in Python 2.5 and earlier!
Side note: having Base inherit from object would be redundant, because the metaclass needs to be set explicitly anyway.
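In Python 3 the same guarantee is spelled with abc.ABC instead of a __metaclass__ attribute; a minimal sketch (the strings are just for illustration):

```python
import abc


class Base(abc.ABC):
    @abc.abstractmethod
    def g(self):
        ...

    def f(self):
        return "Hello. %s Bye!" % self.g()


class A(Base):
    def g(self):
        return "I am A"


try:
    Base()  # abstract method g not overridden: instantiation is refused
except TypeError as exc:
    print("refused:", exc)

assert A().f() == "Hello. I am A Bye!"
```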
A:
Make a method that does nothing, but still has a docstring explaining the interface. Getting a NameError is confusing, and raising NotImplementedError (or any other exception, for that matter) will break proper usage of super.
A:
Peter Norvig has given a solution for this in his Python Infrequently Asked Questions list. I'll reproduce it here. Do check out the IAQ, it is very useful.
## Python
class MyAbstractClass:
def method1(self): abstract()
class MyClass(MyAbstractClass):
pass
def abstract():
import inspect
caller = inspect.getouterframes(inspect.currentframe())[1][3]
raise NotImplementedError(caller + ' must be implemented in subclass')
| Which Exception for notifying that subclass should implement a method? | Suppose I want to create an abstract class in Python with some methods to be implemented by subclasses, for example:
class Base():
def f(self):
print "Hello."
self.g()
print "Bye!"
class A(Base):
def g(self):
print "I am A"
class B(Base):
def g(self):
print "I am B"
I'd like that if the base class is instantiated and its f() method called, when self.g() is called, that throws an exception telling you that a subclass should have implemented method g().
What's the usual thing to do here? Should I raise a NotImplementedError? or is there a more specific way of doing it?
| [
"In Python 2.6 and better, you can use the abc module to make Base an \"actually\" abstract base class:\nimport abc\n\nclass Base:\n __metaclass__ = abc.ABCMeta\n @abc.abstractmethod\n def g(self):\n pass\n def f(self): # &c\n\nthis guarantees that Base cannot be instantiated -- and neither can a... | [
15,
2,
0
] | [] | [] | [
"abstract_base_class",
"coding_style",
"design_patterns",
"exception",
"python"
] | stackoverflow_0003473667_abstract_base_class_coding_style_design_patterns_exception_python.txt |
Q:
Problems with my BaseHTTPServer
I am trying to create my own functions in the subclass of BaseHTTPRequestHandler as such
class Weblog(BaseHTTPServer.BaseHTTPRequestHandler):
def do_HEAD(self):
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
def do_GET(self):
"""Respond to a GET request."""
if self.path == '/':
do_index()
elif self.path == '/timestamp':
do_entry()
elif self.path == '/post':
do_post_form()
def do_index(self):
'''If the PATH_INFO is '/' then the weblog index should be presented'''
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
post = None
content = {}
line = '<tr id="%(timestamp)s"><td>%(date)s</td>'
line += '<td><a href="%(timestamp)s">%(title)s</a></td></tr>'
for timestamp in weblog.list_posts():
post = storage.retrieve_post(timestamp)
if not content.has_key('lines'):
content['lines'] = line %post
else:
content['lines'] += line %post
self.wfile.write('<a href = "post">Add a post</a>')
self.wfile.write('<table><tr><th>Date</th><th>Title</th></tr>%(lines)s</tables>' %content)
When I run it on the command line it gives me the following error:-
Exception happened during processing of request from ('127.0.0.1', 59808)
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/SocketServer.py", line 281, in _handle_request_noblock
self.process_request(request, client_address)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/SocketServer.py", line 307, in process_request
self.finish_request(request, client_address)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/SocketServer.py", line 320, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/SocketServer.py", line 615, in __init__
self.handle()
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/BaseHTTPServer.py", line 329, in handle
self.handle_one_request()
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/BaseHTTPServer.py", line 323, in handle_one_request
method()
File "weblog.py", line 34, in do_GET
do_index()
NameError: global name 'do_index' is not defined
Am I doing something wrong here?
A:
To call something in the current class, you should use self.method_name()
def do_GET(self):
"""Respond to a GET request."""
if self.path == '/':
self.do_index()
elif self.path == '/timestamp':
self.do_entry()
elif self.path == '/post':
self.do_post_form()
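The underlying rule, independent of BaseHTTPServer: inside a method body, a bare name is looked up in local/global scope, never on the instance. A tiny sketch of both spellings (class and return values are made up for illustration):

```python
class Handler:
    def do_index(self):
        return "index page"

    def do_GET(self):
        return self.do_index()  # attribute lookup on the instance: works

    def broken_GET(self):
        return do_index()       # plain name lookup: NameError at call time


h = Handler()
assert h.do_GET() == "index page"

try:
    h.broken_GET()
except NameError as exc:
    print("same failure as the traceback:", exc)
```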
| Problems with my BaseHTTPServer | I am trying to create my own functions in the subclass of BaseHTTPRequestHandler as such
class Weblog(BaseHTTPServer.BaseHTTPRequestHandler):
def do_HEAD(self):
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
def do_GET(self):
"""Respond to a GET request."""
if self.path == '/':
do_index()
elif self.path == '/timestamp':
do_entry()
elif self.path == '/post':
do_post_form()
def do_index(self):
'''If the PATH_INFO is '/' then the weblog index should be presented'''
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
post = None
content = {}
line = '<tr id="%(timestamp)s"><td>%(date)s</td>'
line += '<td><a href="%(timestamp)s">%(title)s</a></td></tr>'
for timestamp in weblog.list_posts():
post = storage.retrieve_post(timestamp)
if not content.has_key('lines'):
content['lines'] = line %post
else:
content['lines'] += line %post
self.wfile.write('<a href = "post">Add a post</a>')
self.wfile.write('<table><tr><th>Date</th><th>Title</th></tr>%(lines)s</tables>' %content)
When I run it on the command line it gives me the following error:-
Exception happened during processing of request from ('127.0.0.1', 59808)
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/SocketServer.py", line 281, in _handle_request_noblock
self.process_request(request, client_address)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/SocketServer.py", line 307, in process_request
self.finish_request(request, client_address)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/SocketServer.py", line 320, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/SocketServer.py", line 615, in __init__
self.handle()
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/BaseHTTPServer.py", line 329, in handle
self.handle_one_request()
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/BaseHTTPServer.py", line 323, in handle_one_request
method()
File "weblog.py", line 34, in do_GET
do_index()
NameError: global name 'do_index' is not defined
Am I doing something wrong here?
| [
"To call something in the current class, you should use self.method_name()\ndef do_GET(self):\n \"\"\"Respond to a GET request.\"\"\"\n if self.path == '/':\n self.do_index()\n elif self.path == '/timestamp':\n self.do_entry()\n elif self.path == '/post':\n self.do_post_form()\n\n"
... | [
2
] | [] | [] | [
"basehttpserver",
"python"
] | stackoverflow_0003474045_basehttpserver_python.txt |
Q:
Import fails with a strange error
I get:
TemplateSyntaxError at /blog/post/test
Caught NameError while rendering:
global name 'forms' is not defined
for this code:
forms.py
from dojango.forms import widgets
from django.contrib.comments.forms import CommentForm
from Website.Comments.models import PageComment
class PageCommentForm(CommentForm):
title = widgets.TextInput()
rating = widgets.RatingInput()
def get_comment_model(self):
return PageComment
def get_comment_create_data(self):
# Use the data of the superclass, and add in the title field
data = super(PageComment, self).get_comment_create_data()
data['title'] = self.cleaned_data['title']
return data
models.py
from Website.CMS.models import Author, Rating
from django.db.models import CharField, ForeignKey
from django.contrib.comments.models import Comment
class PageComment(Comment):
title = CharField(max_length=300)
parent = ForeignKey(Author, related_name='parent_id', null=True)
author = ForeignKey(Author, related_name='author_id')
def __unicode__(self):
return self.title
class CommentRating(Rating):
comment = ForeignKey(PageComment)
__init__.py
from Website.Comments import *
def get_model():
return models.PageComment
def get_form():
return forms.PageCommentForm #error here
importing forms directly inside __init__.py results in:
AttributeError: 'module' object has no
attribute 'Comments'
Here's the stack trace, the error appears to be coming from dojango but that doesn't really make sense:
File
"I:\wamp\www\Website\Comments\__init__.py",
line 1, in
from Website.Comments import models, forms File
"I:\wamp\www\Website\Comments\forms.py",
line 1, in
from dojango import forms File "C:\Python26\lib\site-packages\dojango\forms\__init__.py",
line 2, in
from widgets import * File "C:\Python26\lib\site-packages\dojango\forms\widgets.py",
line 11, in
from dojango.util.config import Config File
"C:\Python26\lib\site-packages\dojango\util\config.py",
line 3, in
from dojango.util import media File
"C:\Python26\lib\site-packages\dojango\util\media.py",
line 49, in
for app in settings.INSTALLED_APPS) File
"C:\Python26\lib\site-packages\dojango\util\media.py",
line 49, in
for app in settings.INSTALLED_APPS) File
"C:\Python26\lib\site-packages\dojango\util\media.py",
line 38, in find_app_dojo_dir_and_url
media_dir = find_app_dojo_dir(app_name) File
"C:\Python26\lib\site-packages\dojango\util\media.py",
line 27, in find_app_dojo_dir
base = find_app_dir(app_name) File
"C:\Python26\lib\site-packages\dojango\util\media.py",
line 20, in find_app_dir
mod = getattr(__import__(m, {}, {}, [a]), a)
The Comments app is in the installed apps.
What should I do?
EDIT:
If I try to include forms directly with import forms I get this:
Traceback (most recent call last):
File "I:\wamp\www\Website\manage.py", line 11, in
execute_manager(settings)
File "C:\Python26\lib\site-packages\django\core\management\__init__.py", line
438, in execute_manager
utility.execute()
File "C:\Python26\lib\site-packages\django\core\management\__init__.py", line
379, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Python26\lib\site-packages\django\core\management\base.py", line 191,
in run_from_argv
self.execute(*args, **options.__dict__)
File "C:\Python26\lib\site-packages\django\core\management\base.py", line 209,
in execute
translation.activate('en-us')
File "C:\Python26\lib\site-packages\django\utils\translation\__init__.py", lin
e 66, in activate
return real_activate(language)
File "C:\Python26\lib\site-packages\django\utils\functional.py", line 55, in _
curried
return _curried_func(*(args+moreargs), **dict(kwargs, **morekwargs))
File "C:\Python26\lib\site-packages\django\utils\translation\__init__.py", lin
e 36, in delayed_loader
return getattr(trans, real_name)(*args, **kwargs)
File "C:\Python26\lib\site-packages\django\utils\translation\trans_real.py", l
ine 193, in activate
_active[currentThread()] = translation(language)
File "C:\Python26\lib\site-packages\django\utils\translation\trans_real.py", l
ine 176, in translation
default_translation = _fetch(settings.LANGUAGE_CODE)
File "C:\Python26\lib\site-packages\django\utils\translation\trans_real.py", l
ine 159, in _fetch
app = import_module(appname)
File "C:\Python26\lib\site-packages\django\utils\importlib.py", line 35, in import_module
__import__(name)
File "I:\wamp\www\Website\Comments\__init__.py", line 2, in
import forms
File "I:\wamp\www\Website\Comments\forms.py", line 3, in
from dojango.forms import fields, widgets
File "C:\Python26\lib\site-packages\dojango\forms\__init__.py", line 2, in
from widgets import *
File "C:\Python26\lib\site-packages\dojango\forms\widgets.py", line 11, in
from dojango.util.config import Config
File "C:\Python26\lib\site-packages\dojango\util\config.py", line 3, in
from dojango.util import media
File "C:\Python26\lib\site-packages\dojango\util\media.py", line 49, in
for app in settings.INSTALLED_APPS)
File "C:\Python26\lib\site-packages\dojango\util\media.py", line 49, in
for app in settings.INSTALLED_APPS)
File "C:\Python26\lib\site-packages\dojango\util\media.py", line 38, in find_app_dojo_dir_and_url
media_dir = find_app_dojo_dir(app_name)
File "C:\Python26\lib\site-packages\dojango\util\media.py", line 27, in find_app_dojo_dir
base = find_app_dir(app_name)
File "C:\Python26\lib\site-packages\dojango\util\media.py", line 20, in find_app_dir
mod = getattr(__import__(m, {}, {}, [a]), a)
AttributeError: 'module' object has no attribute 'Comments'
Removing any reference for dojango solves the problem.
A:
put the following in __init__.py:
import forms
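A self-contained sketch of why that line fixes it (the package and module names here are made up): a bare `from package import *` in `__init__.py` does not bind submodules such as forms, but an explicit import does. This uses the Python 3 spelling `from . import forms`; in Python 2, plain `import forms` works too via implicit relative imports. Note this only demonstrates the name-binding part — the dojango-side crash in the traceback is a separate issue.

```python
import os
import sys
import tempfile
import textwrap

# Build a throwaway package on disk: comments_demo/{__init__.py, forms.py}
root = tempfile.mkdtemp()
pkg = os.path.join(root, "comments_demo")
os.makedirs(pkg)

with open(os.path.join(pkg, "forms.py"), "w") as fh:
    fh.write("class PageCommentForm(object):\n    pass\n")

with open(os.path.join(pkg, "__init__.py"), "w") as fh:
    fh.write(textwrap.dedent("""\
        from . import forms          # bind the submodule explicitly

        def get_form():
            return forms.PageCommentForm
    """))

sys.path.insert(0, root)
import comments_demo

assert comments_demo.get_form().__name__ == "PageCommentForm"
```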
A:
This is a bug in dojango.
I will report it.
| Import fails with a strange error | I get:
TemplateSyntaxError at /blog/post/test
Caught NameError while rendering:
global name 'forms' is not defined
for this code:
forms.py
from dojango.forms import widgets
from django.contrib.comments.forms import CommentForm
from Website.Comments.models import PageComment
class PageCommentForm(CommentForm):
title = widgets.TextInput()
rating = widgets.RatingInput()
def get_comment_model(self):
return PageComment
def get_comment_create_data(self):
# Use the data of the superclass, and add in the title field
data = super(PageComment, self).get_comment_create_data()
data['title'] = self.cleaned_data['title']
return data
models.py
from Website.CMS.models import Author, Rating
from django.db.models import CharField, ForeignKey
from django.contrib.comments.models import Comment
class PageComment(Comment):
title = CharField(max_length=300)
parent = ForeignKey(Author, related_name='parent_id', null=True)
author = ForeignKey(Author, related_name='author_id')
def __unicode__(self):
return self.title
class CommentRating(Rating):
comment = ForeignKey(PageComment)
__init__.py
from Website.Comments import *
def get_model():
return models.PageComment
def get_form():
return forms.PageCommentForm #error here
importing forms directly inside __init__.py results in:
AttributeError: 'module' object has no
attribute 'Comments'
Here's the stack trace, the error appears to be coming from dojango but that doesn't really make sense:
File
"I:\wamp\www\Website\Comments\__init__.py",
line 1, in
from Website.Comments import models, forms File
"I:\wamp\www\Website\Comments\forms.py",
line 1, in
from dojango import forms File "C:\Python26\lib\site-packages\dojango\forms\__init__.py",
line 2, in
from widgets import * File "C:\Python26\lib\site-packages\dojango\forms\widgets.py",
line 11, in
from dojango.util.config import Config File
"C:\Python26\lib\site-packages\dojango\util\config.py",
line 3, in
from dojango.util import media File
"C:\Python26\lib\site-packages\dojango\util\media.py",
line 49, in
for app in settings.INSTALLED_APPS) File
"C:\Python26\lib\site-packages\dojango\util\media.py",
line 49, in
for app in settings.INSTALLED_APPS) File
"C:\Python26\lib\site-packages\dojango\util\media.py",
line 38, in find_app_dojo_dir_and_url
media_dir = find_app_dojo_dir(app_name) File
"C:\Python26\lib\site-packages\dojango\util\media.py",
line 27, in find_app_dojo_dir
base = find_app_dir(app_name) File
"C:\Python26\lib\site-packages\dojango\util\media.py",
line 20, in find_app_dir
mod = getattr(__import__(m, {}, {}, [a]), a)
The Comments app is in the installed apps.
What should I do?
EDIT:
If I try to include forms directly with import forms I get this:
Traceback (most recent call last):
File "I:\wamp\www\Website\manage.py", line 11, in
execute_manager(settings)
File "C:\Python26\lib\site-packages\django\core\management\__init__.py", line
438, in execute_manager
utility.execute()
File "C:\Python26\lib\site-packages\django\core\management\__init__.py", line
379, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Python26\lib\site-packages\django\core\management\base.py", line 191,
in run_from_argv
self.execute(*args, **options.__dict__)
File "C:\Python26\lib\site-packages\django\core\management\base.py", line 209,
in execute
translation.activate('en-us')
File "C:\Python26\lib\site-packages\django\utils\translation\__init__.py", lin
e 66, in activate
return real_activate(language)
File "C:\Python26\lib\site-packages\django\utils\functional.py", line 55, in _
curried
return _curried_func(*(args+moreargs), **dict(kwargs, **morekwargs))
File "C:\Python26\lib\site-packages\django\utils\translation\__init__.py", lin
e 36, in delayed_loader
return getattr(trans, real_name)(*args, **kwargs)
File "C:\Python26\lib\site-packages\django\utils\translation\trans_real.py", l
ine 193, in activate
_active[currentThread()] = translation(language)
File "C:\Python26\lib\site-packages\django\utils\translation\trans_real.py", l
ine 176, in translation
default_translation = _fetch(settings.LANGUAGE_CODE)
File "C:\Python26\lib\site-packages\django\utils\translation\trans_real.py", l
ine 159, in _fetch
app = import_module(appname)
File "C:\Python26\lib\site-packages\django\utils\importlib.py", line 35, in import_module
__import__(name)
File "I:\wamp\www\Website\Comments\__init__.py", line 2, in
import forms
File "I:\wamp\www\Website\Comments\forms.py", line 3, in
from dojango.forms import fields, widgets
File "C:\Python26\lib\site-packages\dojango\forms\__init__.py", line 2, in
from widgets import *
File "C:\Python26\lib\site-packages\dojango\forms\widgets.py", line 11, in
from dojango.util.config import Config
File "C:\Python26\lib\site-packages\dojango\util\config.py", line 3, in
from dojango.util import media
File "C:\Python26\lib\site-packages\dojango\util\media.py", line 49, in
for app in settings.INSTALLED_APPS)
File "C:\Python26\lib\site-packages\dojango\util\media.py", line 49, in
for app in settings.INSTALLED_APPS)
File "C:\Python26\lib\site-packages\dojango\util\media.py", line 38, in find_app_dojo_dir_and_url
media_dir = find_app_dojo_dir(app_name)
File "C:\Python26\lib\site-packages\dojango\util\media.py", line 27, in find_app_dojo_dir
base = find_app_dir(app_name)
File "C:\Python26\lib\site-packages\dojango\util\media.py", line 20, in find_app_dir
mod = getattr(__import__(m, {}, {}, [a]), a)
AttributeError: 'module' object has no attribute 'Comments'
Removing any reference to dojango solves the problem.
| [
"put the following in __init__.py:\nimport forms\n\n",
"This is a bug in dojango.\nI will report it.\n"
] | [
0,
0
] | [] | [] | [
"django",
"python",
"python_import"
] | stackoverflow_0003433066_django_python_python_import.txt |
Q:
In django 1.2.1 how can I get something like the old .as_sql?
In past versions of django you could construct a queryset and then do .as_sql() on it to find out the final query.
In Django 1.2.1 there is a function ._as_sql() which returns something similar, but not the same.
In past versions:
qs=Model.objects.all()
qs.as_sql() ====>
SELECT `model_table.id`, `model_table.name`, `model_table.size` from model_table
This shows me a lot of information.
But if I try it in Django 1.2.1
from django.db import connections
con=connections['default']
qs=Model.objects.all()
qs._as_sql(con) ====>
SELECT U0.`id` from model_table U0
This doesn't show me what fields are actually being selected. I know this information is available somewhere, because in templates, I can still do:
{% for q in sql_queries %}
{{q.time}} - {{q.sql}}
{% endfor %}
which shows me the full version of the query (including the fields selected)
My question is, how can I get this full version within the shell?
A:
qs=Model.objects.all()
qs.query.as_sql()
Should do the job as it is shown here
EDIT:
I just tried it and got the same error.
qs=Model.objects.all()
print qs.query
this should give you what you want (:
| In django 1.2.1 how can I get something like the old .as_sql? | In past versions of django you could construct a queryset and then do .as_sql() on it to find out the final query.
in Django 1.2.1 there is a function ._as_sql() which returns something similar, but not the same.
In past versions:
qs=Model.objects.all()
qs.as_sql() ====>
SELECT `model_table.id`, `model_table.name`, `model_table.size` from model_table
This shows me a lot of information.
But if I try it in Django 1.2.1
from django.db import connections
con=connections['default']
qs=Model.objects.all()
qs._as_sql(con) ====>
SELECT U0.`id` from model_table U0
This doesn't show me what fields are actually being selected. I know this information is available somewhere, because in templates, I can still do:
{% for q in sql_queries %}
{{q.time}} - {{q.sql}}
{% endfor %}
which shows me the full version of the query (including the fields selected)
My question is, how can I get this full version within the shell?
| [
"qs=Model.objects.all()\nqs.query.as_sql() \n\nShould do the job as it is shown here\nEDIT:\nI just try it and get the same error. \nqs=Model.objects.all()\nprint qs.query\n\nthis must give you what you want (:\n"
] | [
4
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003474505_django_python.txt |
Q:
Get a count for a pattern using python
In the following, how do I count the number of times that __TEXT__ appears in the variable using python?
a="This is __TEXT__ message to test __TEXT__ message"
A:
a.count("__TEXT__")
| Get a count for a pattern using python | In the following how to count the number of times that __TEXT__ appears in the variable sing python
a="This is __TEXT__ message to test __TEXT__ message"
| [
"a.count(\"__TEXT__\")\n"
] | [
4
] | [] | [] | [
"python"
] | stackoverflow_0003474690_python.txt |
Q:
How to parse data in a variable length delimited file?
I have a text file which does not conform to standards. So I know the (end, start) positions of each column value.
Sample text file :
# # # #
Techy Inn Val NJ
Found the position of # using this code :
f = open('sample.txt', 'r')
i = 0
positions = []
for line in f:
    if line.find('#') > 0:
        print line
        for each in line:
            i += 1
            if each == '#':
                positions.append(i)
1 7 11 15 => Positions
So far, so good! Now, how do I fetch the values from each row based on the positions I fetched? I am trying to construct an efficient loop but any pointers are greatly appreciated guys! Thanks (:
A:
Here's a way to read fixed width fields using regexp
>>> import re
>>> s="Techy Inn Val NJ"
>>> var1,var2,var3,var4 = re.match("(.{5}) (.{3}) (.{3}) (.{2})",s).groups()
>>> var1
'Techy'
>>> var2
'Inn'
>>> var3
'Val'
>>> var4
'NJ'
>>>
A:
Off the top of my head:
f = open(.......)
header = f.next() # get first line
posns = [i for i, c in enumerate(header + "#") if c == '#']
for line in f:
fields = [line[posns[k]:posns[k+1]] for k in xrange(len(posns) - 1)]
Update with tested, fixed code:
import sys
f = open(sys.argv[1])
header = f.next() # get first line
print repr(header)
posns = [i for i, c in enumerate(header) if c == '#'] + [-1]
print posns
for line in f:
posns[-1] = len(line)
fields = [line[posns[k]:posns[k+1]].rstrip() for k in xrange(len(posns) - 1)]
print fields
Input file:
# # #
Foo BarBaz
123456789abcd
Debug output:
'# # #\n'
[0, 7, 10, -1]
['Foo', 'Bar', 'Baz']
['1234567', '89a', 'bcd']
Robustification notes:
This solution caters for any old rubbish (or nothing) after the last # in the header line; it doesn't need the header line to be padded out with spaces or anything else.
The OP needs to consider whether it's an error if the first character of the header is not #.
Each field has trailing whitespace stripped; this automatically removes a trailing newline from the rightmost field (and doesn't run amok if the last line is not terminated by a newline).
Final(?) update: Leapfrogging @gnibbler's suggestion to use slice(): set up the slices once before looping.
import sys
f = open(sys.argv[1])
header = f.next() # get first line
print repr(header)
posns = [i for i, c in enumerate(header) if c == '#']
print posns
slices = [slice(lo, hi) for lo, hi in zip(posns, posns[1:] + [None])]
print slices
for line in f:
fields = [line[sl].rstrip() for sl in slices]
print fields
A:
Adapted from John Machin's answer
>>> header = "# # # #"
>>> row = "Techy Inn Val NJ"
>>> posns = [i for i, c in enumerate(header) if c == '#']
>>> [row[slice(*x)] for x in zip(posns, posns[1:]+[None])]
['Techy ', 'Inn ', 'Val ', 'NJ']
You can also write the last line like this
>>> [row[i:j] for i,j in zip(posns, posns[1:]+[None])]
For the other example you give in the comments, you just need to have the correct header
>>> header = "# # # #"
>>> row = "Techiyi Iniin Viial NiiJ"
>>> posns = [i for i, c in enumerate(header) if c == '#']
>>> [row[slice(*x)] for x in zip(posns, posns[1:]+[None])]
['Techiyi ', 'Iniin ', 'Viial ', 'NiiJ']
>>>
A:
Ok, to be a little different and to give the generalized solution asked for in the comments, I use the header line instead of slices, plus a generator function. Additionally, I allow the first column to be a comment (by not putting a field name in the first column) and allow multi-character field names instead of only '#'.
The minus point is that one-character fields cannot have header names; they can only be marked with '#' in the header line (which, as in the previous solutions, is always treated as the beginning of a field, even after letters in the header).
sample="""
HOTEL CAT ST DEP ##
Test line Techy Inn Val NJ FT FT
"""
data=sample.splitlines()[1:]
def fields(header,line):
previndex=0
prevchar=''
for index,char in enumerate(header):
if char == '#' or (prevchar != char and prevchar == ' '):
if previndex or header[0] != ' ':
yield line[previndex:index]
previndex=index
prevchar = char
yield line[previndex:]
header,dataline = data
print list(fields(header,dataline))
Output
['Techy Inn ', 'Val ', 'NJ ', 'FT ', 'F', 'T']
One practical use of this is parsing fixed-field-length data without knowing the lengths in advance: take a copy of a data line that has all fields present and no comment, replace the spaces inside field values with something else like '_', and replace single-character field values with '#'.
Header from sample line:
' Techy_Inn Val NJ FT ##'
A:
def parse(your_file):
first_line = your_file.next().rstrip()
slices = []
start = None
for e, c in enumerate(first_line):
if c != '#':
continue
if start is None:
start = e
continue
slices.append(slice(start, e))
start = e
if start is not None:
slices.append(slice(start, None))
for line in your_file:
parsed = [line[s] for s in slices]
yield parsed
A:
f = open('sample.txt', 'r')
pos = [m.span() for m in re.finditer('#\s*', f.next())]
pos[-1] = (pos[-1][0], None)
for line in f:
print [line[i:j].strip() for i, j in pos]
f.close()
A:
How about this?
with open('somefile','r') as source:
line= source.next()
sizes= map( len, line.split("#") )[1:]
positions = [ (sum(sizes[:x]),sum(sizes[:x+1])) for x in xrange(len(sizes)) ]
for line in source:
    fields = [ line[start:end] for start, end in positions ]
Is this what you're looking for?
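Pulling the ideas from the answers above into one self-contained sketch (the header and row literals below are made-up stand-ins, since the question's original spacing did not survive formatting):

```python
def slices_from_header(header):
    # each '#' in the header marks the start of a fixed-width field;
    # the last field runs to the end of the line
    posns = [i for i, c in enumerate(header) if c == '#']
    return [slice(lo, hi) for lo, hi in zip(posns, posns[1:] + [None])]

# made-up stand-ins for the question's data
header = "#      #   #   #"
row    = "Techy  Inn Val NJ"

fields = [row[s].rstrip() for s in slices_from_header(header)]
print(fields)  # ['Techy', 'Inn', 'Val', 'NJ']
```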
| How to parse data in a variable length delimited file? | I have a text file which does not confirm to standards. So I know the (end,start) positions of each column value.
Sample text file :
# # # #
Techy Inn Val NJ
Found the position of # using this code :
1 f = open('sample.txt', 'r')
2 i = 0
3 positions = []
4 for line in f:
5 if line.find('#') > 0:
6 print line
7 for each in line:
8 i += 1
9 if each == '#':
10 positions.append(i)
1 7 11 15 => Positions
So far, so good! Now, how do I fetch the values from each row based on the positions I fetched? I am trying to construct an efficient loop but any pointers are greatly appreciated guys! Thanks (:
| [
"Here's a way to read fixed width fields using regexp\n>>> import re\n>>> s=\"Techy Inn Val NJ\"\n>>> var1,var2,var3,var4 = re.match(\"(.{5}) (.{3}) (.{3}) (.{2})\",s).groups()\n>>> var1\n'Techy'\n>>> var2\n'Inn'\n>>> var3\n'Val'\n>>> var4\n'NJ'\n>>> \n\n",
"Off the top of my head:\nf = open(.......)\nheader = f.... | [
3,
2,
1,
1,
0,
0,
0
] | [] | [] | [
"file",
"python",
"text"
] | stackoverflow_0003472884_file_python_text.txt |
Q:
Python: traverse tree adding html list (ul)
I have this python code that will traverse a tree structure. I am trying to add ul and li tags to the function but I am not very successful. I thought I was able to keep the code clean without too many conditionals but now I ain't so sure anymore.
def findNodes(nodes):
def traverse(ns):
for child in ns:
traverse.level += 1
traverse(child.Children)
traverse.level -= 1
traverse.level = 1
traverse(nodes)
This is the base function I have for traversing my tree structure. The end result should be nested ul and li tags. If need I can post my own not working examples but they might be a little confusing.
Update: Example with parameter
def findNodes(nodes):
def traverse(ns, level):
for child in ns:
level += 1
traverse(child.Children, level)
level -= 1
traverse(nodes, 1)
A:
I removed the unused level parameter. Adding in any sort of text is left as an exercise to the reader.
def findNodes(nodes):
def traverse(ns):
if not ns:
return ''
ret = ['<ul>']
for child in ns:
ret.extend(['<li>', traverse(child.Children), '</li>'])
ret.append('</ul>')
return ''.join(ret)
return traverse(nodes)
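A quick sanity check of the function above, using a minimal made-up Node class (the Children attribute name follows the question's code):

```python
def findNodes(nodes):
    def traverse(ns):
        if not ns:
            return ''
        ret = ['<ul>']
        for child in ns:
            ret.extend(['<li>', traverse(child.Children), '</li>'])
        ret.append('</ul>')
        return ''.join(ret)
    return traverse(nodes)

class Node(object):
    def __init__(self, children=None):
        self.Children = children or []

# a root list with two nodes; the first has one child
tree = [Node([Node()]), Node()]
print(findNodes(tree))  # <ul><li><ul><li></li></ul></li><li></li></ul>
```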
| Python: traverse tree adding html list (ul) | I have this python code that will traverse a tree structure. I am trying to add ul and li tags to the function but I am not very succesful. I though I was able to keep the code clean without to many conditionals but now I ain't so sure anymore.
def findNodes(nodes):
def traverse(ns):
for child in ns:
traverse.level += 1
traverse(child.Children)
traverse.level -= 1
traverse.level = 1
traverse(nodes)
This is the base function I have for traversing my tree structure. The end result should be nested ul and li tags. If need I can post my own not working examples but they might be a little confusing.
Update: Example with parameter
def findNodes(nodes):
def traverse(ns, level):
for child in ns:
level += 1
traverse(child.Children, level)
level -= 1
traverse(nodes, 1)
| [
"I removed the unused level parameter. Adding in any sort of text is left as an exercise to the reader.\ndef findNodes(nodes):\n def traverse(ns):\n if not ns:\n return ''\n\n ret = ['<ul>']\n for child in ns:\n ret.extend(['<li>', traverse(child.Children), '</li>'])\n ... | [
2
] | [] | [] | [
"python"
] | stackoverflow_0003474684_python.txt |
Q:
What is the difference between getiterator() and iter() wrt to lxml
As the question says, what would be the difference between:
x.getiterator() and x.iter(), where x is an ElementTree or an Element? Cause it seems to work for both, I have tried it.
If I am wrong somewhere, correct me please.
A:
The Python documentation for ElementTree states that the getiterator() method has been deprecated starting with version 2.7 and says to use Element.iter(). The lxml API documentation states the same but also mentions that the implementation of getiterator() in lxml diverges from the original ElementTree behavior.
Interestingly enough the documentation also states that "If you want an efficient iterator, use the tree.iter() method instead". Note the word "efficient", which leads me to believe there is most certainly a difference in implementation between getiterator() and iter(), but without checking out the source I can't be 100% sure.
Anyhow, if something has been deprecated it's clear they don't want you to use it.
A:
getiterator is the ElementTree standard spelling for this method; iter is an equivalent lxml-only method that will stop your code from working in ElementTree if you need it, and appears to have no redeeming qualities whatsoever except saving you from typing 7 more characters for the method name;-).
| What is the difference between getiterator() and iter() wrt to lxml | As the question says, what would be the difference between:
x.getiterator() and x.iter(), where x is an ElementTree or an Element? Cause it seems to work for both, I have tried it.
If I am wrong somewhere, correct me please.
| [
"The Python documentation for ElementTree states that the getiterator() method has been deprecated starting with version 2.7 and says to use Element.iter(). The lxml API documentation states the same but also mentions that the implementation of getiterator() in lxml diverges from the original ElementTree behavior. ... | [
8,
0
] | [] | [] | [
"lxml",
"python"
] | stackoverflow_0003077010_lxml_python.txt |
Q:
Reading a SOAP header to a SOAPpy response?
How can I read a SOAP header from a SOAPpy response?
A:
You can't, without modifying SOAPy.
When you call a SOAP method, SOAPy runs its own request function, which returns a valid HTTPResponse object. However, it does not retain that object; within the same method call, it parses the body, and returns the result.
In order to alter this behaviour, you'll want to look at the __call__ method of the Method class in soap.py.
| Reading a SOAP header to a SOAPpy response? | How can I read s SOAP header from a SOAPpy response?
| [
"You can't, without modifying SOAPy.\nWhen you call a SOAP method, SOAPy runs its own request function, which returns a valid HTTPResponse object. However, it does not retain that object; within the same method call, it parses the body, and returns the result. \nIn order to alter this behaviour, you'll want to look... | [
0
] | [] | [] | [
"python",
"soap",
"soappy"
] | stackoverflow_0003474809_python_soap_soappy.txt |
Q:
How do I Handle Database changes and port my data?
For example: for web applications using TurboGears and SQLAlchemy, every time I update my data model, I need to delete my database and recreate it.
Is there an easy way to update the production database?
Do I have to write a custom script that transfers all the production data into a new database model? Or is there an easier way to upgrade a production database?
A:
These database changes are called schema migrations. For SQLAlchemy, sqlalchemy-migrate is the defacto standard. Other ORMs/abstraction layers have similar solutions, e.g. South for Django.
A:
You can ALTER TABLE, I think that's the easiest way.
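To illustrate the ALTER TABLE route, here is a hedged, SQLite-flavoured sketch of a hand-rolled migration — the table and column names are invented for the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO student (name) VALUES ('Joe')")

# the "migration": add a column in place instead of dropping the database
con.execute("ALTER TABLE student ADD COLUMN age INTEGER")
con.execute("UPDATE student SET age = 30 WHERE name = 'Joe'")

rows = con.execute("SELECT name, age FROM student").fetchall()
print(rows)  # [('Joe', 30)]
```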
| How do I Handle Database changes and port my data? | Example, for web applications using Turbogears and SQLAlchemy. Every time I update my data model, I need to delete my database and recreate it.
Is there an easy way to update the production database?
Do I have to write a custom script that transfers all the production data into a new database model? Or is there an easier way to upgrade a production database?
| [
"These database changes are called schema migrations. For SQLAlchemy, sqlalchemy-migrate is the defacto standard. Other ORMs/abstraction layers have similar solutions, e.g. South for Django.\n",
"You can ALTER TABLE, i think that's the easiest way.\n"
] | [
4,
1
] | [] | [] | [
"database",
"python",
"sqlalchemy",
"turbogears"
] | stackoverflow_0003474754_database_python_sqlalchemy_turbogears.txt |
Q:
Is it possible to combine annotations with defer/only in django 1.2.1?
I have two simple models: Book, and Author
Each Book has one Author, linked through a foreignkey.
Things work normally until I try to use defer/only on an annotation:
authors=Author.objects.all().annotate(bookcount=Count('books'))
that works. The query looks like:
select table_author.name, table_author.birthday, COUNT(table_book.id) as bookcount
from table_book left outer join table_author on table_author.id=table_book.author_id
group by table_author.id
so very simple - selecting everything from author, and additionally selecting a count of the books.
But when I do the following, everything changes:
simple=authors.defer('birthday')
now, the simple query looks like this:
select COUNT(table_book.id) as bookcount from table_book left outer join
table_author on table_author.id=table_book.author_id group by table_author.id
and it has completely lost the extra information. What's the deal?
A:
Well, this would seem to be a bug. There's already a ticket, but it hasn't had much attention for a while. Might be worth making a post to the django-developers Google group to chivvy things along.
| Is it possible to combine annotations with defer/only in django 1.2.1? | I have two simple models: Book, and Author
Each Book has one Author, linked through a foreignkey.
Things work normally until I try to use defer/only on an annotation:
authors=Author.objects.all().annotate(bookcount=Count('books'))
that works. The query looks like:
select table_author.name, table_author.birthday, COUNT(table_book.id) as bookcount
from table_book left outer join table_author on table_author.id=table_book.author_id
group by table_author.id
so very simple - selecting everything from author, and additionally selecting a count of the books.
But when I do the following, everything changes:
simple=authors.defer('birthday')
now, the simple query looks like this:
select COUNT(table_book.id) as bookcount from table_book left outer join
table_author on table_author.id=table_book.author_id group by table_author.id
and it has completely lost the extra information. What's the deal?
| [
"Well, this would seem to be a bug. There's already a ticket, but it hasn't had much attention for a while. Might be worth making a post to the django-developers Google group to chivvy things along.\n"
] | [
1
] | [] | [] | [
"django",
"django_queryset",
"python"
] | stackoverflow_0003474728_django_django_queryset_python.txt |
Q:
How to create a floor function with a "step" argument
I would like to create a function floor(number, step), which acts like :
floor(0, 1) = 0
floor(1, 1) = 1
floor(1, 2) = 0
floor(5, 2) = 4
floor(.8, .25) = .75
What is the best way to do something like that?
Thanks.
A:
You could do something like floor( val / step ) * step
A:
what you want is basically the same as
step * (x // step)
isn't it?
A:
Something along the lines of the code below ought to do the job.
def stepped_floor (n, step=1):
return n - (n % step)
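All three answers compute the same thing. A small sketch checking the question's examples (the float case happens to come out exact here, but with arbitrary float steps expect binary-rounding artifacts):

```python
def floor_step(number, step):
    # round down to the nearest multiple of step
    return number - (number % step)

print(floor_step(0, 1))   # 0
print(floor_step(1, 1))   # 1
print(floor_step(1, 2))   # 0
print(floor_step(5, 2))   # 4
print(floor_step(.8, .25))
```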
| How to create a floor function with a "step" argument | I would like to create a function floor(number, step), which acts like :
floor(0, 1) = 0
floor(1, 1) = 1
floor(1, 2) = 0
floor(5, 2) = 4
floor(.8, .25) = .75
What is the better way to do something like that ?
Thanks.
| [
"You could do something like floor( val / step ) * step\n",
"what you want is basically the same as\nstep * (x // step)\nisn't ?\n",
"Something along the lines of the code below ought to do the job.\ndef stepped_floor (n, step=1):\n return n - (n % step)\n\n"
] | [
6,
2,
1
] | [] | [] | [
"algorithm",
"math",
"python"
] | stackoverflow_0003472618_algorithm_math_python.txt |
Q:
Python, Threads, the GIL, and C++
Is there some way to make boost::python control the Python GIL for every interaction with python?
I am writing a project with boost::python. I am trying to write a C++ wrapper for an external library, and control the C++ library with python scripts. I cannot change the external library, only my wrapper program. (I am writing a functional testing application for said external library)
The external library is written in C and uses function pointers and callbacks to do a lot of heavy lifting. It's a messaging system, so when a message comes in, a callback function gets called, for example.
I implemented an observer pattern in my library so that multiple objects could listen to one callback. I have all the major players exported properly and I can control things very well up to a certain point.
The external library creates threads to handle messages, send messages, processing, etc. Some of these callbacks might be called from different processes, and I recently found out that python is not thread safe.
These observers can be defined in python, so I need to be able to call into python and python needs to call into my program at any point.
I setup the object and observer like so
class TestObserver( MyLib.ConnectionObserver ):
def receivedMsg( self, msg ):
print("Received a message!")
ob = TestObserver()
cnx = MyLib.Conection()
cnx.attachObserver( ob )
Then I create a source to send to the connection and the receivedMsg function is called.
So a regular source.send('msg') will go into my C++ app, go to the C library, which will send the message, the connection will get it, then call the callback, which goes back into my C++ library and the connection tries to notify all observers, which at this point is the python class here, so it calls that method.
And of course the callback is called from the connection thread, not the main application thread.
Yesterday everything was crashing, I could not send 1 message. Then after digging around in the Cplusplus-sig archives I learned about the GIL and a couple of nifty functions to lock things up.
So my C++ python wrapper for my observer class looks like this now
struct IConnectionObserver_wrapper : Observers::IConnectionObserver, wrapper<Observers::IConnectionObserver>
{
void receivedMsg( const Message* msg )
{
PyGILState_STATE gstate = PyGILState_Ensure();
if( override receivedMsg_func = this->get_override( "receivedMsg" ) )
receivedMsg_func( msg );
Observers::IConnectionObserver::receivedMsg( msg );
PyGILState_Release( gstate );
}
};
And that WORKS, however, when I try to send over 250 messages like so
for i in range(250):
    source.send('msg')
it crashes again. With the same message and symptoms that it has before,
PyThreadState_Get: no current thread
so I am thinking that this time I have a problem calling into my C++ app, rather than calling into python.
My question is, is there some way to make boost::python handle the GIL itself for every interaction with python? I can not find anything in the code, and its really hard trying to find where the source.send call enters boost_python :(
A:
I found a really obscure post on the mailing list that said to use
PyEval_InitThreads();
in BOOST_PYTHON_MODULE
and that actually seemed to stop the crashes.
It's still a crap shoot whether the program reports all the messages it got or not. If I send 2000, most of the time it says it got 2000, but sometimes it reports significantly less.
I suspect this might be due to the threads accessing my counter at the same time, so I am answering this question because that is a different problem.
To fix just do
BOOST_PYTHON_MODULE(MyLib)
{
PyEval_InitThreads();
class_ stuff
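On the suspected counter race above: if the callbacks increment a Python-side counter from several threads, guarding it with a lock is the usual fix. A pure-Python sketch (the Counter class is invented for illustration):

```python
import threading

class Counter(object):
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def increment(self):
        # without the lock, concurrent read-modify-write can silently drop updates
        with self._lock:
            self.value += 1

counter = Counter()

def worker():
    for _ in range(500):
        counter.increment()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 2000
```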
A:
Don't know about your problem exactly, but take a look at CallPolicies:
http://www.boost.org/doc/libs/1_37_0/libs/python/doc/v2/CallPolicies.html#CallPolicies-concept
You can define new call policies (one call policy is "return_internal_reference" for instance) that will execute some code before and/or after the wrapped C++ function is executed. I have successfully implemented a call policy to automatically release the GIL before executing a C++ wrapped function and acquiring it again before returning to Python, so I can write code like this:
.def( "long_operation", &long_operation, release_gil<>() );
A call policy might help you in writing this code more easily.
A:
I think the best approach is to avoid the GIL and ensure your interaction with python is single-threaded.
I'm designing a boost.python based test tool at the moment and think I'll probably use a producer/consumer queue to dispatch events from the multi-threaded libraries which will be read sequentially by the python thread.
| Python, Threads, the GIL, and C++ | Is there some way to make boost::python control the Python GIL for every interaction with python?
I am writing a project with boost::python. I am trying to write a C++ wrapper for an external library, and control the C++ library with python scripts. I cannot change the external library, only my wrapper program. (I am writing a functional testing application for said external library)
The external library is written in C and uses function pointers and callbacks to do a lot of heavy lifting. Its a messaging system, so when a message comes in, a callback function gets called, for example.
I implemented an observer pattern in my library so that multiple objects could listen to one callback. I have all the major players exported properly and I can control things very well up to a certain point.
The external library creates threads to handle messages, send messages, processing, etc. Some of these callbacks might be called from different processes, and I recently found out that python is not thread safe.
These observers can be defined in python, so I need to be able to call into python and python needs to call into my program at any point.
I setup the object and observer like so
class TestObserver( MyLib.ConnectionObserver ):
def receivedMsg( self, msg ):
print("Received a message!")
ob = TestObserver()
cnx = MyLib.Conection()
cnx.attachObserver( ob )
Then I create a source to send to the connection and the receivedMsg function is called.
So a regular source.send('msg') will go into my C++ app, go to the C library, which will send the message, the connection will get it, then call the callback, which goes back into my C++ library and the connection tries to notify all observers, which at this point is the python class here, so it calls that method.
And of course the callback is called from the connection thread, not the main application thread.
Yesterday everything was crashing, I could not send 1 message. Then after digging around in the Cplusplus-sig archives I learned about the GIL and a couple of nifty functions to lock things up.
So my C++ python wrapper for my observer class looks like this now
struct IConnectionObserver_wrapper : Observers::IConnectionObserver, wrapper<Observers::IConnectionObserver>
{
void receivedMsg( const Message* msg )
{
PyGILState_STATE gstate = PyGILState_Ensure();
if( override receivedMsg_func = this->get_override( "receivedMsg" ) )
receivedMsg_func( msg );
Observers::IConnectionObserver::receivedMsg( msg );
PyGILState_Release( gstate );
}
}
And that WORKS, however, when I try to send over 250 messages like so
for i in range(250)
source.send('msg")
it crashes again. With the same message and symptoms that it has before,
PyThreadState_Get: no current thread
so I am thinking that this time I have a problem calling into my C++ app, rather then calling into python.
My question is, is there some way to make boost::python handle the GIL itself for every interaction with python? I can not find anything in the code, and its really hard trying to find where the source.send call enters boost_python :(
| [
"I found a really obscure post on the mailing list that said to use \nPyEval_InitThreads();\nin BOOST_PYTHON_MODULE\nand that actually seemed to stop the crashes.\nIts still a crap shoot whether it the program reports all the messages it got or not. If i send 2000, most of the time it says it got 2000, but sometim... | [
4,
4,
2
] | [] | [] | [
"boost_python",
"c",
"c++",
"multithreading",
"python"
] | stackoverflow_0001934898_boost_python_c_c++_multithreading_python.txt |
Q:
I have trouble installing the django-socialregistration app!
I'm a Django amateur, and have problems getting django-socialregistration to work. I followed the installation instructions on their website, but for someone like me these instructions are not 100% clear as to what I should be doing. Here is what I've done:
I installed the oauth2 and python-openid packages using pip. I then copied the facebook.py file from the facebook-python-sdk package to my main django app directory. (As I write this, I'm wondering whether this file should be copied to the socialregistration app directory? Does it make a difference?)
I copied the socialregistration directory to my django project's directory.
I added socialregistration to my INSTALLED_APPS setting.
To add socialregistration.urls to my urls.py file, I added the following line (not sure if this is correct, since the instructions don't give details):
(r'^social/', include('socialregistration.urls')),
I added the facebook API key and secret key to my settings
I added socialregistration.auth.FacebookAuth to AUTHENTICATION_BACKENDS.
I added socialregistration.middleware.FacebookMiddleware to MIDDLEWARE_CLASSES.
Finally I added the three facebook tags they give in the instructions to one of my templates.
When I then load my website, I get the following error:
Caught AttributeError while rendering: Please add the django.core.context_processors.request context processors to your settings.TEMPLATE_CONTEXT_PROCESSORS set
So, what can I do? I thought installation would be quite simple, but apparently this is not the case. ANY help would be appreciated!
Oh, BTW, I'm using Django 1.2.1 and Python 2.6.
Thanks!
A:
Please add the django.core.context_processors.request context processors to your settings.
Have you done that?
You'll need to change TEMPLATE_CONTEXT_PROCESSORS to include django.core.context_processors.request.
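For reference, a sketch of the relevant settings.py fragment for Django 1.2 — the default processor list below is reproduced from memory and may differ slightly from your version, so extend your actual defaults rather than copying blindly:

```python
# settings.py (Django 1.2) — keep the defaults and append the request processor
TEMPLATE_CONTEXT_PROCESSORS = (
    "django.contrib.auth.context_processors.auth",
    "django.core.context_processors.debug",
    "django.core.context_processors.i18n",
    "django.core.context_processors.media",
    "django.contrib.messages.context_processors.messages",
    "django.core.context_processors.request",  # the one the error asks for
)
```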
A:
I've found the problem. When my view renders the template, it needs to pass the RequestContext to the template.
return render_to_response('my_template.html', my_data_dictionary, context_instance=RequestContext(request))
Source: http://lincolnloop.com/blog/2008/may/10/getting-requestcontext-your-templates/
| I have trouble installing the django-socialregistration app! | I'm a Django amateur, and have problems getting django-registration to work. I followed the installation instructions on their website, but for someone like me these instructions are not 100% clear as to what I should be doing. Here is what I've done:
I installed the oauth2 and python-openid packages using pip. I then copied the facebook.py file from the facebook-python-sdk package to my main django app directory. (As I write this, I'm wondering whether this file should be copied to the socialregistration app directory? Does it make a difference?)
I copied the socialregistration directory to my django project's directory.
I added socialresgitration to my INSTALLED_APPS setting.
To add socialregistration.urls to my urls.py file, I added the following line (not sure if this is correct, since the instructions don't give details):
(r'^social/', include('socialregistration.urls')),
I added the facebook API key and secret key to my settings
I added socialregistration.auth.FacebookAuth to AUTHENTICATION_BACKENDS.
I added socialregistration.middleware.FacebookMiddleware to MIDDLEWARE_CLASSES.
Finally I added the three facebook tags they give in the instructions to one of my templates.
When I then load my website, I get the following error:
Caught AttributeError while rendering: Please add the django.core.context_processors.request context processors to your settings.TEMPLATE_CONTEXT_PROCESSORS set
So, what can I do? I thought installation would be quite simple, but apparently this is not the case. ANY help would be appreciated!
Oh, BTW, I'm using Django 1.2.1 and Python 2.6.
Thanks!
| [
"\nPlease add the django.core.context_processors.request context processors to your settings.\n\nHave you done that?\nYou'll need to change TEMPLATE_CONTEXT_PROCESSORS to include django.core.context_processors.request.\n",
"I've found the problem. When my view renders the template, it needs to pass the RequestCon... | [
3,
3
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003465371_django_python.txt |
Q:
sqlalchemy Oracle REF CURSOR
I am using sqlalchemy for connection pooling only (need to call existing procs) and want to return a REF CURSOR which is an out parameter.
There seems to be no cursor in sqlalchemy to do this.
Any advice greatly appreciated.
A:
Gut feel - you may have to dive down a lower level than SQLAlchemy, perhaps to the underlying cx_oracle classes.
From an answer provided by Gerhard Häring on another forum :
import cx_Oracle
con = cx_Oracle.connect("me/secret@tns")
cur = con.cursor()
outcur = con.cursor()
cur.execute("""
BEGIN
MyPkg.MyProc(:cur);
END;""", cur=outcur)
for row in outcur:
print row
I would presume that as SQLAlchemy uses cx_oracle under the hood there should be some way to use the same pooled connections.
Is there any option to wrap your function returning the REF CURSOR in a view on the Oracle side? (Provided the shape of the REF CURSOR doesn't change, and you can somehow get the right parameters into your function - possibly by sticking them in as session variables, or in a temp table - this should work; I've used this approach to retrieve data from a REF CURSOR function into a language that only supports a limited subset of Oracle features.)
| sqlalchemy Oracle REF CURSOR | I am using sqlalchemy for connection pooling only (need to call existing procs) and want to return a REF CURSOR which is an out parameter.
There seems to be no cursor in sqlalchemy to do this.
Any advice greatly appreciated.
| [
"Gut feel - you may have to dive down a lower level than SQLAlchemy, perhaps to the underlying cx_oracle classes.\nFrom an answer provided by Gerhard Häring on another forum :\nimport cx_Oracle \n\ncon = cx_Oracle.connect(\"me/secret@tns\") \ncur = con.cursor() \noutcur = con.cursor() \ncur.execute(\"\"\" \nBEGIN \... | [
0
] | [] | [] | [
"oracle",
"python",
"sqlalchemy"
] | stackoverflow_0003474152_oracle_python_sqlalchemy.txt |
Q:
Design Golf: modeling an Address in appengine, aka an AddressProperty?
Today I was refactoring some code and revisited an old friend, an Address class (see below). It occurred to me that, in our application, we don't do anything special with addresses-- no queries, only lightweight validation and frequent serialization to JSON. The only "useful" properties from the developer point-of-view are the label and person.
So, I considered refactoring the Address model to use a custom AddressProperty (see below), which strikes me as a good thing to do, but off-the-top I don't see any compelling advantages.
Which method would you choose, why and what tradeoffs guide that decision?
# a lightweight Address for form-based CRUD
# many-to-one relationship with Person objects
class Address(db.Model):
label=db.StringProperty()
person=db.ReferenceProperty(collection_name='addresses')
address1=db.StringProperty()
address2=db.StringProperty()
city=db.StringProperty()
zipcode=db.StringProperty()
# an alternate representation -- worthwhile? tradeoffs?
class Address(db.Model):
label=db.StringProperty()
person=db.ReferenceProperty(collection_name='addresses')
details=AddressProperty() # like db.PostalAddressProperty, more methods
A:
Given that you don't need to query on addresses, and given they tend to be fairly small (as opposed to, say, a large binary blob), I would suggest going with the latter. It'll save space and time (fetching it) - the only real downside is that you have to implement the property yourself.
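To illustrate the latter option, the packing a custom AddressProperty would do can be sketched with nothing but the standard library. The pack_address / unpack_address names below are hypothetical helpers, standing in for what the property's get_value_for_datastore / make_value_from_datastore hooks would do:

```python
import json

# Hypothetical helpers sketching what a custom AddressProperty would do
# internally: collapse the detail fields into one stored string, and
# expand that string back into a dict on read.
def pack_address(address1, address2, city, zipcode):
    return json.dumps({"address1": address1, "address2": address2,
                       "city": city, "zipcode": zipcode})

def unpack_address(blob):
    return json.loads(blob)

blob = pack_address("1 Main St", "", "Springfield", "12345")
print(unpack_address(blob)["city"])  # Springfield
```

Since the addresses are never queried, storing one opaque string like this costs nothing in functionality and saves per-field index overhead.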
A:
If you wanted to be a little different you could always store the data in one table as structures and then have another table for lookups and metadata
| Design Golf: modeling an Address in appengine, aka an AddressProperty? | Today I was refactoring some code and revisited an old friend, an Address class (see below). It occurred to me that, in our application, we don't do anything special with addresses-- no queries, only lightweight validation and frequent serialization to JSON. The only "useful" properties from the developer point-of-view are the label and person.
So, I considered refactoring the Address model to use a custom AddressProperty (see below), which strikes me as a good thing to do, but off-the-top I don't see any compelling advantages.
Which method would you choose, why and what tradeoffs guide that decision?
# a lightweight Address for form-based CRUD
# many-to-one relationship with Person objects
class Address(db.Model):
label=db.StringProperty()
person=db.ReferenceProperty(collection_name='addresses')
address1=db.StringProperty()
address2=db.StringProperty()
city=db.StringProperty()
zipcode=db.StringProperty()
# an alternate representation -- worthwhile? tradeoffs?
class Address(db.Model):
label=db.StringProperty()
person=db.ReferenceProperty(collection_name='addresses')
details=AddressProperty() # like db.PostalAddressProperty, more methods
| [
"Given that you don't need to query on addresses, and given they tend to be fairly small (as opposed to, say, a large binary blob), I would suggest going with the latter. It'll save space and time (fetching it) - the only real downside is that you have to implement the property yourself.\n",
"If you wanted to be ... | [
1,
0
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0003471798_google_app_engine_python.txt |
Q:
Static variable inheritance in Python
I'm writing Python scripts for Blender for a project, but I'm pretty new to the language. Something I am confused about is the usage of static variables. Here is the piece of code I am currently working on:
class panelToggle(bpy.types.Operator):
active = False
def invoke(self, context, event):
self.active = not self.active
return{'FINISHED'}
class OBJECT_OT_openConstraintPanel(panelToggle):
bl_label = "openConstraintPanel"
bl_idname = "openConstraintPanel"
The idea is that the second class should inherit the active variable and the invoke method from the first, so that calling OBJECT_OT_openConstraintPanel.invoke() changes OBJECT_OT_openConstraintPanel.active. Using self as I did above won't work however, and neither does using panelToggle instead. Any idea of how I go about this?
A:
use type(self) for access to class attributes
>>> class A(object):
var = 2
def write(self):
print type(self).var
>>> class B(A):
pass
>>> B().write()
2
>>> B.var = 3
>>> B().write()
3
>>> A().write()
2
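Applied to the original question, the same idea works for toggling the flag from a method. This standalone sketch drops the Blender bpy machinery and shortens the names for illustration:

```python
class PanelToggle(object):
    active = False

    def invoke(self):
        # type(self) resolves to the *subclass* at runtime, so the
        # toggled flag is stored on OpenConstraintPanel itself
        cls = type(self)
        cls.active = not cls.active

class OpenConstraintPanel(PanelToggle):
    pass

OpenConstraintPanel().invoke()
print(OpenConstraintPanel.active)  # True
print(PanelToggle.active)          # False, the base class is untouched
```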
A:
You can access active through the class it belongs to:
if panelToggle.active:
# do something
If you want to access the class variable from a method, you could write:
def am_i_active(self):
""" This method will access the right *class* variable by
looking at its own class type first.
"""
if self.__class__.active:
print 'Yes, sir!'
else:
print 'Nope.'
A working example can be found here: http://gist.github.com/522619
The self variable (named self by convention) is the current instance of the class, implicitly passed but explicitly received.
class A(object):
answer = 42
def add(self, a, b):
""" ``self`` is received explicitely. """
return A.answer + a + b
a = A()
print a.add(1, 2)  # The instance ``a`` is passed implicitly.
# => 45
print a.answer
# => print 42
| Static variable inheritance in Python | I'm writing Python scripts for Blender for a project, but I'm pretty new to the language. Something I am confused about is the usage of static variables. Here is the piece of code I am currently working on:
class panelToggle(bpy.types.Operator):
active = False
def invoke(self, context, event):
self.active = not self.active
return{'FINISHED'}
class OBJECT_OT_openConstraintPanel(panelToggle):
bl_label = "openConstraintPanel"
bl_idname = "openConstraintPanel"
The idea is that the second class should inherit the active variable and the invoke method from the first, so that calling OBJECT_OT_openConstraintPanel.invoke() changes OBJECT_OT_openConstraintPanel.active. Using self as I did above won't work however, and neither does using panelToggle instead. Any idea of how I go about this?
| [
"use type(self) for access to class attributes\n>>> class A(object):\n var = 2\n def write(self):\n print type(self).var\n>>> class B(A):\n pass\n>>> B().write()\n2\n>>> B.var = 3\n>>> B().write()\n3\n>>> A().write()\n2\n\n",
"You can access active through the class it belongs to:\nif panelToggle.active:\n #... | [
22,
4
] | [] | [] | [
"blender",
"inheritance",
"python",
"static",
"syntax"
] | stackoverflow_0003475488_blender_inheritance_python_static_syntax.txt |
Q:
how to return the element tree instance
I want to generate an XML file. I have written an xml_generator method. When the /xxx URL is hit, I call this generator function. How should I return the result? Returning the function instance creates an error.
A:
If you are returning the function object itself when you actually mean to return its result, that will raise an error; i.e. you wrote my_function instead of my_function().
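In other words, call the generator and return the serialized tree. A minimal standalone sketch, where xml_generator and the element names are made up for illustration:

```python
import xml.etree.ElementTree as ET

def xml_generator():
    root = ET.Element("students")
    ET.SubElement(root, "student", name="Joe")
    # Return the serialized string, not the function object
    return ET.tostring(root, encoding="unicode")

print(xml_generator())  # <students><student name="Joe" /></students>
```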
| how to return the element tree instance | I want to generate an XML file. I have written an xml_generator method. When the /xxx URL is hit, I call this generator function. How should I return the result? Returning the function instance creates an error.
| [
"if you meant you're returning a function, it will be erroneous when you're actually trying to return the results of the function. I.e.: my_function instead of my_function().\n"
] | [
3
] | [] | [] | [
"elementtree",
"python"
] | stackoverflow_0003475563_elementtree_python.txt |
Q:
Difference between save() and put()?
What is the difference between Mymodel.save() and Mymodel.put() in appengine with python?
I know that save is used in django but does it work with appengine models too?
A:
save() is a (deprecated) alias for put(). They work exactly equivalently - in fact, they're the same function!
| Difference between save() and put()? | What is the difference between Mymodel.save() and Mymodel.put() in appengine with python?
I know that save is used in django but does it work with appengine models too?
| [
"save() is a (deprecated) alias for put(). They work exactly equivalently - in fact, they're the same function!\n"
] | [
5
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0003476152_google_app_engine_python.txt |
Q:
append columns of data
I have tab delimited data that I am exporting a select few columns into another file. I have:
a b c d
1 2 3 4
5 6 7 8
9 10 11 12
and I get:
b, d
b, d
2, 4
b, d
2, 4
6, 8
b, d
2, 4
6, 8
10, 12
......
I want:
b, d
2, 4
6, 8
10, 12
My code is
f=open('data.txt', 'r')
f1=open('newdata.txt','w')
t=[]
for line in f.readlines():
line =line.split('\t')
t.append('%s,%s\n' %(line[0], line[3]))
f1.writelines(t)
What am I doing wrong??? Why is it repeating?
Please help
Thanks!!
A:
The indentation is wrong so you are writing the entire array t on every iteration instead of only at the end. Change it to this:
t=[]
for line in f.readlines():
line = line.split('\t')
t.append('%s,%s\n' % (line[0], line[3]))
f1.writelines(t)
Alternatively you could write the lines one at a time instead of waiting until the end, then you don't need the array t at all.
for line in f.readlines():
line = line.split('\t')
s = '%s,%s\n' % (line[0], line[3])
f1.write(s)
A:
As already mentioned, the last line is incorrectly indented. On top of that, you are making things hard and error prone. You don't need the t list, and you don't need to use f.readlines().
Another problem with your code is that your line[3] will end with a newline (because readlines() and friends leave the newline at the end of the line), and you are adding another newline in the format '%s,%s\n' ... this would have produced double spacing on your output file, but you haven't mentioned that.
Also you say that you want b, d in the first output line, and you say that you get b, d -- however your code says '%s,%s\n' %(line[0], line[3]) which would produce a,d. Note TWO differences: (1) space missing (2) a instead of b.
Overall: you say that you get b, d\n but the code that you show would produce a,d\n\n. In future, please show code and output that correspond with each other. Use copy/paste; don't type from memory.
Try this:
f = open('data.txt', 'r')
f1 = open('newdata.txt','w')
for line in f: # reading one line at a time
fields = line.rstrip('\n').split('\t')
# ... using rstrip to remove the newline.
# Re-using the name `line` as you did makes your script less clear.
f1.write('%s,%s\n' % (fields[0], fields[3]))
# Change the above line as needed to make it agree with your desired output.
f.close()
f1.close()
# Always close files when you have finished with them,
# especially files that you have written to.
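The csv module can also handle the tab splitting for you. A sketch on an in-memory copy of the sample data; note it picks columns 1 and 3 (b and d), matching the desired output rather than the original code's line[0]:

```python
import csv
import io

tsv = "a\tb\tc\td\n1\t2\t3\t4\n5\t6\t7\t8\n9\t10\t11\t12\n"

# csv.reader strips the newlines and splits each row for us
reader = csv.reader(io.StringIO(tsv), delimiter="\t")
rows = ["%s,%s" % (row[1], row[3]) for row in reader]
print("\n".join(rows))  # b,d / 2,4 / 6,8 / 10,12 on separate lines
```

For a real file, replace io.StringIO(tsv) with open('data.txt') and write each joined row to the output file.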
| append columns of data | I have tab delimited data that I am exporting a select few columns into another file. I have:
a b c d
1 2 3 4
5 6 7 8
9 10 11 12
and I get:
b, d
b, d
2, 4
b, d
2, 4
6, 8
b, d
2, 4
6, 8
10, 12
......
I want:
b, d
2, 4
6, 8
10, 12
My code is
f=open('data.txt', 'r')
f1=open('newdata.txt','w')
t=[]
for line in f.readlines():
line =line.split('\t')
t.append('%s,%s\n' %(line[0], line[3]))
f1.writelines(t)
What am I doing wrong??? Why is it repeating?
Please help
Thanks!!
| [
"The indentation is wrong so you are writing the entire array t on every iteration instead of only at the end. Change it to this:\nt=[]\nfor line in f.readlines():\n line = line.split('\\t')\n t.append('%s,%s\\n' % (line[0], line[3]))\nf1.writelines(t)\n\nAlternatively you could write the lines one at a time ... | [
4,
1
] | [] | [] | [
"python"
] | stackoverflow_0003476161_python.txt |
Q:
How do I cache a list/dictionary in Pylons?
On a website I'm making, there's a section that hits the database pretty hard. Harder than I want. The data that's being retrieved is all very static. It will rarely change. So I want to cache it.
I came across http://wiki.pylonshq.com/display/pylonsdocs/Caching+in+Templates+and+Controllers and had a good read, and have been making use of template caching using:
return render('tmpl.html', cache_expire='never')
That works great until I modify the HTML. The only way I've found to delete the cache is to remove the cache_expire parameter from render() and delete the cache folder. But, meh, it works.
What I want to be able to, however, is cache Lists, Tuples and Dictionaries. From reading the above wiki page, it seems this isn't possible?
I want to be able to do something like:
data = [i for i in range(0, 2000000)]
mycache = cache.get_cache('cachename')
value = mycache.get(key='dataset1', list=data, type='memory', expiretime='3600')
print value
Allowing me to do some CPU intensive work (list generation, in this example) and then cache it.
Can this be done with Pylons?
A:
As an alternative to a traditional cache, you can use app_globals variables. Load the data into a variable once on server startup, then use it in your actions or directly in templates.
http://pylonsbook.com/en/1.1/exploring-pylons.html#app-globals-object
You can also code an action to update this global variable through the admin interface or on other events.
A:
Why not use memcached?
Look at this question on SO on how to use it with pylons: Pylons and Memcached
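Beaker (which Pylons uses under the hood) supports exactly this pattern via a createfunc argument that is only invoked on a cache miss; check the Beaker docs for the exact signature. The idea, sketched with a stdlib-only stand-in (ExpiringCache is hypothetical, not the Beaker API):

```python
import time

class ExpiringCache(object):
    """Minimal in-process stand-in for the get_cache()/createfunc pattern."""
    def __init__(self):
        self._store = {}

    def get(self, key, createfunc, expiretime):
        now = time.time()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < expiretime:
            return hit[1]          # fresh entry: skip the expensive work
        value = createfunc()       # miss or stale: recompute and store
        self._store[key] = (now, value)
        return value

cache = ExpiringCache()
data = cache.get('dataset1', createfunc=lambda: list(range(5)), expiretime=3600)
print(data)  # [0, 1, 2, 3, 4]
```

The CPU-intensive list generation then only runs once per expiry window, whatever backend (memory, memcached, app_globals) sits behind the cache.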
| How do I cache a list/dictionary in Pylons? | On a website I'm making, there's a section that hits the database pretty hard. Harder than I want. The data that's being retrieved is all very static. It will rarely change. So I want to cache it.
I came across http://wiki.pylonshq.com/display/pylonsdocs/Caching+in+Templates+and+Controllers and had a good read, and have been making use of template caching using:
return render('tmpl.html', cache_expire='never')
That works great until I modify the HTML. The only way I've found to delete the cache is to remove the cache_expire parameter from render() and delete the cache folder. But, meh, it works.
What I want to be able to, however, is cache Lists, Tuples and Dictionaries. From reading the above wiki page, it seems this isn't possible?
I want to be able to do something like:
data = [i for i in range(0, 2000000)]
mycache = cache.get_cache('cachename')
value = mycache.get(key='dataset1', list=data, type='memory', expiretime='3600')
print value
Allowing me to do some CPU intensive work (list generation, in this example) and then cache it.
Can this be done with Pylons?
| [
"As alternative of traditional cache you can use app globals variables. Once on server startup load data to variable and then use data in you actions or direct in templates.\nhttp://pylonsbook.com/en/1.1/exploring-pylons.html#app-globals-object\nAlso you can code some action to update this global variable through t... | [
1,
1
] | [] | [] | [
"caching",
"pylons",
"python"
] | stackoverflow_0003473864_caching_pylons_python.txt |
Q:
Is there any way to create an app for python3 script?
Py2app will create the app for python2. But for python3?
Has anyone succeeded in creating an app for a Python 3 script?
Any clue would be helpful for my script in creating that.
A:
cx_freeze
Unlike these two tools, cx_Freeze is
cross platform and should work on any
platform that Python itself works on.
It requires Python 2.3 or higher since
it makes use of the zip import
facility which was introduced in that
version.
A:
Py2app can also create an app for a Python 3 script.
A:
I have just written a blog entry about app builders a few days ago: http://publicfields.blogspot.com/2010/08/mac-os-application-builders-for-python.html
My personal opinion is the same as katriealex's: cx_freeze is the best solution at the moment. Although I did not manage to find a way of bundling binaries cx_freeze produces into a single app. I'd be happy to learn the way to do this and update the post accordingly )
| Is there any way to create an app for python3 script? | Py2app will create the app for python2. But for python3?
Has anyone succeeded in creating an app for a Python 3 script?
Any clue would be helpful for my script in creating that.
| [
"cx_freeze\n\nUnlike these two tools, cx_Freeze is\n cross platform and should work on any\n platform that Python itself works on.\n It requires Python 2.3 or higher since\n it makes use of the zip import\n facility which was introduced in that\n version.\n\n",
"Py2app also create app for python3 script.\n"... | [
1,
0,
0
] | [] | [] | [
"macos",
"python",
"python_3.x"
] | stackoverflow_0003412796_macos_python_python_3.x.txt |
Q:
Is there a python equivalent of the prefuse visualization toolkit?
The prefuse visualization toolkit is pretty nice, but for Java. I was wondering if there was something similar for python. My primary interest is being able to navigate dynamic graphs.
A:
I know this is not exactly python, but you could use prefuse in python through jython
Something along the lines of:
Add prefuse to your path:
export JYTHONPATH=$JYTHONPATH:prefuse.jar
and
>>> import prefuse
from your jython machinery
this guy has an example of using prefuse from jython here
A:
You might want to check out SUMMON, a visualization system that uses python but handles fairly large data sets. There's an impressive video of visualizing and navigating a massive tree. (Can't post the link because I'm a first time poster. It's on the SUMMON front page.)
A:
If you're using a Mac, check out NodeBox. One extension it offers is a graph library that looks pretty good. Poke around in the NodeBox gallery some to find something similar to your problem and it should have some helpful links.
A:
This is well after OP, but just in case:
pydot. Allows generation & rendering of graphs. If you need graph algorithms (transitive closure etc.) also look at pygraphlib which extends and integrates pydot.
Note that neither allows interactive editing of the rendered diagram. They both use graphviz to generate output.
A:
Note that prefuse now has the flare package which uses flash.
Connect that to a Python backend via web2py and you've got a great web app (just an idea).
A:
You could try using prefuse with JPype, if you can't find a suitable replacement.
| Is there a python equivalent of the prefuse visualization toolkit? | The prefuse visualization toolkit is pretty nice, but for Java. I was wondering if there was something similar for python. My primary interest is being able to navigate dynamic graphs.
| [
"I know this is not exactly python, but you could use prefuse in python through jython\nSomething along the lines of:\nAdd prefuse to your path:\nexport JYTHONPATH=$JYTHONPATH:prefuse.jar\nand\n>>> import prefuse\nfrom your jython machinery\nthis guy has an example of using prefuse from jython here\n",
"You might... | [
6,
3,
2,
1,
0,
0
] | [
"MayaVi\n"
] | [
-1
] | [
"prefuse",
"python",
"visualization"
] | stackoverflow_0000591839_prefuse_python_visualization.txt |
Q:
Intelligent date range parsing of human input?
Has anyone come across a script / command-line app written in any language that handles the parsing of human-entered dates well? I'd love to be able to parse, for example:
"3 to 4 weeks"
"2 - 3 days"
"3 weeks to 2 months"
A:
The Chronic gem for ruby will allow you to express dates in a natural form.
Some examples of supported forms (from the documentation)
thursday
november
summer
friday 13:00
mon 2:35
4pm
yesterday at 4:00
last friday at 20:00
last week tuesday
tomorrow at 6:45pm
afternoon yesterday
thursday last week
3 years ago
5 months before now
7 hours ago
7 days from now
1 week hence
in 3 hours
1 year ago tomorrow
I have not used it so can not comment on its performance.
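Given the Python tag on this question, the simple two-number cases ("3 to 4 weeks", "2 - 3 days") can be handled with a small regex. parse_range and its rough day counts below are illustrative only; a form like "3 weeks to 2 months", with a unit on each side, would need an extended pattern:

```python
import re

UNITS = {"day": 1, "week": 7, "month": 30, "year": 365}  # rough day counts

def parse_range(text):
    """Parse strings like '3 to 4 weeks' into a (min_days, max_days) tuple."""
    m = re.match(r"\s*(\d+)\s*(?:-|to)\s*(\d+)\s*(day|week|month|year)s?", text)
    if not m:
        return None
    low, high, unit = int(m.group(1)), int(m.group(2)), m.group(3)
    return low * UNITS[unit], high * UNITS[unit]

print(parse_range("3 to 4 weeks"))  # (21, 28)
print(parse_range("2 - 3 days"))    # (2, 3)
```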
A:
Look here:
Is there a natural language parser for dates/times in ColdFusion?
| Intelligent date range parsing of human input? | Has anyone come across a script / command-line app written in any language that handles the parsing of human-entered dates well? I'd love to be able to parse, for example:
"3 to 4 weeks"
"2 - 3 days"
"3 weeks to 2 months"
| [
"The Chronic gem for ruby will allow you to express dates in a natural form.\nSome examples of supported forms (from the documentation)\n\n thursday\n november\n summer\n friday 13:00\n mon 2:35\n 4pm\n yesterday at 4:00\n last friday at 20:00\n last week tuesday\n tomorrow at 6:45pm\n afternoon yesterda... | [
4,
0
] | [] | [] | [
"coldfusion",
"javascript",
"php",
"python",
"ruby"
] | stackoverflow_0003473830_coldfusion_javascript_php_python_ruby.txt |