Q:
More pythonic way to iterate
I am using a module that is part of a commercial software API. The good news is there is a Python module - the bad news is that it's pretty unpythonic.
To iterate over rows, the following syntax is used:
cursor = gp.getcursor(table)
row = cursor.next()
while row:
    # do something with row
    row = cursor.next()
What is the most pythonic way to deal with this situation? I have considered creating a first class function/generator and wrapping calls to a for loop in it:
def cursor_iterator(cursor):
    row = cursor.next()
    while row:
        yield row
        row = cursor.next()
[...]
cursor = gp.getcursor(table)
for row in cursor_iterator(cursor):
    # do something with row
This is an improvement, but feels a little clumsy. Is there a more pythonic approach? Should I create a wrapper class around the table type?
A:
Assuming that one of Next and next is a typo and they're both the same, you can use the not-so-well-known variant of the built-in iter function:
for row in iter(cursor.next, None):
<do something>
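As a self-contained illustration (Python 3 syntax here; a plain list stands in for the API's cursor, and fetch plays the role of cursor.next):

```python
# Two-argument iter(callable, sentinel) calls the callable repeatedly
# and stops as soon as it returns the sentinel value.
rows = iter(['row1', 'row2', None])
fetch = lambda: next(rows)  # stand-in for cursor.next; returns None when exhausted

collected = [row for row in iter(fetch, None)]
print(collected)  # ['row1', 'row2']
```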
A:
You could create a custom wrapper like:
class Table(object):
    def __init__(self, gp, table):
        self.gp = gp
        self.table = table
        self.cursor = None

    def __iter__(self):
        self.cursor = self.gp.getcursor(self.table)
        return self

    def next(self):
        n = self.cursor.next()
        if not n:
            raise StopIteration()
        return n
and then:
for row in Table(gp, table):
    # do something with row
See also: Iterator Types
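A runnable sketch of the same iterator pattern, with a fake cursor standing in for the commercial API (all names here are illustrative; Python 3 spells the hook __next__ rather than next):

```python
class FakeCursor:
    """Stand-in for the API's cursor: hands out rows, then None when exhausted."""
    def __init__(self, rows):
        self._rows = list(rows)

    def next(self):
        return self._rows.pop(0) if self._rows else None


class Table:
    def __init__(self, rows):
        self._rows = rows

    def __iter__(self):
        self._cursor = FakeCursor(self._rows)
        return self

    def __next__(self):  # would be named next in Python 2
        n = self._cursor.next()
        if not n:
            raise StopIteration
        return n


print(list(Table(['a', 'b'])))  # ['a', 'b']
```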
A:
The best way is to use a Python iterator interface around the table object, imho:
class Table(object):
    def __init__(self, table):
        self.table = table

    def rows(self):
        cursor = gp.get_cursor(self.table)
        row = cursor.next()
        while row:
            yield row
            row = cursor.next()
Now you just call:
my_table = Table(t)
for row in my_table.rows():
    # do stuff with row
It's very readable, in my opinion.
Tags: iterator, python, refactoring | Source: stackoverflow_0002980031_iterator_python_refactoring.txt
Q:
Django: saving pickled object
I have a large dictionary I'd like to save. I have pickled it using cPickle.dumps and saved the result into a TextField. When trying to retrieve it (cPickle.loads) I get the following error:
loads() argument 1 must be string, not unicode
Does anybody have any experience in serializing python objects and storing them in a DB using Django? Thanks in advance.
A:
The best advice you're probably going to get is to use json rather than pickle not only for security reasons, but also because it's simply a string which can easily be read and modified if necessary.
edit: in response to the actual problem you're having -
pickle.loads(str(textfield))
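A minimal, framework-free sketch of the json route the answer recommends (no Django here; a plain variable stands in for the TextField value):

```python
import json

data = {'account': 1234, 'active': True}
stored = json.dumps(data)      # a plain str, safe to keep in a TextField
restored = json.loads(stored)  # decoding works even if the DB returns unicode
print(restored == data)  # True
```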
Tags: django, python | Source: stackoverflow_0002980092_django_python.txt
Q:
Efficient way to access a mapping of identifiers in Python
I am writing an app to do a file conversion, and part of that is replacing old account numbers with new account numbers.
Right now I have a CSV file mapping the old and new account numbers, with around 30K records. I read this in and store it as a dict, and when writing the new file I grab the new account number from the dict by key.
My question is what is the best way to do this if the CSV file increases to 100K+ records?
Would it be more efficient to convert the account mappings from a CSV to a sqlite database rather than storing them as a dict in memory?
A:
As long as they will all fit in memory, a dict will be the most efficient solution. It's also a lot easier to code. 100k records should be no problem on a modern computer.
You are right that switching to an SQLite database is a good choice when the number of records gets very large.
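A small sketch of the in-memory dict approach (the column names and sample rows are made up; in practice the StringIO would be an open file):

```python
import csv
import io

# Stand-in for open('mapping.csv'): two old->new account number rows.
csv_text = "old,new\n1001,9001\n1002,9002\n"
mapping = {row['old']: row['new'] for row in csv.DictReader(io.StringIO(csv_text))}

print(mapping['1001'])  # 9001
```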
Tags: csv, database, dictionary, python, sqlite | Source: stackoverflow_0002980257_csv_database_dictionary_python_sqlite.txt
Q:
Python dictionary key missing
I thought I'd put together a quick script to consolidate the CSS rules I have distributed across multiple CSS files, then I can minify it.
I'm new to Python but figured this would be a good exercise to try a new language. My main loop isn't parsing the CSS as I thought it would.
I populate a list with selectors parsed from the CSS files to return the CSS rules in order. Later in the script, the list contains an element that is not found in the dictionary.
for line in self.file.readlines():
    if self.hasSelector(line):
        selector = self.getSelector(line)
        if selector not in self.order:
            self.order.append(selector)
    elif selector and self.hasProperty(line):
        # rules.setdefault(selector,[]).append(self.getProperty(line))
        property = self.getProperty(line)
        properties = [] if selector not in rules else rules[selector]
        if property not in properties:
            properties.append(property)
        rules[selector] = properties
        # print "%s :: %s" % (selector, "".join(rules[selector]))
return rules
Error encountered:
$ css-combine combined.css test1.css test2.css
Traceback (most recent call last):
  File "css-combine", line 108, in <module>
    c.run(outfile, stylesheets)
  File "css-combine", line 64, in run
    [(selector, rules[selector]) for selector in parser.order],
KeyError: 'p'
Swap the inputs:
$ css-combine combined.css test2.css test1.css
Traceback (most recent call last):
  File "css-combine", line 108, in <module>
    c.run(outfile, stylesheets)
  File "css-combine", line 64, in run
    [(selector, rules[selector]) for selector in parser.order],
KeyError: '#header_.title'
I've done some quirky things in the code, like replacing spaces with underscores in dictionary key names in case that was the issue - maybe this is a benign precaution? Depending on the order of the inputs, a different key cannot be found in the dictionary.
The script:
#!/usr/bin/env python
import optparse
import re

class CssParser:
    def __init__(self):
        self.file = False
        self.order = []  # store rules assignment order

    def parse(self, rules={}):
        if self.file == False:
            raise IOError("No file to parse")
        selector = False
        for line in self.file.readlines():
            if self.hasSelector(line):
                selector = self.getSelector(line)
                if selector not in self.order:
                    self.order.append(selector)
            elif selector and self.hasProperty(line):
                # rules.setdefault(selector,[]).append(self.getProperty(line))
                property = self.getProperty(line)
                properties = [] if selector not in rules else rules[selector]
                if property not in properties:
                    properties.append(property)
                rules[selector] = properties
                # print "%s :: %s" % (selector, "".join(rules[selector]))
        return rules

    def hasSelector(self, line):
        return True if re.search("^([#a-z,\.:\s]+){", line) else False

    def getSelector(self, line):
        s = re.search("^([#a-z,:\.\s]+){", line).group(1)
        return "_".join(s.strip().split())

    def hasProperty(self, line):
        return True if re.search("^\s?[a-z-]+:[^;]+;", line) else False

    def getProperty(self, line):
        return re.search("([a-z-]+:[^;]+;)", line).group(1)

class Consolidator:
    """Class to consolidate CSS rule attributes"""
    def run(self, outfile, files):
        parser = CssParser()
        rules = {}
        for file in files:
            try:
                parser.file = open(file)
                rules = parser.parse(rules)
            except IOError:
                print "Cannot read file: " + file
            finally:
                parser.file.close()
        self.serialize(
            [(selector, rules[selector]) for selector in parser.order],
            outfile
        )

    def serialize(self, rules, outfile):
        try:
            f = open(outfile, "w")
            for rule in rules:
                f.write(
                    "%s {\n\t%s\n}\n\n" % (
                        " ".join(rule[0].split("_")), "\n\t".join(rule[1])
                    )
                )
        except IOError:
            print "Cannot write output to: " + outfile
        finally:
            f.close()

def init():
    op = optparse.OptionParser(
        usage="Usage: %prog [options] <output file> <stylesheet1> " +
              "<stylesheet2> ... <stylesheetN>",
        description="Combine CSS rules spread across multiple " +
                    "stylesheets into a single file"
    )
    opts, args = op.parse_args()
    if len(args) < 3:
        if len(args) == 1:
            print "Error: No input files specified.\n"
        elif len(args) == 2:
            print "Error: One input file specified, nothing to combine.\n"
        op.print_help()
        exit(-1)
    return [opts, args]

if __name__ == '__main__':
    opts, args = init()
    outfile, stylesheets = [args[0], args[1:]]
    c = Consolidator()
    c.run(outfile, stylesheets)
Test CSS file 1:
body {
    background-color: #e7e7e7;
}

p {
    margin: 1em 0em;
}
File 2:
body {
    font-size: 16px;
}

#header .title {
    font-family: Tahoma, Geneva, sans-serif;
    font-size: 1.9em;
}

#header .title a, #header .title a:hover {
    color: #f5f5f5;
    border-bottom: none;
    text-shadow: 2px 2px 3px rgba(0, 0, 0, 1);
}
Thanks in advance.
A:
Change
def hasProperty(self, line):
    return True if re.search("^\s?[a-z-]+:[^;]+;", line) else False

to

def hasProperty(self, line):
    return True if re.search("^\s*[a-z-]+:[^;]+;", line) else False
hasProperty was not matching anything because \s? matches only zero or one whitespace character, while the property lines in the CSS files are indented by more than that.
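The difference is easy to check directly (Python 3 here; the sample line is indented with four spaces, like a typical CSS property line):

```python
import re

line = "    margin: 1em 0em;"
print(bool(re.search(r"^\s?[a-z-]+:[^;]+;", line)))  # False: \s? allows at most one leading whitespace char
print(bool(re.search(r"^\s*[a-z-]+:[^;]+;", line)))  # True: \s* allows any amount of indentation
```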
Tags: dictionary, list, parsing, python | Source: stackoverflow_0002980375_dictionary_list_parsing_python.txt
Q:
how to show all methods and data when the object has no "__iter__" function in python
I found a way:
(1) The dir(object) is:
a="['__class__', '__contains__', '__delattr__', '__delitem__', '__dict__', '__doc__', '__getattribute__', '__getitem__', '__hash__', '__init__', '__iter__', '__metaclass__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__str__', '__weakref__', '_errors', '_fields', '_prefix', '_unbound_fields', 'confirm', 'data', 'email', 'errors', 'password', 'populate_obj', 'process', 'username', 'validate']"
(2) Evaluate it:
b=eval(a)
(3) It becomes a list of all the methods:
['__class__', '__contains__', '__delattr__', '__delitem__', '__dict__', '__doc__', '__getattribute__', '__getitem__', '__hash__', '__init__', '__iter__', '__metaclass__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__str__', '__weakref__', '_errors', '_fields', '_prefix', '_unbound_fields', 'confirm', 'data', 'email', 'errors', 'password', 'populate_obj', 'process', 'username', 'validate']
(4) Then, to show the object's methods, the full code is:
s = ''
a = eval(str(dir(object)))
for i in a:
    s += str(i) + ':' + str(object[i])
print s
but it shows this error:
KeyError: '__class__'
So how can I make my code run?
Thanks
A:
s += str(i)+':'+str(getattr(object, i))
A:
s = ''.join('%s: %s' % (a, getattr(o, a)) for a in dir(o))
dir lists all attributes
the for ... in creates a generator which returns each attribute name
the getattr retrieves the value of the attribute for the object
the % interpolates those values into a string
the ''.join concatenates all the strings into a single one
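A self-contained demo of that one-liner (the Demo class is made up for illustration, and dunder attributes are filtered out to keep the output short):

```python
class Demo:
    def __init__(self):
        self.x = 1

    def hello(self):
        return 'hi'

o = Demo()
s = '\n'.join('%s: %s' % (a, getattr(o, a)) for a in dir(o) if not a.startswith('_'))
print(s)  # one "name: value" line per public attribute
```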
Tags: dir, methods, python, show | Source: stackoverflow_0002979856_dir_methods_python_show.txt
Q:
Change|Assign parent for the Model instance on Google App Engine Datastore
Is it possible to change or assign a new parent to a Model instance that is already in the datastore? For example, I need something like this:
task = db.get(db.Key(task_key))
project = db.get(db.Key(project_key))
task.parent = project
task.put()
but it doesn't work this way, because task.parent is a built-in method. I was thinking about creating a new Key instance for the task, but there is no way to change the key either.
Any thoughts?
A:
According to the docs, no:
The parent of an entity is defined when the entity is created, and cannot be changed later.
...
The complete key of an entity, including the path, the kind and the name or numeric ID, is unique and specific to that entity. The complete key is assigned when the entity is created in the datastore, and none of its parts can change.
Setting a parent entity is useful when you need to manipulate the parent and child in the same transaction. Otherwise, you're just limiting performance by forcing them both to be in the same entity group, and restricting your ability to update the relationship after the entity has been created.
Use a ReferenceProperty instead.
Tags: google_app_engine, google_cloud_datastore, python, transactions | Source: stackoverflow_0002980196_google_app_engine_google_cloud_datastore_python_transactions.txt
Q:
how to get all 'username' values from my model 'MyUser' on google-app-engine
my model is :
class MyUser(db.Model):
    username = db.StringProperty()
    password = db.StringProperty(default=UNUSABLE_PASSWORD)
    email = db.StringProperty()
    nickname = db.StringProperty(indexed=False)
and my method to get all the usernames is:
s = []
a = MyUser.all()
for i in a:
    s.append(i.username)
I want to know: is there any other way to show all the usernames?
Thanks
A:
Yes, there are many other ways to show all usernames. One is templates:
users = MyUser.all()
template.render('userlist.html', {'users': users})
<ul>
{% for user in users %}
    <li>{{ user.username }}</li>
{% endfor %}
</ul>
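A list comprehension is another common way to collect the usernames (sketch with a stand-in class, since the real MyUser needs the App Engine SDK):

```python
class FakeUser:
    def __init__(self, username):
        self.username = username

# Stand-in for MyUser.all(), which likewise yields user objects when iterated.
users = [FakeUser('alice'), FakeUser('bob')]

usernames = [u.username for u in users]
print(usernames)  # ['alice', 'bob']
```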
Tags: google_app_engine, model, python | Source: stackoverflow_0002979952_google_app_engine_model_python.txt
Q:
from string of bytes to OpenCV's IplImage in Python?
I am streaming some data down from a webcam. When I get all of the bytes for a full image (in a string called byteString) I want to display the image using OpenCV. Done fast enough, this will "stream" video from the webcam to an OpenCV window.
Here's what I've done to set up the window:
cvNamedWindow('name of window', CV_WINDOW_AUTOSIZE)
And here's what I do when the byte string is complete:
img = cvCreateImage(IMG_SIZE,PIXEL_DEPTH,CHANNELS)
buf = ctypes.create_string_buffer(byteString)
img.imageData = ctypes.cast(buf, ctypes.POINTER(ctypes.c_byte))
cvShowImage('name of window', img)
cvWaitKey(0)
For some reason this is producing an error:
File "C:\Python26\lib\site-packages\ctypes_opencv\highgui_win32.py", line 226, in execute
return func(*args, **kwargs)
WindowsError: exception: access violation reading 0x015399E8
Does anybody know how to do what I'm trying to do / how to fix this crazy violation error?
A:
I actually solved this problem and forgot to post the solution. Here's how I did it, though it may not be entirely robust:
I analyzed the headers coming from the MJPEG of the network camera I was doing this to, then I just read from the stream 1 byte at a time, and, when I detected that the header of the next image was also in the bytestring, I cut the last 42 bytes off (since that's the length of the header).
Then I had the bytes of the JPEG, so I simply created a new Cv Image by using the open(...) method and passing it the byte string wrapped in a StringIO class.
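A rough sketch of that scanning idea with made-up bytes (the boundary marker below is purely illustrative; a real camera's header is 42 bytes and camera-specific):

```python
HEADER = b'--myboundary'  # hypothetical frame header, not from a real camera

def read_frames(stream):
    """Accumulate one byte at a time; when the next header appears, cut it off
    and emit the JPEG bytes that came before it."""
    frames, buf = [], b''
    for i in range(len(stream)):
        buf += stream[i:i + 1]
        if len(buf) > len(HEADER) and buf.endswith(HEADER):
            frames.append(buf[:-len(HEADER)])  # drop the trailing header bytes
            buf = b''
    return frames

stream = b'jpeg-bytes-1' + HEADER + b'jpeg-bytes-2' + HEADER
print(read_frames(stream))  # [b'jpeg-bytes-1', b'jpeg-bytes-2']
```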
A:
Tyler:
I'm not sure what you are trying to do... I have a few guesses.
If you are trying to simply read an image from a webcam connected to your PC, then this code should work:
import cv

cv.NamedWindow("camera", 1)
capture = cv.CaptureFromCAM(0)

while True:
    img = cv.QueryFrame(capture)
    cv.ShowImage("camera", img)
    if cv.WaitKey(10) == 27:
        break
Are you trying to stream video from an internet cam?
If so, you should check this other post:
opencv-with-network-cameras
If for some reason you cannot do it in any of these ways, then maybe you can just save the image to the hard drive and then load it in your OpenCV program with a simple cvLoadImage (of course this way is much slower).
Another approach would be to set the new image's pixels by hand, reading each of the values from the byteString, something like this:
for(int x=0;x<640;x++){
    for(int y=0;y<480;y++){
        uchar * pixelxy=&((uchar*) (img->imageData+img->widthStep*y))[x];
        *pixelxy=buf[y*img->widthStep + x];
    }
}
This is also slower, but faster than using the hard drive.
Anyway, hope some of this helps. You should also specify which OpenCV version you are using.
Tags: byte, image, opencv, python, string | Source: stackoverflow_0002970207_byte_image_opencv_python_string.txt
Q:
python and overflowing byte?
I need to make a variable with behaviour similar to C's: a byte or unsigned char with range 0-255.
This variable should overflow, meaning:
myVar = 255
myVar += 1
print myVar #!!myVar = 0!!
A:
I see lots of good answers here. However, if you want to create your own type as you mentioned, you could look at the Python Data model documentation. It explains how to make classes that have customized behaviours, for example emulating numeric types.
With this info, you could make a class like so:
class Num:
    def __init__(self, n):
        self.n = (n % 256)

    def __repr__(self):
        return repr(self.n)

    def __add__(self, other):
        return Num(self.n + int(other))

    # transform ourselves into an int, so
    # int-expecting methods can use us
    def __int__(self):
        return self.n
Then you can do things like this:
>>> a = Num(100)
>>> print a
100
>>> b = a + 50
>>> print b
150
>>> c = Num(200)
>>> d = a + c
>>> print d
44
I realize that you may want to support more operations than I've shown in Num, but from this example and the documentation, it should be fairly clear how to add them.
A:
You'll have to do myVar &= 0xFF to ensure it stays in the range 0-255.
You can generally do as many manipulations as you want to the number, so long as you mask it before you print, send to method written in C, or whatever requires it to be in the 8-bit range.
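The masking approach in one runnable snippet (Python 3 print):

```python
my_var = 255
my_var += 1      # Python ints never overflow on their own...
my_var &= 0xFF   # ...so mask down to 8 bits wherever byte semantics matter
print(my_var)  # 0
```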
A:
The ctypes module contains the functionality you need, albeit in a difficult to use form. For example:
>>> import ctypes
>>> ctypes.c_ubyte(255)
c_ubyte(255)
>>> ctypes.c_ubyte(255 + 1)
c_ubyte(0)
This also works for signed types:
>>> ctypes.c_byte(127 + 1)
c_byte(-128)
You can unbox the object to get the primitive
int like so:
>>> ctypes.c_byte(127 + 1).value
-128
A:
Combining Blair's excellent answer and my previous one (because they are all distinct solutions and you might like one more than the other):
import ctypes
class CInt:
def __init__(self, ctype, n):
self.ctype = ctype
self.n = ctype(n)
def __repr__(self):
return repr(self.n.value)
def __add__(self, other):
return CInt(self.ctype, self.n.value + int(other))
# transform ourselves into an int, so
# int-expecting methods can use us
def __int__(self):
return self.n.value
It is similar to Blair's, except that you can pass it the ctypes type constructor you want to use in the constructor:
>>> n = CInt(ctypes.c_byte, 127)
>>> n + 1
-128
A:
To extend on @Blair Conrad's answer: an alternative implementation could subclass int and override desired methods:
class Byte(int):
_all = None # cache
__slots__ = ()
def __new__(cls, value):
if Byte._all is None:
Byte._all = [int.__new__(cls, i) for i in xrange(256)]
return Byte._all[value % 256]
def __iadd__(self, other):
return self + Byte(other)
def __isub__(self, other):
return self - Byte(other)
def __add__(self, other):
if isinstance(other, Byte):
return Byte(int(self) + other)
return int(self) + other
def __sub__(self, other):
if isinstance(other, Byte):
return Byte(int(self) - other)
return int(self) - other
def __neg__(self):
return Byte(-int(self))
def __repr__(self):
return "Byte(%d)" % self
Example:
>>> myvar = Byte(255)
>>> myvar
Byte(255)
>>> myvar += 1
>>> myvar
Byte(0)
>>> myvar -= 1
>>> myvar
Byte(255)
>>> -myvar
Byte(1)
>>> myvar.i = 1
Traceback (most recent call last):
...
AttributeError: 'Byte' object has no attribute 'i'
>>> from itertools import permutations
>>> for a,b in permutations((Byte(1), Byte(-1), 1), 2):
... print "%r + %r = %r" % (a,b, a+b)
... print "%r - %r = %r" % (a,b, a-b)
Byte(1) + Byte(255) = Byte(0)
Byte(1) - Byte(255) = Byte(2)
Byte(1) + 1 = 2
Byte(1) - 1 = 0
Byte(255) + Byte(1) = Byte(0)
Byte(255) - Byte(1) = Byte(254)
Byte(255) + 1 = 256
Byte(255) - 1 = 254
1 + Byte(1) = 2
1 - Byte(1) = 0
1 + Byte(255) = 256
1 - Byte(255) = -254
>>> id(Byte(255)) == id(Byte(1)+Byte(254))
True
Q:
Crossed import in django
For example, I have 2 apps: alpha and beta.
alpha/models.py imports a model from beta.models,
and beta/models.py imports a model from alpha.models.
manage.py validate says: ImportError: cannot import name ModelName
How do I solve this problem?
A:
I have had this issue in the past when there are two models that refer to one another, i.e. via a ForeignKey field. There is a simple way to deal with it, per the Django documentation:
If you need to create a relationship on a model that has not yet been defined, you can use the name of the model, rather than the model object itself:
So in your beta/models.py model, you would have this:
class BetaModel(models.Model):
alpha = models.ForeignKey('alpha.AlphaModel')
...
At this point, importing from alpha.models is not necessary.
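For illustration, here is how both sides of the cycle might look with string references (a hypothetical sketch: the model and app names are placeholders, and recent Django versions additionally require the on_delete argument shown here):

```python
# alpha/models.py -- no import from beta.models needed
from django.db import models

class AlphaModel(models.Model):
    beta = models.ForeignKey('beta.BetaModel', on_delete=models.CASCADE)


# beta/models.py -- no import from alpha.models needed
from django.db import models

class BetaModel(models.Model):
    alpha = models.ForeignKey('alpha.AlphaModel', on_delete=models.CASCADE)
```

Because each side names the other model as a string, Django resolves the reference lazily once all apps are loaded, so neither module has to import the other at definition time.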
Q:
Python/Sqlite program, write as browser app or desktop app?
I am in the planning stages of rewriting an Access db I wrote several years ago as a full-fledged program. I have very slight experience coding, but not enough to call myself a programmer by far. I'll definitely be learning as I go, so I'd like to keep everything as simple as possible. I've decided on Python and SQLite for my program, but I need help on my next decision.
Here is my situation
1) It'll be run locally on each machine, all Windows computers
2) I would really like a nice looking GUI with colors, nice screens, menus, lists, etc,
3) I'm thinking about using a browser interface because (a) from what I've read, browser apps
can look really great, and (b) I understand there are lots of free tools to assist in setting up the GUI/GUI code with drag and drop tools, so that helps my "keep it simple" goal.
4) I want the program to be totally portable so it runs completely from one single folder on a user's PC, with no installation(s) needed for it to run
(If I did it as a browser app, isn't there the possibility that a user's browser settings could affect or break the app? How likely is this?)
For my situation, should/could I make it a browser app? What would be the pros and cons for my situation?
A:
Writing a desktop application as a locally-hosted web application isn't typically a good idea. Although it's possible to create great user interfaces with HTML, CSS, and Javascript, it's far easier to create interfaces with conventional GUI frameworks.
Using web technologies to create your desktop GUI would introduce a great deal of unnecessary complexity to your application.
Creating user interfaces with HTML and CSS is difficult and time-consuming. HTML is a document markup language and CSS is a document formatting language; neither is well-suited to creating GUIs.
Using web technologies makes your application depend on the user's web browser. Far too many people are still using old, crippled browsers such as IE 6 and 7 that don't follow modern standards. You'll spend hours if not days trying to track down interface bugs that only happen on certain browsers.
You'll need to serve your application with a web server, introducing another layer of complexity. Your application will have to communicate with your interface through restricted web technologies without any of the benefits of a true web application.
I recommend using a desktop GUI framework, instead. In particular, I think wxPython would be the best GUI framework for you to use; it's stable, widely used, well documented, and highly portable. In addition, you can use a GUI-based interface builder such as Boa Constructor or possibly wxGlade to design your application's user interface. In summary, creating an application with almost any desktop GUI framework would be easier than using web technologies.
A:
I've done a desktop app running on Windows and I think that it is a great way to develop an app.
I would recommend having a look at bottle. It is a lightweight web framework. It is less capable than Django, for example, but it serves well. It can be packed with py2exe if you want to deploy on machines without Python.
There are a lot of JavaScript libs on the web to help you. I like jQuery, jQuery UI, raphaeljs... but there are some others.
My app runs in a small browser based on the mshtml component of Pyjamas-Desktop. This way, the user doesn't know that it is a web app. But you could let the app run in the user's favourite browser; the webbrowser Python module might be interesting for you.
If your app needs to access your filesystem, a browser-based app may be tricky. For security reasons, a browser doesn't have full access to your filesystem. However, you can mimic file open with ajaxupload and file save with an iframe.
If it only deals with a sqlite database, I think that it is a very good choice.
I hope it helps
A:
You did not mention if you are on Windows or Linux or any other OS.
If you are writing a browser app, the first thing you are going to need is a web server; if each user is running the app on his local machine, that means each user has to have a web server running locally.
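To make that point concrete, here is a minimal sketch of what "a web server running locally" means in Python, using only the standard library (wsgiref stands in here for whatever server a real framework would provide):

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # a trivial WSGI application serving one page
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<h1>Hello from a locally hosted web app</h1>"]

# port 0 asks the OS for any free port; each user's machine
# would have to run its own copy of this process
httpd = make_server("127.0.0.1", 0, app)
print("serving on http://127.0.0.1:%d/" % httpd.server_port)
# httpd.serve_forever()  # left commented out so the sketch doesn't block
```

Every client machine needs such a process running before the browser UI can load, which is exactly the deployment burden being weighed here.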
Also, there are a lot of rapid-development GUI toolkits, such as wxPython and Glade, which make the design of GUI apps simpler and easier.
I would suggest that if you are building a network app, take the browser route.
If you are building a standalone app, then go with a native application.
Here is an almost exhaustive list of all the frameworks; you can choose whatever suits your needs:
http://wiki.python.org/moin/GuiProgramming
I personally favor PyGTK; however, it has a little learning curve associated with it if you haven't done any GUI programming before.
A:
Pyjamas-Desktop
A:
I think it should work. What are you afraid of? Proxy settings, the firewall?
I think running a web server locally isn't hard for a power user, but it could be a problem for the average user (even integrated with your app).
You should probably run your app as a service, because forcing the user to start the server before opening the web page could be frustrating.
I would prefer other solutions. I would probably use Java (Swing) or C++ with Qt. But I like your approach, especially since it allows easy prototyping. If you prefer web-style development you could try http://www.appcelerator.com/products/titanium-desktop-application-development/ which creates desktop apps using HTML + JavaScript + WebKit. But I haven't tried it myself (though I would like to).
Also, Adobe AIR could be a good option for you.
A:
I would suggest a browser application. This eliminates the need for installation on client computers (and as such, is OS agnostic), and is accessible from anywhere in the world if the DNS is set up correctly for the server.
Using a web interface allows you to make use of some of the more powerful User Interface tools, such as:
The ability to use CSS for spectacular design
The availability of JavaScript Utilities (jQuery, ExtJS, etc.)
Easily modified compared to Desktop applications
Higher accessibility
Consistent UI (e.g. Users already know how "back" works, etc)
Centralized updates (Just update the server, not each client)
A:
Your choice of application type will be related both to the technology constraints and the type of user experience you plan to deliver.
Rich Client Application:
Usually developed as a stand-alone application.
Can support disconnected or occasionally connected scenarios.
Uses the processing and storage resources of the local machine.
Web Application:
Can support multiple platforms and browsers.
Supports only connected scenarios.
Uses the processing and storage resources of the server.
I personally favor PyQt in your case for a portable application.
The homepage for PyQt is http://www.riverbankcomputing.com/software/pyqt/
PyQt supports the Windows, Linux, UNIX and MacOS/X platforms.
PyQt4 is a set of Python bindings for Qt 4 that are dual-licensed under the GPL (version 2 and 3, with additional license exceptions) and a commercial license. There is also PySide by Nokia - new alternative bindings (as of November 2009) with LGPL license that struggle to be API compatible (at least until Qt 4.6) with PyQt4.
Tools and docs
PyQt Reference Documentation.
PyQt4 book: http://www.qtrac.eu/pyqtbook.html
The pyuic4 utility is a command-line interface to the uic module. It converts XML UI files from Qt Designer to Python.
Qt Designer is a powerful cross-platform GUI layout and forms builder. It allows you to rapidly design and build widgets and dialogs using on-screen forms using the same widgets that will be used in your application.
PyQt4 exposes much of the functionality of Qt 4 (From Nokia) to Python, including:
A comprehensive set of widgets
Flexible layout managers
Standard GUI features for applications (menus, toolbars, dock windows)
Easy communication between application components (signals and slots)
A unified painting system with transparency, anti-aliasing, OpenGL integration and SVG support
Internationalization (i18n) support and integration with the Qt Linguist translation tool
Etc.
A:
Your question is a little broad. I'll try to cover as much as I can.
First, what I understood and my assumptions.
In your situation, the sqlite database is just a data store. Only one process (unless your application is multiprocess) will be accessing it so you won't need to worry about locking issues. The application doesn't need to communicate with other instances etc. over the network. It's a single desktop app. The platform is Windows.
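As a concrete sketch of that data-store role: the standard-library sqlite3 module is all the "server" you need, because the whole database lives in one local file. An in-memory database is used below so the example stays self-contained; a real app would pass a file path instead.

```python
import sqlite3

# in a real app you'd pass a path such as "app.db" instead of ":memory:"
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, phone TEXT)")
conn.execute("INSERT INTO contacts VALUES (?, ?)", ("Alice", "555-0100"))
conn.commit()

rows = conn.execute("SELECT name, phone FROM contacts").fetchall()
print(rows)  # [('Alice', '555-0100')]
conn.close()
```

Since only one local process opens the file, none of the usual multi-user locking concerns apply.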
Here are some thoughts that come to mind.
If you develop an application in Python (either web based or desktop), you will have to package it as a single executable and distribute it to your users. They might have to install the Python runtime as well as any extra modules that you might be using.
GUIs are in my experience easier to develop using a standalone widget system than in a browser with JavaScript. There are things like Pyjamas that make this better, but it's still hard.
While it's not impossible to have local web applications running on each computer, your real benefits come if you centralise it. One place to update software. No need to "distribute" etc. This of course entails that you use a more powerful database system and you can actually manage multiple users. It will also require that you worry about browser specific quirks.
I'd go with a simple desktop app that uses a prepackaged toolkit (perhaps Tkinter, which ships with Python). It's not the best of approaches but it will avoid problems for you. I'd also consider using a language that's more "first class" on Windows, like C#, so that the runtimes and other things are already there. Your requirement for a fancy GUI is secondary and I'd recommend that you get the functionality working fine before you focus on the bells and whistles.
Good luck.
Q:
Launching browser within CherryPy
I have a html page displayed using...
cherrypy.quickstart(ShowHTML(htmlfile), config=configfile)
Once the page is loaded (e.g. initiated via the command 'python mypage.py'), I would like to automatically launch the browser to display the page (e.g. via http://localhost:8000). Is there any way I can achieve this (e.g. via a hook within CherryPy), or do I have to bring up the browser manually (e.g. by double-clicking an icon)?
TIA
Alan
A:
You can either hook your webbrowser into the engine start/stop lifecycle:
import webbrowser
import cherrypy

def browse():
    webbrowser.open("http://127.0.0.1:8080")

cherrypy.engine.subscribe('start', browse, priority=90)
Or, unpack quickstart:
import webbrowser

from cherrypy import config, engine, tree
config.update(configfile)
tree.mount(ShowHTML(htmlfile), '/', configfile)
if hasattr(engine, "signal_handler"):
engine.signal_handler.subscribe()
if hasattr(engine, "console_control_handler"):
engine.console_control_handler.subscribe()
engine.start()
webbrowser.open("http://127.0.0.1:8080")
engine.block()
Q:
Rails and Python hosting
I am trying to host some files / a Rails app on port 8080 for external access. For Python I am using the SimpleHTTPServer module, and for Rails, WEBrick.
However, neither of them works very well. I don't get the response back, and sometimes, when I do, it's VERY slow. Nevertheless, Apache works very well on port 8080 (I am not running them at the same time).
What is going on?
A:
I can't speak for Python, but Webrick is not meant to be used for an in-production application—you didn't mention if this application was in production, though you did say 'external access'.
For Rails, have a look at Passenger.
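On the Python side, the SimpleHTTPServer module mentioned in the question is likewise a development-only server. For reference, it is usually launched as shown in the comments below (in Python 3 the module was renamed to http.server, and --bind is a Python 3 option); the commands themselves block while serving, so only a sanity check is run here:

```shell
# Python 2, as in the question (blocks, serving the current directory):
#   python -m SimpleHTTPServer 8080
# Python 3 equivalent; --bind 0.0.0.0 listens on all interfaces,
# which is what external access requires:
#   python3 -m http.server 8080 --bind 0.0.0.0

# sanity check that the Python 3 module is available:
python3 -c "import http.server; print('http.server ok')"
```

Note that binding only to 127.0.0.1 is a common reason such a server answers locally but appears dead to external clients.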
Q:
strange syntax error in python, version 2.6 and 3.1
this may not be an earth-shattering deficiency of python, but i still
wonder about the rationale behind the following behavior: when i
run
source = """
print( 'helo' )
if __name__ == '__main__':
print( 'yeah!' )
#"""
print( compile( source, '<whatever>', 'exec' ) )
i get ::
File "<whatever>", line 6
#
^
SyntaxError: invalid syntax
i can avoid this exception by (1) deleting the trailing #; (2)
deleting or outcommenting the if __name__ == '__main__':\n
print( 'yeah!' ) lines; (3) adding a newline to the very end of the
source.
moreover, if i have the source end without a trailing newline right
behind the print( 'yeah!' ), the source will also compile without
error.
i could also reproduce this behavior with python 2.6, so it’s not new
to the 3k series.
i find this error to be highly irritating, all the more since when i
put above source inside a file and execute it directly or have it
imported, no error will occur—which is the expected behavior.
a # (hash) outside a string literal should always represent the
start of a (possibly empty) comment in a python source; moreover, the
presence or absence of an if __name__ == '__main__' clause should
not change the interpretation of a source on a syntactical level.
can anyone reproduce the above problem, and/or comment on the
phenomenon?
cheers
A:
update
turns out this is indeed a bug as pointed out by http://groups.google.com/group/comp.lang.python/msg/b4842cc7abd75fe9; the bug report is at http://bugs.python.org/issue1184112; it appears to be fixed in 2.7 and 3.2.
solution
once recognized, this bug is extremely simple to fix: since a valid python source should stay both syntactically valid and semantically unchanged when a newline is added to the source text, just mechanically do just that to any source text. this reminds me of the ; semicolon you mechanically put in between source texts when assembling a multi-file javascript source for efficient gzipped delivery to the remote client.
| strange syntax error in python, version 2.6 and 3.1 | this may not be an earth-shattering deficiency of python, but i still
wonder about the rationale behind the following behavior: when i
run
source = """
print( 'helo' )
if __name__ == '__main__':
print( 'yeah!' )
#"""
print( compile( source, '<whatever>', 'exec' ) )
i get ::
File "<whatever>", line 6
#
^
SyntaxError: invalid syntax
i can avoid this exception by (1) deleting the trailing #; (2)
deleting or outcommenting the if __name__ == '__main__':\n
print( 'yeah!' ) lines; (3) adding a newline to the very end of the
source.
moreover, if i have the source end without a trailing newline right
behind the print( 'yeah!' ), the source will also compile without
error.
i could also reproduce this behavior with python 2.6, so it’s not new
to the 3k series.
i find this error to be highly irritating, all the more since when i
put above source inside a file and execute it directly or have it
imported, no error will occur—which is the expected behavior.
a # (hash) outside a string literal should always represent the
start of a (possibly empty) comment in a python source; moreover, the
presence or absence of an if __name__ == '__main__' clause should
not change the interpretation of a source on a syntactical level.
can anyone reproduce the above problem, and/or comment on the
phenomenon?
cheers
| [
"update\nturns out this is indeed a bug as pointed out by http://groups.google.com/group/comp.lang.python/msg/b4842cc7abd75fe9; the bug report is at http://bugs.python.org/issue1184112; it appears to be fixed in 2.7 and 3.2. \nsolution\nonce recognized, this bug is extremely simple to fix: since a valid python source should stay both syntactically valid and semantically unchanged when a newline is added to the source text, just mechanically do just that to any source text. this reminds me of the ; semicolon you mechanically put in between source texts when assembling a multi-file javascript source for efficient gzipped delivery to the remote client.\n"
] | [
3
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0002976798_python_python_3.x.txt |
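The workaround from the accepted answer is mechanical: always append a newline before calling compile(), since that never changes a valid program's syntax or meaning. A quick sketch (the underlying parser bug is fixed in 2.7/3.2+, so modern Python compiles the raw source too):

```python
# The problematic source from the question: a trailing comment
# with no newline after it tripped the parser in 2.6/3.1.
source = """
print( 'helo' )
if __name__ == '__main__':
  print( 'yeah!' )
#"""

# Appending "\n" is always safe and sidesteps the old
# "no newline after a trailing comment" SyntaxError.
code = compile(source + "\n", "<whatever>", "exec")
print(type(code).__name__)  # → code
```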
Q:
Best DataMining Database
I am an occasional Python programmer who has only worked so far with MySQL or SQLite databases. I am the computer person for everything in a small company and I have started a new project where I think it is about time to try new databases.
The sales department makes a CSV dump every week and I need to make a small scripting application that allows people from other departments to mix the information, mostly linking the records. I have all this solved; my problem is the speed. I am using just plain text files for all this and unsurprisingly it is very slow.
I thought about using MySQL, but then I would need to install MySQL on every desktop; SQLite is easier, but it is very slow. I do not need a full relational database, just some way to play with big amounts of data in a decent time.
Update: I think I was not being very detailed about my database usage, thus explaining my problem badly. I am reading all the data (~900 MB or more) from a CSV into a Python dictionary, then working with it. My problem is storing and, mostly, reading the data quickly.
Many thanks!
A:
Quick Summary
You need enough memory (RAM) to solve your problem efficiently; you may need to upgrade your memory. When reading the excellent High Scalability blog you will notice that big sites, to solve their problems efficiently, store the complete problem set in memory.
You do need a central database solution. I don't think doing this by hand with Python dictionaries alone will get the job done.
How to solve "your problem" depends on your queries. What I would try first is to put your data in elasticsearch (see below) and query the database (see how it performs). I think this is the easiest way to tackle your problem, but as you can read below there are a lot of ways to tackle it.
We know:
You used python as your program language.
Your database is ~900MB (I think that's pretty large, but absolutely manageable).
You have loaded all the data into a Python dictionary, and here I assume the problem lies. Python tries to store the dictionary (Python dictionaries also aren't the most memory friendly) in your memory, but you don't have enough memory (how much memory do you have?). When that happens you are going to use a lot of virtual memory: when you attempt to read the dictionary you are constantly swapping data from your disk into memory, and this swapping causes "thrashing". I am assuming that your computer does not have enough RAM. If true, I would first upgrade your memory with at least 2 gigabytes of extra RAM. When your problem set is able to fit in memory, solving the problem is going to be a lot faster. I opened my computer architecture book, where it (the memory hierarchy chapter) says that main memory access time is about 40-80 ns while disk access time is about 5 ms. That is a BIG difference.
Missing information
Do you have a central server? You should use/have a server.
What kind of architecture does your server have? Linux/Unix/Windows/Mac OS X? In my opinion your server should have a Linux/Unix/Mac OS X architecture.
How much memory does your server have?
Could you specify your data set (CSV) a little better?
What kind of data mining are you doing? Do you need full-text-search capabilities? I am not assuming you are doing any complicated (SQL) queries. Performing that task with only Python dictionaries would be a complicated problem. Could you formalize the queries that you would like to perform? For example:
"get all users who work for departement x"
"get all sales from user x"
Database needed
I am the computer person for
everything in a small company and I
have started a new project where I
think it is about time to try new
databases.
You are surely right that you need a database to solve your problem. Doing that yourself using only Python dictionaries is difficult, especially when your problem set can't fit in memory.
MySQL
I thought about using MySQL, but then
I would need to install MySQL on every
desktop; SQLite is easier, but it is
very slow. I do not need a full
relational database, just some way to
play with big amounts of data in a
decent time.
A centralized (client-server architecture) database is exactly what you need to solve your problem. Let all the users access the database from one PC which you manage. You can use MySQL to solve your problem.
Tokyo Tyrant
You could also use Tokyo Tyrant to store all your data. Tokyo Tyrant is pretty fast and the data does not have to fit in RAM. It handles getting data more efficiently than Python dictionaries do. However, if your problem can fit completely in memory, I think you should have a look at Redis (below).
Redis:
You could for example use Redis (quick start in 5 minutes) (Redis is extremely fast) to store all sales in memory. Redis is extremely powerful and can do these kinds of queries insanely fast. The only problem with Redis is that it has to fit completely in RAM, but I believe the author is working on that (the nightly build already supports it). Also, like I already said, solving your problem set completely from memory is how big sites solve their problems in a timely manner.
Document stores
This article tries to evaluate KV stores against document stores like CouchDB/Riak/MongoDB. These stores are better at searching (a little slower than KV stores), but aren't good at full-text search.
Full-text-search
If you want to do full-text-search queries you could look at:
elasticsearch (videos): When I saw the video demonstration of elasticsearch it looked pretty cool. You could try putting (POSTing simple JSON) your data in elasticsearch and see how fast it is. I am following elasticsearch on GitHub and the author is committing a lot of new code to it.
Solr (tutorial): A lot of big companies are using Solr (GitHub, Digg) to power their search. They got a big boost going from MySQL full-text search to Solr.
A:
You probably do need a full relational DBMS, if not right now, very soon. If you start now while your problems and data are simple and straightforward then when they become complex and difficult you will have plenty of experience with at least one DBMS to help you. You probably don't need MySQL on all desktops, you might install it on a server for example and feed data out over your network, but you perhaps need to provide more information about your requirements, toolset and equipment to get better suggestions.
And, while the other DBMSes have their strengths and weaknesses too, there's nothing wrong with MySQL for large and complex databases. I don't know enough about SQLite to comment knowledgeably about it.
EDIT: @Eric, from your comments to my answer and the other answers I hold even more strongly the view that it is time you moved to a database. I'm not surprised that trying to do database operations on a 900MB Python dictionary is slow. I think you have to first convince yourself, then your management, that you have reached the limits of what your current toolset can cope with, and that future developments are threatened unless you rethink matters.
If your network really can't support a server-based database then (a) you really need to make your network robust, reliable and performant enough for such a purpose, but (b) if that is not an option, or not an early option, you should be thinking along the lines of a central database server passing out digests/extracts/reports to other users, rather than simultaneous, full RDBMS working in a client-server configuration.
The problems you are currently experiencing are problems of not having the right tools for the job. They are only going to get worse. I wish I could suggest a magic way in which this is not the case, but I can't and I don't think anyone else will.
A:
Here is a performance benchmark of different database suits ->
Database Speed Comparison
I'm not sure how objective the above comparison is, though, seeing as it's hosted on sqlite.org. SQLite only seems to be a bit slower when dropping tables; otherwise you shouldn't have any problems using it. Both SQLite and MySQL seem to have their own strengths and weaknesses; in some tests the one is faster than the other, in other tests the reverse is true.
If you've been experiencing lower than expected performance, perhaps it is not SQLite that is causing this. Have you done any profiling or otherwise made sure nothing else is causing your program to misbehave?
EDIT: Updated with a link to a slightly more recent speed comparison.
A:
Have you done any benchmarking to confirm that it is the text files that are slowing you down? If you haven't, there's a good chance that tweaking some other part of the code will speed things up so that it's fast enough.
A:
It sounds like each department has their own feudal database, and this implies a lot of unnecessary redundancy and inefficiency.
Instead of transferring hundreds of megabytes to everyone across your network, why not keep your data in MySQL and have the departments upload their data to the database, where it can be normalized and accessible by everyone?
As your organization grows, having completely different departmental databases that are unaware of each other, and contain potentially redundant or conflicting data, is going to become very painful.
A:
Does the machine this process runs on have sufficient memory and bandwidth to handle this efficiently? Putting MySQL on a slow machine and recoding the tool to use MySQL rather than text files could potentially be far more costly than simply adding memory or upgrading the machine.
A:
It has been a couple of months since I posted this question and I wanted to let you all know how I solved this problem. I am using Berkeley DB with the bsddb module instead of loading all the data into a Python dictionary. I am not fully happy, but my users are.
My next step is trying to get a shared server with Redis, but unless users start complaining about speed, I doubt I will get it.
Many thanks everybody who helped here, and I hope this question and answers are useful to somebody else.
A:
If you have that problem with a CSV file, maybe you can just pickle the dictionary and generate a pickled "binary" file with the pickle.HIGHEST_PROTOCOL option. It can be faster to read and you get a smaller file. You can load the CSV file once and then generate the pickled file, allowing faster loads on subsequent accesses.
Anyway, with 900 MB of information, you're going to spend some time loading it into memory. Another approach is not loading it into memory in one step, but loading only the information when needed, maybe splitting it into different files by date or any other category (company, type, etc.).
A:
Take a look at mongodb.
] | Best DataMining Database | I am an occasional Python programmer who has only worked so far with MySQL or SQLite databases. I am the computer person for everything in a small company and I have started a new project where I think it is about time to try new databases.
The sales department makes a CSV dump every week and I need to make a small scripting application that allows people from other departments to mix the information, mostly linking the records. I have all this solved; my problem is the speed. I am using just plain text files for all this and unsurprisingly it is very slow.
I thought about using MySQL, but then I would need to install MySQL on every desktop; SQLite is easier, but it is very slow. I do not need a full relational database, just some way to play with big amounts of data in a decent time.
Update: I think I was not being very detailed about my database usage, thus explaining my problem badly. I am reading all the data (~900 MB or more) from a CSV into a Python dictionary, then working with it. My problem is storing and, mostly, reading the data quickly.
Many thanks!
| [
"Quick Summary\n\nYou need enough memory(RAM) to solve your problem efficiently. I think you should upgrade memory?? When reading the excellent High Scalability Blog you will notice that for big sites to solve there problem efficiently they store the complete problem set in memory.\nYou do need a central database solution. I don't think hand doing this with python dictionary's only will get the job done.\nHow to solve \"your problem\" depends on your \"query's\". What I would try to do first is put your data in elastic-search(see below) and query the database(see how it performs). I think this is the easiest way to tackle your problem. But as you can read below there are a lot of ways to tackle your problem.\n\nWe know:\n\nYou used python as your program language.\nYour database is ~900MB (I think that's pretty large, but absolute manageable).\nYou have loaded all the data in a python dictionary. Here I am assume the problem lays. Python tries to store the dictionary(also python dictionary's aren't the most memory friendly) in your memory, but you don't have enough memory(How much memory do you have????). When that happens you are going to have a lot of Virtual Memory. When you attempt to read the dictionary you are constantly swapping data from you disc into memory. This swapping causes \"Trashing\". I am assuming that your computer does not have enough Ram. If true then I would first upgrade your memory with at least 2 Gigabytes extra RAM. When your problem set is able to fit in memory solving the problem is going to be a lot faster. I opened my computer architecture book where it(The memory hierarchy) says that main memory access time is about 40-80ns while disc memory access time is 5 ms. That is a BIG difference.\n\nMissing information\n\nDo you have a central server. You should use/have a server.\nWhat kind of architecture does your server have? Linux/Unix/Windows/Mac OSX? 
In my opinion your server should have linux/Unix/Mac OSX architecture.\nHow much memory does your server have?\nCould you specify your data set(CSV) a little better.\nWhat kind of data mining are you doing? Do you need full-text-search capabilities? I am not assuming you are doing any complicated (SQL) query's. Performing that task with only python dictionary's will be a complicated problem. Could you formalize the query's that you would like to perform? For example:\n\n\n\"get all users who work for departement x\"\n\"get all sales from user x\"\n\n\nDatabase needed\n\nI am the computer person for\n everything in a small company and I\n have been started a new project where\n I think it is about time to try new\n databases.\n\nYou are sure right that you need a database to solve your problem. Doing that yourself only using python dictionary's is difficult. Especially when your problem set can't fit in memory.\nMySQL\n\nI thought about using mysql, but then\n I need installing mysql in every\n desktop, sqlite is easier, but it is\n very slow. I do not need a full\n relational database, just some way of\n play with big amounts of data in a\n decent time.\n\nA centralized(Client-server architecture) database is exactly what you need to solve your problem. Let all the users access the database from 1 PC which you manage. You can use MySQL to solve your problem.\nTokyo Tyrant\nYou could also use Tokyo Tyrant to store all your data. Tokyo Tyrant is pretty fast and it does not have to be stored in RAM. It handles getting data a more efficient(instead of using python dictionary's). However if your problem can completely fit in Memory I think you should have look at Redis(below).\nRedis:\nYou could for example use Redis(quick start in 5 minutes)(Redis is extremely fast) to store all sales in memory. Redis is extremely powerful and can do this kind of queries insanely fast. 
The only problem with Redis is that it has to fit completely in RAM, but I believe he is working on that(nightly build already supports it). Also like I already said previously solving your problem set completely from memory is how big sites solve there problem in a timely manner.\nDocument stores\nThis article tries to evaluate kv-stores with document stores like couchdb/riak/mongodb. These stores are better capable of searching(a little slower then KV stores), but aren't good at full-text-search.\nFull-text-search\nIf you want to do full-text-search queries you could like at: \n\nelasticsearch(videos): When I saw the video demonstration of elasticsearch it looked pretty cool. You could try put(post simple json) your data in elasticsearch and see how fast it is. I am following elastissearch on github and the author is commiting a lot of new code to it.\nsolr(tutorial): A lot of big companies are using solr(github, digg) to power there search. They got a big boost going from MySQL full-text search to solr.\n\n",
"You probably do need a full relational DBMS, if not right now, very soon. If you start now while your problems and data are simple and straightforward then when they become complex and difficult you will have plenty of experience with at least one DBMS to help you. You probably don't need MySQL on all desktops, you might install it on a server for example and feed data out over your network, but you perhaps need to provide more information about your requirements, toolset and equipment to get better suggestions.\nAnd, while the other DBMSes have their strengths and weaknesses too, there's nothing wrong with MySQL for large and complex databases. I don't know enough about SQLite to comment knowledgeably about it.\nEDIT: @Eric from your comments to my answer and the other answers I form even more strongly the view that it is time you moved to a database. I'm not surprised that trying to do database operations on a 900MB Python dictionary is slow. I think you have to first convince yourself, then your management, that you have reached the limits of what your current toolset can cope with, and that future developments are threatened unless you rethink matters.\nIf your network really can't support a server-based database than (a) you really need to make your network robust, reliable and performant enough for such a purpose, but (b) if that is not an option, or not an early option, you should be thinking along the lines of a central database server passing out digests/extracts/reports to other users, rather than simultaneous, full RDBMS working in a client-server configuration.\nThe problems you are currently experiencing are problems of not having the right tools for the job. They are only going to get worse. I wish I could suggest a magic way in which this is not the case, but I can't and I don't think anyone else will.\n",
"Here is a performance benchmark of different database suits ->\nDatabase Speed Comparison\nI'm not sure how objective the above comparison is though, seeing as it's hosted on sqlite.org. Sqlite only seems to be a bit slower when dropping tables, otherwise you shouldn't have any problems using it. Both sqlite and mysql seem to have their own strengths and weaknesses, in some tests the one is faster then the other, in other tests, the reverse is true.\nIf you've been experiencing lower then expected performance, perhaps it is not sqlite that is the causing this, have you done any profiling or otherwise to make sure nothing else is causing your program to misbehave?\nEDIT: Updated with a link to a slightly more recent speed comparison.\n",
"Have you done any bench marking to confirm that it is the text files that are slowing you down? If you haven't, there's a good chance that tweaking some other part of the code will speed things up so that it's fast enough.\n",
"It sounds like each department has their own feudal database, and this implies a lot of unnecessary redundancy and inefficiency.\nInstead of transferring hundreds of megabytes to everyone across your network, why not keep your data in MySQL and have the departments upload their data to the database, where it can be normalized and accessible by everyone?\nAs your organization grows, having completely different departmental databases that are unaware of each other, and contain potentially redundant or conflicting data, is going to become very painful.\n",
"Does the machine this process runs on have sufficient memory and bandwidth to handle this efficiently? Putting MySQL on a slow machine and recoding the tool to use MySQL rather than text files could potentially be far more costly than simply adding memory or upgrading the machine.\n",
"It has been a couple of months since I posted this question and I wanted to let you all know how I solved this problem. I am using Berkeley DB with the module bsddb instead loading all the data in a Python dictionary. I am not fully happy, but my users are.\nMy next step is trying to get a shared server with redis, but unless users starts complaining about speed, I doubt I will get it.\nMany thanks everybody who helped here, and I hope this question and answers are useful to somebody else.\n",
"If you have that problem with a CSV file, maybe you can just pickle the dictionary and generate a pickle \"binary\" file with pickle.HIGHEST_PROTOCOL option. It can be faster to read and you get a smaller file. You can load the CSV file once and then generate the pickled file, allowing faster load in next accesses.\nAnyway, with 900 Mb of information, you're going to deal with some time loading it in memory. Another approach is not loading it on one step on memory, but load only the information when needed, maybe making different files by date, or any other category (company, type, etc..)\n",
"Take a look at mongodb.\n"
] | [
16,
12,
1,
1,
1,
1,
1,
0,
0
] | [] | [] | [
"data_mining",
"database",
"nosql",
"python"
] | stackoverflow_0002577967_data_mining_database_nosql_python.txt |
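The asker's accepted approach (Berkeley DB via bsddb) is a disk-backed key-value store; bsddb was removed from the standard library in Python 3, but the same pattern survives in the stdlib shelve module, which gives a dict-like on-disk store built on dbm. A sketch using a tiny hypothetical sales CSV in place of the real ~900 MB dump:

```python
import csv
import io
import os
import shelve
import tempfile

# Hypothetical weekly CSV dump (stands in for the real ~900 MB file)
csv_dump = "id,name,amount\n1,alice,100\n2,bob,250\n"

path = os.path.join(tempfile.mkdtemp(), "sales")

# Load once: records are persisted on disk, not held in one huge dict
with shelve.open(path) as db:
    for row in csv.DictReader(io.StringIO(csv_dump)):
        db[row["id"]] = dict(row)

# Later reads pull only the requested record into memory
with shelve.open(path) as db:
    print(db["2"]["name"])  # → bob
```

For a real multi-user setup, the answers' advice to centralize on a server database (MySQL, Redis) still applies; shelve only solves the single-machine "don't load it all into a dict" part.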
Q:
Writing a program to scrape forums
I need to write a program to scrape forums.
Should I write the program in Python using the Scrapy framework or should I use PHP cURL?
Also, is there a PHP equivalent to Scrapy?
Thanks
A:
I would choose Python due to superior libxml2 bindings, specifically things like lxml.html and pyQuery. Scrapy has its own libxml2 bindings, I haven't looked at them to test them, though skimming the Scrapy documentation didn't leave me very impressed (I've done lots of scraping just using these parsers and manual coding). With any of these you get a truly superior HTML parser, querying via XPath, and with lxml.html and pyquery (also built on lxml) you get CSS selectors.
If you are doing a small job scraping a forum, I'd skip a scraping framework and just do it by hand -- it's easy and parallelizing etc is not really needed.
A:
I wouldn't use PHP for a new application that I'm writing. I don't like the language for various reasons.
Also, its strength is as a server-side scripting language for delivering dynamic pages over the web, not as a general-purpose programming language. That's another minus point. I'd stick with Python.
As for which framework to use, there are lots of them around: HarvestMan, Scrapy, etc. There's also the 80legs cloud-based crawler that you might be able to use.
Update : People have been downvoting this answer probably because I said I didn't like PHP. Here's a list of reasons why. Not entirely accurate but a decent summary nevertheless http://wiki.python.org/moin/PythonVsPhp
| Writing a program to scrape forums | I need to write a program to scrape forums.
Should I write the program in Python using the Scrapy framework or should I use PHP cURL?
Also, is there a PHP equivalent to Scrapy?
Thanks
| [
"I would choose Python due to superior libxml2 bindings, specifically things like lxml.html and pyQuery. Scrapy has its own libxml2 bindings, I haven't looked at them to test them, though skimming the Scrapy documentation didn't leave me very impressed (I've done lots of scraping just using these parsers and manual coding). With any of these you get a truly superior HTML parser, querying via XPath, and with lxml.html and pyquery (also built on lxml) you get CSS selectors.\nIf you are doing a small job scraping a forum, I'd skip a scraping framework and just do it by hand -- it's easy and parallelizing etc is not really needed.\n",
"I wouldn't use PHP for a new application that I'm writing. I don't like the language for various reasons. \nAlso, it's strength is as a server side scripting language to deliver dynamic pages over the web. Not as a general purpose programming language. That's another minus point. I'd stick with Python.\nAs for which framework to use, there are lots of them around. Harvestman, Scrapy etc. There's also the 80legs cloud based crawler than you might be able to use. \nUpdate : People have been downvoting this answer probably because I said I didn't like PHP. Here's a list of reasons why. Not entirely accurate but a decent summary nevertheless http://wiki.python.org/moin/PythonVsPhp\n"
] | [
4,
3
] | [] | [] | [
"information_retrieval",
"php",
"python",
"scrapy",
"web_scraping"
] | stackoverflow_0002980519_information_retrieval_php_python_scrapy_web_scraping.txt |
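The first answer's "do it by hand" suggestion is workable even without lxml. As a dependency-free illustration of that by-hand approach (not the lxml/XPath or pyQuery route the answer actually recommends), Python's stdlib HTMLParser can pull post bodies out of simple forum markup; the "post" class name here is a made-up example:

```python
from html.parser import HTMLParser


class PostExtractor(HTMLParser):
    """Collect the text of every <div class="post"> (hand-rolled, no lxml)."""

    def __init__(self):
        super().__init__()
        self.in_post = False
        self.posts = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs
        if tag == "div" and ("class", "post") in attrs:
            self.in_post = True
            self.posts.append("")

    def handle_endtag(self, tag):
        if tag == "div":
            self.in_post = False

    def handle_data(self, data):
        if self.in_post:
            self.posts[-1] += data.strip()


html = '<div class="post">first</div><div class="post">second</div>'
p = PostExtractor()
p.feed(html)
print(p.posts)  # → ['first', 'second']
```

For anything beyond flat markup like this (nested divs, broken HTML), a real parser such as lxml is the better tool, which is the first answer's point.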
Q:
Google App Engine: TypeError problem with Models
I'm running Google App Engine on the dev server.
Here is my models file:
from google.appengine.ext import db
import pickle
import re
re_dept_code = re.compile(r'[A-Z]{2,}')
re_course_number = re.compile(r'[0-9]{4}')
class DependencyArcHead(db.Model):
sink = db.ReferenceProperty()
tails = db.ListProperty()
class DependencyArcTail(db.Model):
courses = db.ListProperty()
It gives this error:
Traceback (most recent call last):
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3192, in _HandleRequest
self._Dispatch(dispatcher, self.rfile, outfile, env_dict)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3135, in _Dispatch
base_env_dict=env_dict)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 516, in Dispatch
base_env_dict=base_env_dict)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2394, in Dispatch
self._module_dict)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2304, in ExecuteCGI
reset_modules = exec_script(handler_path, cgi_path, hook)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2200, in ExecuteOrImportScript
exec module_code in script_module.__dict__
File "main.py", line 19, in <module>
from src.Models import Course, findCourse, validateCourse, dictForJSON, clearAndBuildDependencyGraph
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1279, in Decorate
return func(self, *args, **kwargs)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1929, in load_module
return self.FindAndLoadModule(submodule, fullname, search_path)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1279, in Decorate
return func(self, *args, **kwargs)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1831, in FindAndLoadModule
description)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1279, in Decorate
return func(self, *args, **kwargs)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1782, in LoadModuleRestricted
description)
File "src\Models.py", line 14, in <module>
class DependencyArcHead(db.Model):
File "src\Models.py", line 17, in DependencyArcHead
tails = db.ListProperty()
TypeError: __init__() takes at least 2 arguments (1 given)
What am I doing wrong?
A:
Possible solution: I was missing the required item_type argument in the db.ListProperty() constructor. Oops.
| Google App Engine: TypeError problem with Models | I'm running Google App Engine on the dev server.
Here is my models file:
from google.appengine.ext import db
import pickle
import re
re_dept_code = re.compile(r'[A-Z]{2,}')
re_course_number = re.compile(r'[0-9]{4}')
class DependencyArcHead(db.Model):
sink = db.ReferenceProperty()
tails = db.ListProperty()
class DependencyArcTail(db.Model):
courses = db.ListProperty()
It gives this error:
Traceback (most recent call last):
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3192, in _HandleRequest
self._Dispatch(dispatcher, self.rfile, outfile, env_dict)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3135, in _Dispatch
base_env_dict=env_dict)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 516, in Dispatch
base_env_dict=base_env_dict)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2394, in Dispatch
self._module_dict)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2304, in ExecuteCGI
reset_modules = exec_script(handler_path, cgi_path, hook)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2200, in ExecuteOrImportScript
exec module_code in script_module.__dict__
File "main.py", line 19, in <module>
from src.Models import Course, findCourse, validateCourse, dictForJSON, clearAndBuildDependencyGraph
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1279, in Decorate
return func(self, *args, **kwargs)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1929, in load_module
return self.FindAndLoadModule(submodule, fullname, search_path)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1279, in Decorate
return func(self, *args, **kwargs)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1831, in FindAndLoadModule
description)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1279, in Decorate
return func(self, *args, **kwargs)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1782, in LoadModuleRestricted
description)
File "src\Models.py", line 14, in <module>
class DependencyArcHead(db.Model):
File "src\Models.py", line 17, in DependencyArcHead
tails = db.ListProperty()
TypeError: __init__() takes at least 2 arguments (1 given)
What am I doing wrong?
| [
"Possible solution: I was missing the type_name argument in the listProperty() constructor. Oops.\n"
] | [
2
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0002981548_google_app_engine_python.txt |
Q:
syntax difference between ruby and python?
i wonder if there are tutorials that go through the syntax differences for ruby and python?
i have seen a comparison between ruby and php but not between ruby and python.
i have looked at both ruby and python but it would be very useful with this side-by-side comparison for deciding which one to choose.
thanks
A:
Here's the link from ruby language site: http://www.ruby-lang.org/en/documentation/ruby-from-other-languages/to-ruby-from-python/
A:
Check out http://c2.com/cgi/wiki?PythonVsRuby.
| syntax difference between ruby and python? | i wonder if there are tutorials that go through the syntax differences for ruby and python?
i have seen a comparison between ruby and php but not between ruby and python.
i have looked at both ruby and python but it would be very useful with this side-by-side comparison for deciding which one to choose.
thanks
| [
"Here's the link from ruby language site: http://www.ruby-lang.org/en/documentation/ruby-from-other-languages/to-ruby-from-python/\n",
"Check out http://c2.com/cgi/wiki?PythonVsRuby.\n"
] | [
5,
3
] | [] | [] | [
"python",
"ruby"
] | stackoverflow_0002981611_python_ruby.txt |
Q:
Google App Engine: get_or_create()?
Does Google App Engine have an equivalent of Django's get_or_create()?
A:
There is no full equivalent, but get_or_insert is similar. The main difference is that get_or_insert looks an entity up by its key_name, whereas Django's get_or_create filters on arbitrary field values.
A:
Haven't tested this, but it should be something like the following:
class BaseModel(db.Model):
    @classmethod
    def get_or_create(cls, parent=None, **kwargs):
        def txn():
            query = cls.all()
            if parent:
                query.ancestor(parent)
            for kw in kwargs:
                query.filter("%s =" % kw, kwargs[kw])
            entity = query.get()
            if entity:
                created = False
            else:
                entity = cls(parent, **kwargs)
                entity.put()
                created = True
            return (entity, created)
        return db.run_in_transaction(txn)

class Person(BaseModel):
    first_name = db.StringProperty()
    last_name = db.StringProperty()

p, created = Person.get_or_create(first_name='Tom', last_name='Smith')
| Google App Engine: get_or_create()? | Does Google App Engine have an equivalent of Django's get_or_create()?
| [
"There is no full equivalent, but get_or_insert is something similar. The main differences is that get_or_insert accepts key_name as lookup against filters set in get_or_create.\n",
"Haven't tested this, but it should be something like the following:\nclass BaseModel(db.Model):\n @classmethod\n def get_or_create(cls, parent=None, **kwargs):\n def txn():\n query = cls.all()\n if parent:\n query.ancestor(parent)\n for kw in kwargs:\n query.filter(\"%s =\" % kw, kwargs[kw])\n entity = query.get()\n if entity:\n created = False\n else:\n entity = cls(parent, **kwargs)\n entity.put()\n created = True\n return (entity, created)\n return db.run_in_transaction(txn)\n\nclass Person(BaseModel):\n first_name = db.StringProperty()\n last_name = db.StringProperty()\n\np, created = Person.get_or_create(first_name='Tom', last_name='Smith')\n\n"
] | [
8,
2
] | [] | [] | [
"django",
"google_app_engine",
"python"
] | stackoverflow_0002981630_django_google_app_engine_python.txt |
Q:
How do I get contents of a Google Wave given the wave id and wavelet id?
I am using the robots api. I have a wave id and wavelet id, and my app's email is added to the wave. How can I simply get the wave (or wavelet's) contents using the python api?
A:
Based on the example here:
def OnWaveletSelfAdded(event, wavelet):
    for id in wavelet.blips:
        blip = wavelet.blips[id]
        logging.debug(blip.text)
| How do I get contents of a Google Wave given the wave id and wavelet id? | I am using the robots api. I have a wave id and wavelet id, and my app's email is added to the wave. How can I simply get the wave (or wavelet's) contents using the python api?
| [
"Based on the example here:\ndef OnWaveletSelfAdded(event, wavelet):\n for id in wavelet.blips:\n blip = wavelet.blips[id]\n logging.debug(blip.text)\n\n"
] | [
0
] | [] | [] | [
"google_app_engine",
"google_wave",
"python"
] | stackoverflow_0002981672_google_app_engine_google_wave_python.txt |
Q:
I need to change a zip code into a series of dots and dashes (a barcode), but I can't figure out how
Here's what I've got so far:
def encodeFive(zip):
    zero = "||:::"
    one = ":::||"
    two = "::|:|"
    three = "::||:"
    four = ":|::|"
    five = ":|:|:"
    six = ":||::"
    seven = "|:::|"
    eight = "|::|:"
    nine = "|:|::"
    codeList = [zero,one,two,three,four,five,six,seven,eight,nine]
    allCodes = zero+one+two+three+four+five+six+seven+eight+nine
    code = ""
    digits = str(zip)
    for i in digits:
        code = code + i
    return code
With this I'll get the original zip code in a string, but none of the numbers are encoded into the barcode. I've figured out how to encode one number, but it wont work the same way with five numbers.
A:
codeList = ["||:::", ":::||", "::|:|", "::||:", ":|::|",
            ":|:|:", ":||::", "|:::|", "|::|:", "|:|::"]
barcode = "".join(codeList[int(digit)] for digit in str(zipcode))
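As a quick check, running that join on a sample zip (72353, the value used elsewhere in this question) reproduces the expected barcode:

```python
codeList = ["||:::", ":::||", "::|:|", "::||:", ":|::|",
            ":|:|:", ":||::", "|:::|", "|::|:", "|:|::"]

zipcode = 72353  # sample value from the question
barcode = "".join(codeList[int(digit)] for digit in str(zipcode))
print(barcode)  # |:::|::|:|::||::|:|:::||:
```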
A:
Perhaps use a dictionary:
barcode = {'0':"||:::",
           '1':":::||",
           '2':"::|:|",
           '3':"::||:",
           '4':":|::|",
           '5':":|:|:",
           '6':":||::",
           '7':"|:::|",
           '8':"|::|:",
           '9':"|:|::",
           }

def encodeFive(zipcode):
    return ''.join(barcode[n] for n in str(zipcode))

print(encodeFive(72353))
# |:::|::|:|::||::|:|:::||:
PS. It is better not to name a variable zip, since doing so overrides the builtin function zip. And similarly, it is better to avoid naming a variable code, since code is a module in the standard library.
A:
You're just adding i (the character in digits) to the string where I think you want to be adding codeList[int(i)].
The code would probably be much simpler by just using a dict for lookups.
A:
I find it easier to use split() to create lists of strings:
codes = "||::: :::|| ::|:| ::||: :|::| :|:|: :||:: |:::| |::|: |:|::".split()

def zipencode(numstr):
    return ''.join(codes[int(x)] for x in str(numstr))

print zipencode("32345")
A:
I don't know what language you are usingm so I made an example in C#:
int zip = 72353;

string[] codeList = {
    "||:::", ":::||", "::|:|", "::||:", ":|::|",
    ":|:|:", ":||::", "|:::|", "|::|:", "|:|::"
};
string code = String.Empty;
while (zip > 0) {
    code = codeList[zip % 10] + code;
    zip /= 10;
}
return code;
Note: Instead of converting the zip code to a string, and the convert each character back to a number, I calculated the digits numerically.
Just for fun, here's a one-liner:
return String.Concat(zip.ToString().Select(c => "||::::::||::|:|::||::|::|:|:|::||::|:::||::|:|:|::".Substring(((c-'0') % 10) * 5, 5)).ToArray());
A:
This is made in python.
number = ["||:::",
          ":::||",
          "::|:|",
          "::||:",
          ":|::|",
          ":|:|:",
          ":||::",
          "|:::|",
          "|::|:",
          "|:|::"
          ]

def encode(num):
    return ''.join(map(lambda x: number[int(x)], str(num)))

print encode(32345)
A:
It appears you're trying to generate a "postnet" barcode. Note that the five-digit ZIP postnet barcodes were obsoleted by ZIP+4 postnet barcodes, which were obsoleted by ZIP+4+2 delivery point postnet barcodes, all of which are supposed to include a checksum digit and leading and ending framing bars. In any case, all of those forms are being obsoleted by the new "intelligent mail" 4-state barcodes, which require a lot of computational code to generate and no longer rely on straight digit-to-bars mappings. Search USPS.COM for more details.
| I need to change a zip code into a series of dots and dashes (a barcode), but I can't figure out how | Here's what I've got so far:
def encodeFive(zip):
    zero = "||:::"
    one = ":::||"
    two = "::|:|"
    three = "::||:"
    four = ":|::|"
    five = ":|:|:"
    six = ":||::"
    seven = "|:::|"
    eight = "|::|:"
    nine = "|:|::"
    codeList = [zero,one,two,three,four,five,six,seven,eight,nine]
    allCodes = zero+one+two+three+four+five+six+seven+eight+nine
    code = ""
    digits = str(zip)
    for i in digits:
        code = code + i
    return code
With this I'll get the original zip code in a string, but none of the numbers are encoded into the barcode. I've figured out how to encode one number, but it wont work the same way with five numbers.
| [
"codeList = [\"||:::\", \":::||\", \"::|:|\", \"::||:\", \":|::|\",\n \":|:|:\", \":||::\", \"|:::|\", \"|::|:\", \"|:|::\" ]\nbarcode = \"\".join(codeList[int(digit)] for digit in str(zipcode))\n\n",
"Perhaps use a dictionary:\nbarcode = {'0':\"||:::\",\n '1':\":::||\",\n '2':\"::|:|\",\n '3':\"::||:\",\n '4':\":|::|\",\n '5':\":|:|:\",\n '6':\":||::\",\n '7':\"|:::|\",\n '8':\"|::|:\",\n '9':\"|:|::\",\n }\n\ndef encodeFive(zipcode):\n return ''.join(barcode[n] for n in str(zipcode))\n\nprint(encodeFive(72353))\n# |:::|::|:|::||::|:|:::||:\n\nPS. It is better not to name a variable zip, since doing so overrides the builtin function zip. And similarly, it is better to avoid naming a variable code, since code is a module in the standard library.\n",
"You're just adding i (the character in digits) to the string where I think you want to be adding codeList[int(i)]. \nThe code would probably be much simpler by just using a dict for lookups.\n",
"I find it easier to use split() to create lists of strings:\ncodes = \"||::: :::|| ::|:| ::||: :|::| :|:|: :||:: |:::| |::|: |:|::\".split()\n\ndef zipencode(numstr): \n return ''.join(codes[int(x)] for x in str(numstr))\n\nprint zipencode(\"32345\") \n\n",
"I don't know what language you are usingm so I made an example in C#:\nint zip = 72353;\n\nstring[] codeList = {\n \"||:::\", \":::||\", \"::|:|\", \"::||:\", \":|::|\",\n \":|:|:\", \":||::\", \"|:::|\", \"|::|:\", \"|:|::\"\n};\nstring code = String.Empty;\nwhile (zip > 0) {\n code = codeList[zip % 10] + code;\n zip /= 10;\n}\nreturn code;\n\nNote: Instead of converting the zip code to a string, and the convert each character back to a number, I calculated the digits numerically.\nJust for fun, here's a one-liner:\nreturn String.Concat(zip.ToString().Select(c => \"||::::::||::|:|::||::|::|:|:|::||::|:::||::|:|:|::\".Substring(((c-'0') % 10) * 5, 5)).ToArray());\n\n",
"This is made in python.\nnumber = [\"||:::\",\n \":::||\",\n \"::|:|\",\n \"::||:\",\n \":|::|\",\n \":|:|:\",\n \":||::\",\n \"|:::|\",\n \"|::|:\",\n \"|:|::\"\n ]\ndef encode(num):\n return ''.join(map(lambda x: number[int(x)], str(num)))\n\nprint encode(32345)\n\n",
"It appears you're trying to generate a \"postnet\" barcode. Note that the five-digit ZIP postnet barcodes were obsoleted by ZIP+4 postnet barcodes, which were obsoleted by ZIP+4+2 delivery point postnet barcodes, all of which are supposed to include a checksum digit and leading and ending framing bars. In any case, all of those forms are being obsoleted by the new \"intelligent mail\" 4-state barcodes, which require a lot of computational code to generate and no longer rely on straight digit-to-bars mappings. Search USPS.COM for more details.\n"
] | [
4,
2,
1,
1,
0,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0002798766_python.txt |
Q:
Python: combine logging and wx so that logging stream is redirected to stdout/stderr frame
Here's the thing:
I'm trying to combine the logging module with wx.App()'s redirect feature. My intention is to log to a file AND to stderr. But I want stderr/stdout redirected to a separate frame as is the feature of wx.App.
My test code:
import logging
import wx

class MyFrame(wx.Frame):
    def __init__(self):
        self.logger = logging.getLogger("main.MyFrame")
        wx.Frame.__init__(self, parent = None, id = wx.ID_ANY, title = "MyFrame")
        self.logger.debug("MyFrame.__init__() called.")

    def OnExit(self):
        self.logger.debug("MyFrame.OnExit() called.")

class MyApp(wx.App):
    def __init__(self, redirect):
        self.logger = logging.getLogger("main.MyApp")
        wx.App.__init__(self, redirect = redirect)
        self.logger.debug("MyApp.__init__() called.")

    def OnInit(self):
        self.frame = MyFrame()
        self.frame.Show()
        self.SetTopWindow(self.frame)
        self.logger.debug("MyApp.OnInit() called.")
        return True

    def OnExit(self):
        self.logger.debug("MyApp.OnExit() called.")

def main():
    logger_formatter = logging.Formatter("%(name)s\t%(levelname)s\t%(message)s")
    logger_stream_handler = logging.StreamHandler()
    logger_stream_handler.setLevel(logging.INFO)
    logger_stream_handler.setFormatter(logger_formatter)
    logger_file_handler = logging.FileHandler("test.log", mode = "w")
    logger_file_handler.setLevel(logging.DEBUG)
    logger_file_handler.setFormatter(logger_formatter)
    logger = logging.getLogger("main")
    logger.setLevel(logging.DEBUG)
    logger.addHandler(logger_stream_handler)
    logger.addHandler(logger_file_handler)
    logger.info("Logger configured.")
    app = MyApp(redirect = True)
    logger.debug("Created instance of MyApp. Calling MainLoop().")
    app.MainLoop()
    logger.debug("MainLoop() ended.")
    logger.info("Exiting program.")
    return 0

if (__name__ == "__main__"):
    main()
Expected behavior is:
- a file is created named test.log
- the file contains logging messages with level DEBUG and INFO/ERROR/WARNING/CRITICAL
- messages from type INFO and ERROR/WARNING/CRITICAL are either shown on the console or in a separate frame, depending on where they are created
- logger messages that are not inside MyApp or MyFrame are displayed at the console
- logger messages from inside MyApp or MyFrame are shown in a separate frame
Actual behavior is:
- The file is created and contains:
main INFO Logger configured.
main.MyFrame DEBUG MyFrame.__init__() called.
main.MyFrame INFO MyFrame.__init__() called.
main.MyApp DEBUG MyApp.OnInit() called.
main.MyApp INFO MyApp.OnInit() called.
main.MyApp DEBUG MyApp.__init__() called.
main DEBUG Created instance of MyApp. Calling MainLoop().
main.MyApp DEBUG MyApp.OnExit() called.
main DEBUG MainLoop() ended.
main INFO Exiting program.
- Console output is:
main INFO Logger configured.
main.MyFrame INFO MyFrame.__init__() called.
main.MyApp INFO MyApp.OnInit() called.
main INFO Exiting program.
- No separate frame is opened, although the lines
main.MyFrame INFO MyFrame.__init__() called.
main.MyApp INFO MyApp.OnInit() called.
should get displayed within a frame and not on the console.
It seems to me that wx.App can't redirect stderr to a frame as soon as a logger instance uses stderr as output. wxPython's docs claim the wanted behavior though, see here.
Any ideas?
Uwe
A:
When wx.App says it will redirect stdout/stderr to a popup window, what it really means is that it rebinds sys.stdout and sys.stderr, so if you write directly to sys.stdout or sys.stderr the output will be redirected to a popup window, e.g. try this:
print "this will go to wx msg frame"
sys.stdout.write("yes it goes")
sys.stderr.write("... and this one too")
The problem here is that if the wx.App is created after the StreamHandler, the handler is still pointing to the old (original) sys.stderr and sys.stdout, not to the new ones wx.App has set. So the simplest solution is to create the wx.App before creating the stream handler, e.g. move app = MyApp(redirect = True) before the logging initialization code.
Alternatively, create a custom logging handler that writes data to sys.stdout and sys.stderr — or better, create your own window and add the data there. E.g. try this:
class LogginRedirectHandler(logging.Handler):
    def __init__(self):
        # run the regular Handler __init__
        logging.Handler.__init__(self)

    def emit(self, record):
        # record.message is only set once the record has been formatted,
        # so format it here before writing
        sys.stdout.write(self.format(record))

loggingRedirectHandler = LogginRedirectHandler()
loggingRedirectHandler.setLevel(logging.DEBUG)
logger.addHandler(loggingRedirectHandler)
A:
The way I do this, which I think is more elegant, is to create a custom logging Handler subclass that posts its messages to a specific logging frame.
This makes it easier to turn GUI logging on/off at runtime.
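A minimal, GUI-free sketch of that idea — the ListHandler below is hypothetical and simply collects formatted records in a list, standing in for a wx logging frame:

```python
import logging

class ListHandler(logging.Handler):
    """Hypothetical stand-in for a GUI log frame: stores formatted records."""
    def __init__(self):
        logging.Handler.__init__(self)
        self.messages = []

    def emit(self, record):
        # format() applies the handler's Formatter to the record
        self.messages.append(self.format(record))

logger = logging.getLogger("demo.gui")
logger.setLevel(logging.DEBUG)
handler = ListHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("window opened")
print(handler.messages)  # ['INFO window opened']
```

Turning GUI logging on or off at runtime is then just a matter of calling logger.addHandler(...) / logger.removeHandler(...).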
| Python: combine logging and wx so that logging stream is redirected to stdout/stderr frame | Here's the thing:
I'm trying to combine the logging module with wx.App()'s redirect feature. My intention is to log to a file AND to stderr. But I want stderr/stdout redirected to a separate frame as is the feature of wx.App.
My test code:
import logging
import wx

class MyFrame(wx.Frame):
    def __init__(self):
        self.logger = logging.getLogger("main.MyFrame")
        wx.Frame.__init__(self, parent = None, id = wx.ID_ANY, title = "MyFrame")
        self.logger.debug("MyFrame.__init__() called.")

    def OnExit(self):
        self.logger.debug("MyFrame.OnExit() called.")

class MyApp(wx.App):
    def __init__(self, redirect):
        self.logger = logging.getLogger("main.MyApp")
        wx.App.__init__(self, redirect = redirect)
        self.logger.debug("MyApp.__init__() called.")

    def OnInit(self):
        self.frame = MyFrame()
        self.frame.Show()
        self.SetTopWindow(self.frame)
        self.logger.debug("MyApp.OnInit() called.")
        return True

    def OnExit(self):
        self.logger.debug("MyApp.OnExit() called.")

def main():
    logger_formatter = logging.Formatter("%(name)s\t%(levelname)s\t%(message)s")
    logger_stream_handler = logging.StreamHandler()
    logger_stream_handler.setLevel(logging.INFO)
    logger_stream_handler.setFormatter(logger_formatter)
    logger_file_handler = logging.FileHandler("test.log", mode = "w")
    logger_file_handler.setLevel(logging.DEBUG)
    logger_file_handler.setFormatter(logger_formatter)
    logger = logging.getLogger("main")
    logger.setLevel(logging.DEBUG)
    logger.addHandler(logger_stream_handler)
    logger.addHandler(logger_file_handler)
    logger.info("Logger configured.")
    app = MyApp(redirect = True)
    logger.debug("Created instance of MyApp. Calling MainLoop().")
    app.MainLoop()
    logger.debug("MainLoop() ended.")
    logger.info("Exiting program.")
    return 0

if (__name__ == "__main__"):
    main()
Expected behavior is:
- a file is created named test.log
- the file contains logging messages with level DEBUG and INFO/ERROR/WARNING/CRITICAL
- messages from type INFO and ERROR/WARNING/CRITICAL are either shown on the console or in a separate frame, depending on where they are created
- logger messages that are not inside MyApp or MyFrame are displayed at the console
- logger messages from inside MyApp or MyFrame are shown in a separate frame
Actual behavior is:
- The file is created and contains:
main INFO Logger configured.
main.MyFrame DEBUG MyFrame.__init__() called.
main.MyFrame INFO MyFrame.__init__() called.
main.MyApp DEBUG MyApp.OnInit() called.
main.MyApp INFO MyApp.OnInit() called.
main.MyApp DEBUG MyApp.__init__() called.
main DEBUG Created instance of MyApp. Calling MainLoop().
main.MyApp DEBUG MyApp.OnExit() called.
main DEBUG MainLoop() ended.
main INFO Exiting program.
- Console output is:
main INFO Logger configured.
main.MyFrame INFO MyFrame.__init__() called.
main.MyApp INFO MyApp.OnInit() called.
main INFO Exiting program.
- No separate frame is opened, although the lines
main.MyFrame INFO MyFrame.__init__() called.
main.MyApp INFO MyApp.OnInit() called.
should get displayed within a frame and not on the console.
It seems to me that wx.App can't redirect stderr to a frame as soon as a logger instance uses stderr as output. wxPython's docs claim the wanted behavior though, see here.
Any ideas?
Uwe
| [
"When wx.App says it will redirect stdout/stderr to a popup window, what it means really is that it will redirect sys.stdout and sys.stderr, so if you directly write to sys.stdout or sys.stderr it will be redirected to a popup window e.g. try this \nprint \"this will go to wx msg frame\"\nsys.stdout.write(\"yes it goes\")\nsys.stderr.write(\"... and this one too\")\n\nProblem here is that if wxApp is created after creating streamhandler, streamhandler is pointing to old(original) sys.stderr and sys.stdout not to the new ones which wxApp has set, so a simpler solution is to create wx.App before creating streap handler e.g. in code move app = MyApp(redirect = True) before logging initialization code.\nAlternatively create a custom logging handler and write data to sys.stdout and sys.stderr or better create you own window and add data there. e.g. try this\nclass LogginRedirectHandler(logging.Handler):\n def __init__(self,):\n # run the regular Handler __init__\n logging.Handler.__init__(self)\n\n def emit(self, record):\n sys.stdout.write(record.message)\n\nloggingRedirectHandler = LogginRedirectHandler()\nlogger_file_handler.setLevel(logging.DEBUG)\nlogger.addHandler(loggingRedirectHandler)\n\n",
"The way I do this, which I think is more elegant, is to create a custom logging Handler subclass that posts its messages to a specific logging frame.\nThis makes it easier to turn GUI logging on/off at runtime.\n"
] | [
1,
1
] | [] | [] | [
"logging",
"python",
"wxpython"
] | stackoverflow_0002689441_logging_python_wxpython.txt |
Q:
Python - Blackjack
def showCards():
    #SUM
    sum = playerCards[0] + playerCards[1]
    #Print cards
    print "Player's Hand: " + str(playerCards) + " : " + "sum"
    print "Dealer's Hand: " + str(compCards[0]) + " : " + "sum"

compCards = [Deal(),Deal()]
playerCards = [Deal(),Deal()]
How can I add up the integer elements of a list containing two values? Under #SUM the error is that it can't combine lists like ints...
A:
To find the value of a hand here you can just do something like
compSum = sum(compCards)
But it looks like you might have tried that from the second part of your post mentioning #SUM, I don't know what you were trying to say. This will only work if Deal() returns integers.
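For example, with made-up integer card values (assuming Deal() returns plain ints):

```python
compCards = [10, 7]       # hypothetical values returned by Deal()
compSum = sum(compCards)  # built-in sum() adds the list's integers
print(compSum)            # 17
```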
A:
Aside from the comments mentioned above, sum is actually a built in function in Python that does just what you seem to be looking for - so don't overwrite it and use it as an identifier name! Instead use it.
Also there's a style guide that all Python programmers are meant to follow - it's what helps to further distinguish Python code from the inscrutable sludge often encountered in code written in other languages such as say Perl or PHP. There's a higher standard in Python and you're not meeting it. Style
So here's a rewrite of your code along with some guesses to fill in the missing parts.
from random import randint

CARD_FACES = {1: "Ace", 2: "2", 3: "3", 4: "4", 5: "5", 6: "6", 7: "7", 8: "8",
              9: "9", 10: "10", 11: "Jack", 12: "Queen", 13: "King"}

def deal():
    """Deal a card - returns a value indicating a card with the Ace
    represented by 1 and the Jack, Queen and King by 11, 12, 13
    respectively.
    """
    return randint(1, 13)

def _get_hand_value(cards):
    """Get the value of a hand based on the rules for Black Jack."""
    val = 0
    for card in cards:
        if 1 < card <= 10:
            val += card  # 2 thru 10 are worth their face values
        elif card > 10:
            val += 10  # Jack, Queen and King are worth 10

    # Deal with the Ace if present. Worth 11 if total remains 21 or lower
    # otherwise it's worth 1.
    if 1 in cards and val + 11 <= 21:
        return val + 11
    elif 1 in cards:
        return val + 1
    else:
        return val

def show_hand(name, cards):
    """Print a message showing the contents and value of a hand."""
    faces = [CARD_FACES[card] for card in cards]
    val = _get_hand_value(cards)
    if val == 21:
        note = "BLACK JACK!"
    else:
        note = ""
    print "%s's hand: %s, %s : %s %s" % (name, faces[0], faces[1], val, note)

# Deal 2 cards to both the dealer and a player and show their hands
for name in ("Dealer", "Player"):
    cards = (deal(), deal())
    show_hand(name, cards)
Well ok so I got carried away and actually wrote the whole thing. As another poster wrote sum(list_of_values) is the way to go but actually is too simplistic for the Black Jack rules.
| Python - Blackjack | def showCards():
    #SUM
    sum = playerCards[0] + playerCards[1]
    #Print cards
    print "Player's Hand: " + str(playerCards) + " : " + "sum"
    print "Dealer's Hand: " + str(compCards[0]) + " : " + "sum"

compCards = [Deal(),Deal()]
playerCards = [Deal(),Deal()]
How can I add up the integer elements of a list containing two values? Under #SUM the error is that it can't combine lists like ints...
| [
"To find the value of a hand here you can just do something like\ncompSum = sum(compCards)\n\nBut it looks like you might have tried that from the second part of your post mentioning #SUM, I don't know what you were trying to say. This will only work if Deal() returns integers.\n",
"Aside from the comments mentioned above, sum is actually a built in function in Python that does just what you seem to be looking for - so don't overwrite it and use it as an identifier name! Instead use it. \nAlso there's a style guide that all Python programmers are meant to follow - it's what helps to further distinguish Python code from the inscrutable sludge often encountered in code written in other languages such as say Perl or PHP. There's a higher standard in Python and you're not meeting it. Style\nSo here's a rewrite of your code along with some guesses to fill in the missing parts.\nfrom random import randint\n\nCARD_FACES = {1: \"Ace\", 2: \"2\", 3: \"3\", 4: \"4\", 5: \"5\", 6: \"6\", 7: \"7\", 8: \"8\", \n 9: \"9\", 10: \"10\", 11: \"Jack\", 12: \"Queen\", 13: \"King\"}\n\ndef deal():\n \"\"\"Deal a card - returns a value indicating a card with the Ace\n represented by 1 and the Jack, Queen and King by 11, 12, 13\n respectively.\n \"\"\"\n return randint(1, 13)\n\ndef _get_hand_value(cards):\n \"\"\"Get the value of a hand based on the rules for Black Jack.\"\"\"\n val = 0\n for card in cards:\n if 1 < card <= 10:\n val += card # 2 thru 10 are worth their face values\n elif card > 10:\n val += 10 # Jack, Queen and King are worth 10\n\n # Deal with the Ace if present. 
Worth 11 if total remains 21 or lower\n # otherwise it's worth 1.\n if 1 in cards and val + 11 <= 21:\n return val + 11\n elif 1 in cards:\n return val + 1\n else:\n return val \n\ndef show_hand(name, cards):\n \"\"\"Print a message showing the contents and value of a hand.\"\"\"\n faces = [CARD_FACES[card] for card in cards]\n val = _get_hand_value(cards)\n\n if val == 21:\n note = \"BLACK JACK!\"\n else:\n note = \"\"\n\n print \"%s's hand: %s, %s : %s %s\" % (name, faces[0], faces[1], val, note)\n\n\n# Deal 2 cards to both the dealer and a player and show their hands\nfor name in (\"Dealer\", \"Player\"):\n cards = (deal(), deal())\n show_hand(name, cards)\n\nWell ok so I got carried away and actually wrote the whole thing. As another poster wrote sum(list_of_values) is the way to go but actually is too simplistic for the Black Jack rules.\n"
] | [
1,
1
] | [] | [] | [
"blackjack",
"python"
] | stackoverflow_0002982133_blackjack_python.txt |
Q:
Output being printed in the same line, Py3k
In Python 2.x, I'd write...
for i in range(5):
    print i,
...to get integers from 0 to 4 printed in the same row. How to do that in Python 3.x, since print is a function now?
A:
Use print(x, end = ' '):
From the release notes:
Old: print x, # Trailing comma suppresses newline
New: print(x, end=" ") # Appends a space instead of a newline
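Putting it together, the Python 3 version of the loop is:

```python
# Python 3: print is a function; end=" " replaces the old trailing comma.
for i in range(5):
    print(i, end=" ")  # 0 1 2 3 4, all on one row
print()                # emit the final newline

# The same row in a single call:
print(*range(5))       # 0 1 2 3 4
```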
| Output being printed in the same line, Py3k | In Python 2.x, I'd write...
for i in range(5):
    print i,
...to get integers from 0 to 4 printed in the same row. How to do that in Python 3.x, since print is a function now?
| [
"Use print(x, end = ' '):\nFrom the release notes:\nOld: print x, # Trailing comma suppresses newline \nNew: print(x, end=\" \") # Appends a space instead of a newline\n\n"
] | [
7
] | [] | [] | [
"python",
"python_2.7",
"python_3.x"
] | stackoverflow_0002982295_python_python_2.7_python_3.x.txt |
Q:
'NoneType' object has no attribute 'data'
I am sending a SOAP request to my server and getting the response back. sample of the response string is shown below:
<?xml version = '1.0' ?>
<env:Envelope xmlns:env=http:////www.w3.org/2003/05/soap-envelop
.
..
..
<env:Body>
<epas:get-all-config-resp xmlns:epas="urn:organization:epas:soap"> ^M
...
...
<epas:property name="Tom">12</epas:property>
>
> <epas:property name="Alice">34</epas:property>
>
> <epas:property name="John">56</epas:property>
>
> <epas:property name="Danial">78</epas:property>
>
> <epas:property name="George">90</epas:property>
>
> <epas:property name="Luise">11</epas:property>
...
^M
</env:Body?
</env:Envelop>
What I noticed in the response is that there is an extra character shown in the body which is "^M". Not sure if this could be the issue. Note the ^M shown!
when I tried parsing the string returned from the server to get the names and values using the code sample:
elements = minidom.parseString(xmldoc).getElementsByTagName("property")
myDict = {}
for element in elements:
    myDict[element.getAttribute('name')] = element.firstChild.data
But, I am getting this error: 'NoneType' object has no attribute 'data'. May be its something to do with the "^M" shown on the xml response back!
Any ideas/comments would be appreciated,
Cheers
A:
[Edited to make clearer, and to suggest looking for an empty element]
Apparently, some of the elements returned by getElementsByTagName don't have a firstChild. This happens when the element is empty, as in
<epas:property name="Empty"></epas:property>
When minidom encounters that situation, it'll set "element.firstChild" to None. This is very likely what's happening to you.
Otherwise, it's hard to say what's happening, exactly, with only a fragment of the XML (and a broken one, at that), but you could try catching the exception and inspecting the element in question:
for element in elements:
    try:
        myDict[element.getAttribute('name')] = element.firstChild.data
    except AttributeError:
        print element, element.firstChild
Or, instead of simply printing the element, you could call the debugger (import pdb; pdb.set_trace()). Then you can see the element, and understand why it's giving you trouble.
BTW, the "^M" is simply a windows-style carriage-return. I adapted the xml fragment you pasted, to test locally, and the "^M" makes no difference whatsoever, minidom takes care of it.
So, check for an empty element, or use the try/except as I suggested. If you still can't tell what's going on, paste the complete XML string (at http://pastebin.com/, for example), I might be able to help.
Also, on a related note: once you've sorted out this issue, you can construct the dictionary with a list comprehension:
myDict = dict((element.getAttribute('name'), element.firstChild.data) for element in elements)
And, if you've determined that it is a matter of empty elements, you can skip them thusly:
myDict = dict((element.getAttribute('name'), element.firstChild.data) for element in elements if element.firstChild is not None)
A:
You could filter elements which the first child is None, it seems to be about the ^M indeed, it is probably being turned into a TextNode object, a blank one without data.
elements = minidom.parseString(xmldoc).getElementsByTagName("property")
myDict = {}
for element in elements:
    if element.firstChild:
        myDict[element.getAttribute('name')] = element.firstChild.data
A:
The error indicates that one of your elements has a firstChild that has been set to None, and therefore trying to access its .data results in an error.
I suspect that the trailing ^M is the problem. The element produced for the ^M is invalid, and/so it has no firstChild element.
You can get around this by checking to see whether firstChild is equal to None before you try to extract its .data member.
A:
element.firstChild is None. Are you sure you don't want element.data? I would guess firstChild refers to the first child element of element rather than a text node.
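To make the empty-element diagnosis concrete, here is a minimal, self-contained sketch (Python 3 syntax; the sample XML below is made up, not the asker's actual SOAP response):

```python
from xml.dom import minidom

xmldoc = """<root>
  <property name="Tom">12</property>
  <property name="Empty"></property>
  <property name="Alice">34</property>
</root>"""

elements = minidom.parseString(xmldoc).getElementsByTagName("property")
# An empty element has firstChild == None, so guard before touching .data
myDict = {e.getAttribute('name'): e.firstChild.data
          for e in elements if e.firstChild is not None}
print(myDict)  # {'Tom': '12', 'Alice': '34'}
```

The "Empty" property is silently skipped instead of raising the AttributeError from the question.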
| 'NoneType' object has no attribute 'data' | I am sending a SOAP request to my server and getting the response back. sample of the response string is shown below:
<?xml version = '1.0' ?>
<env:Envelope xmlns:env=http:////www.w3.org/2003/05/soap-envelop
.
..
..
<env:Body>
<epas:get-all-config-resp xmlns:epas="urn:organization:epas:soap"> ^M
...
...
<epas:property name="Tom">12</epas:property>
>
> <epas:property name="Alice">34</epas:property>
>
> <epas:property name="John">56</epas:property>
>
> <epas:property name="Danial">78</epas:property>
>
> <epas:property name="George">90</epas:property>
>
> <epas:property name="Luise">11</epas:property>
...
^M
</env:Body?
</env:Envelop>
What I noticed in the response is that there is an extra character shown in the body which is "^M". Not sure if this could be the issue. Note the ^M shown!
when I tried parsing the string returned from the server to get the names and values using the code sample:
elements = minidom.parseString(xmldoc).getElementsByTagName("property")
myDict = {}
for element in elements:
myDict[element.getAttribute('name')] = element.firstChild.data
But, I am getting this error: 'NoneType' object has no attribute 'data'. May be its something to do with the "^M" shown on the xml response back!
Any ideas/comments would be appreciated,
Cheers
| [
"[Edited to make clearer, and to suggest looking for an empty element]\nApparently, some of the elements returned by getElementsByTagName don't have a firstChild. This happens when the element is empty, as in\n<epas:property name=\"Empty\"></epas:property>\n\nWhen minidom encounters that situation, it'll set \"element.firstChild\" to None. This is very likely what's happening to you.\nOtherwise, it's hard to say what's happening, exactly, with only a fragment of the XML (and a broken one, at that), but you could try catching the exception and inspecting the element in question:\nfor element in elements:\n try:\n myDict[element.getAttribute('name')] = element.firstChild.data\n except AttributeError:\n print element, element.firstChild\n\nOr, instead of simply printing the element, you could call the debugger (import pdb; pdb.set_trace()). Then you can see the element, and understand why it's giving you trouble.\nBTW, the \"^M\" is simply a windows-style carriage-return. I adapted the xml fragment you pasted, to test locally, and the \"^M\" makes no difference whatsoever, minidom takes care of it.\nSo, check for an empty element, or use the try/except as I suggested. If you still can't tell what's going on, paste the complete XML string (at http://pastebin.com/, for example), I might be able to help.\nAlso, on a related note: once you've sorted out this issue, you can construct the dictionary with a list comprehension:\nmyDict = dict((element.getAttribute('name'), element.firstChild.data) for element in elements)\n\nAnd, if you've determined that it is a matter of empty elements, you can skip them thusly:\nmyDict = dict((element.getAttribute('name'), element.firstChild.data) for element in elements if element.firstChild is not None)\n\n",
"You could filter elements which the first child is None, it seems to be about the ^M indeed, it is probably being turned into a TextNode object, a blank one without data.\nelements = minidom.parseString(xmldoc).getElementsByTagName(\"property\") \nmyDict = {}\nfor element in elements:\n if element.firstChild:\n myDict[element.getAttribute('name')] = element.firstChild.data\n\n",
"The error indicates that one of your elements has a firstChild that has been set to None, and therefore trying to access its .data results in an error.\nI suspect that the trailing ^M is the problem. The element produced for the ^M is invalid, and/so it has no firstChild element.\nYou can get around this by checking to see whether firstChild is equal to None before you try to extract its .data member.\n",
"element.firstChild is None. Are you sure you don't want element.data? I would guess firstChild refers to the first child element of element rather than a text node.\n"
] | [
3,
1,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0002976838_python.txt |
Q:
Specifying formatting for csv.writer in Python
I am using csv.DictWriter to output csv files from a set of dictionaries. I use the following function:
def dictlist2file(dictrows, filename, fieldnames, delimiter='\t',
                  lineterminator='\n'):
    out_f = open(filename, 'w')
    # Write out header
    header = delimiter.join(fieldnames) + lineterminator
    out_f.write(header)
    # Write out dictionary
    data = csv.DictWriter(out_f, fieldnames,
                          delimiter=delimiter,
                          lineterminator=lineterminator)
    data.writerows(dictrows)
    out_f.close()
where dictrows is a list of dictionaries, and fieldnames provides the headers that should be serialized to file.
Some of the values in my dictionary list (dictrows) are numeric -- e.g. floats, and I'd like to specify the formatting of these. For example, I might want floats to be serialized with "%.2f" rather than full precision. Ideally, I'd like to specify some kind of mapping that says how to format each type, e.g.
{float: "%.2f"}
that says that if you see a float, format it with %.2f. Is there an easy way to do this? I don't want to subclass DictWriter or anything complicated like that -- this seems like very generic functionality.
How can this be done?
The only other solution I can think of is: instead of messing with the formatting of DictWriter, just use the decimal package to specify the decimal precision of floats to be %.2 which will cause to be serialized as such. Don't know if this is a better solution?
thanks very much for your help.
A:
class TypedWriter:
    """
    A CSV writer which will write rows to CSV file "f",
    which uses "fieldformats" to format fields.
    """

    def __init__(self, f, fieldnames, fieldformats, **kwds):
        self.writer = csv.DictWriter(f, fieldnames, **kwds)
        self.formats = fieldformats

    def writerow(self, row):
        self.writer.writerow(dict((k, self.formats[k] % v)
                                  for k, v in row.iteritems()))

    def writerows(self, rows):
        for row in rows:
            self.writerow(row)
Not tested.
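For what it's worth, here is a tested variant of the same idea in Python 3 (iteritems() becomes items(); the field names, format strings, and sample rows are invented for the demonstration):

```python
import csv
import io

class TypedWriter:
    """A csv.DictWriter wrapper that applies a per-field format string."""

    def __init__(self, f, fieldnames, fieldformats, **kwds):
        self.writer = csv.DictWriter(f, fieldnames, **kwds)
        self.formats = fieldformats

    def writerow(self, row):
        # Format every value with its field's format string before writing
        self.writer.writerow({k: self.formats[k] % v for k, v in row.items()})

    def writerows(self, rows):
        for row in rows:
            self.writerow(row)

buf = io.StringIO()
tw = TypedWriter(buf, ['name', 'value'], {'name': '%s', 'value': '%.2f'},
                 delimiter='\t', lineterminator='\n')
tw.writerows([{'name': 'pi', 'value': 3.14159}, {'name': 'e', 'value': 2.71828}])
print(buf.getvalue())  # pi<TAB>3.14 / e<TAB>2.72
```

The floats come out rounded to two decimals, which is exactly what the question asks for.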
| Specifying formatting for csv.writer in Python | I am using csv.DictWriter to output csv files from a set of dictionaries. I use the following function:
def dictlist2file(dictrows, filename, fieldnames, delimiter='\t',
lineterminator='\n'):
out_f = open(filename, 'w')
# Write out header
header = delimiter.join(fieldnames) + lineterminator
out_f.write(header)
# Write out dictionary
data = csv.DictWriter(out_f, fieldnames,
delimiter=delimiter,
lineterminator=lineterminator)
data.writerows(dictrows)
out_f.close()
where dictrows is a list of dictionaries, and fieldnames provides the headers that should be serialized to file.
Some of the values in my dictionary list (dictrows) are numeric -- e.g. floats, and I'd like to specify the formatting of these. For example, I might want floats to be serialized with "%.2f" rather than full precision. Ideally, I'd like to specify some kind of mapping that says how to format each type, e.g.
{float: "%.2f"}
that says that if you see a float, format it with %.2f. Is there an easy way to do this? I don't want to subclass DictWriter or anything complicated like that -- this seems like very generic functionality.
How can this be done?
The only other solution I can think of is: instead of messing with the formatting of DictWriter, just use the decimal package to specify the decimal precision of floats to be %.2 which will cause to be serialized as such. Don't know if this is a better solution?
thanks very much for your help.
| [
"class TypedWriter:\n \"\"\"\n A CSV writer which will write rows to CSV file \"f\",\n which uses \"fieldformats\" to format fields.\n \"\"\"\n\n def __init__(self, f, fieldnames, fieldformats, **kwds):\n self.writer = csv.DictWriter(f, fieldnames, **kwds)\n self.formats = fieldformats\n\n def writerow(self, row):\n self.writer.writerow(dict((k, self.formats[k] % v) \n for k, v in row.iteritems()))\n\n def writerows(self, rows):\n for row in rows:\n self.writerow(row)\n\nNot tested.\n"
] | [
5
] | [] | [] | [
"csv",
"parsing",
"python"
] | stackoverflow_0002982642_csv_parsing_python.txt |
Q:
Django admin add page, how to, autofill with latest data(0002)+1=0003
When adding new data, can we automatically fill in a dynamic default value, where the value is the previously recorded value plus one, e.g. (0002)+1=0003?
A:
Not reliably. What will happen if multiple people access it at the same time is that data will be overwritten. Let the PK serve its purpose, behind the scenes.
| Django admin add page, how to, autofill with latest data(0002)+1=0003 | When adding a new data, can we automatically add a dynamic default data where the value is previous recorded data(0002)+1=0003
| [
"Not reliably. What will happen if multiple people access it at the same time is that data will be overwritten. Let the PK serve its purpose, behind the scenes.\n"
] | [
1
] | [] | [] | [
"admin",
"django",
"python"
] | stackoverflow_0002982708_admin_django_python.txt |
Q:
Python program for NIST randomness equation
There is a recurrence equation on page 1789 of this paper and I need some help making a python program to calculate pi_i. I have no idea what is going on here.
Other references:original paper, pages (according to adobe, not the physical pages) 43 and 86
Edit: I had already deleted what I wrote because all the answers I got were 0, even though all the values were floats. I believe what I had looked somewhat like the code posted below.
A:
Here's a pseudocode/VBAish answer:
Function T(i as Integer, n as Integer, m as Integer) As Double

Dim j As Integer, temp As Double

Select Case i
    Case 0
        If n < 1 Then
            n = 1
        Else
            If n < m Then
                T = 2 * T(0,n-1)
            Else
                T = 2 * T(0,n-1) - T(0,n-m-1)
            End If
        End If
    Case 1
        If n < m Then
            T = 0
        Else
            If n = m Then
                T = 1
            Else
                If n = m + 1 Then
                    T = 2
                Else
                    temp = 0
                    For j = -1 to n-m-1
                        temp = temp + T(0,j) * T(0,n-m-2-j)
                    Next j
                    T = temp
                End If
            End If
        End If
    Case 2 to 9999999
        temp = 0
        For j = -1 to n-2*m-i
            temp = temp + T(0,j) * T(i-1,n-m-2-j)
        Next j
        T = T(i-1,n-1) + temp
End Select

End Function
A:
you will need to calculate the intermediate values as described in the paper, then loop on them to add them where you see the big summation signs...
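A speculative Python transliteration of the pseudocode above. Note a likely bug in the VBA: the n < 1 branch of Case 0 assigns n but never assigns T, so the function would return VBA's default 0.0 and zero out every result, which matches the all-zero answers mentioned in the question. The base case below is therefore assumed to be 1; with m = 3 that makes T(0, n) count the binary strings of length n with no run of 3 ones (1, 2, 4, 7, 13, ...), which is plausible for a randomness test but should still be verified against the paper.

```python
def T(i, n, m):
    """Speculative transliteration of the VBA pseudocode; base case assumed."""
    if i == 0:
        if n < 1:
            return 1  # assumed base case -- the pseudocode leaves T unset here
        if n < m:
            return 2 * T(0, n - 1, m)
        return 2 * T(0, n - 1, m) - T(0, n - m - 1, m)
    if i == 1:
        if n < m:
            return 0
        if n == m:
            return 1
        if n == m + 1:
            return 2
        # VBA: For j = -1 To n-m-1 (inclusive)
        return sum(T(0, j, m) * T(0, n - m - 2 - j, m) for j in range(-1, n - m))
    # i >= 2; VBA: For j = -1 To n-2*m-i (inclusive)
    return T(i - 1, n - 1, m) + sum(
        T(0, j, m) * T(i - 1, n - m - 2 - j, m) for j in range(-1, n - 2 * m - i + 1))

print([T(0, n, 3) for n in range(5)])  # [1, 2, 4, 7, 13]
```

Do not trust any numbers out of this until the base case is checked against the paper's definition.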
| Python program for NIST randomness equation | There is a recurrence equation on page 1789 of this paper and I need some help making a python program to calculate pi_i. I have no idea what is going on here.
Other references:original paper, pages (according to adobe, not the physical pages) 43 and 86
edit and i had already deleted what i wrote because all the answers i got were 0, even though all the values were floats. i believe what i had looked somewhat like the code posted below
| [
"Here's a pseudocode/VBAish answer:\nFunction T(i as Integer, n as Integer, m as Integer) As Double\n\nDim j As Integer, temp As Double\n\nSelect Case i\n Case 0\n If n < 1 Then\n n = 1\n Else\n If n < m Then\n T = 2 * T(0,n-1)\n Else\n T = 2 * T(0,n-1) - T(0,n-m-1)\n End If\n End If\n Case 1\n If n < m Then\n T = 0\n Else\n If n = m Then\n T = 1\n Else\n If n = m + 1 Then\n T = 2\n Else\n temp = 0\n For j = -1 to n-m-1\n temp = temp + T(0,j) * T(0,n-m-2-j)\n Next j\n T = temp\n End If\n End If\n End If\n Case 2 to 9999999\n temp = 0\n For j = -1 to n-2*m-i\n temp = temp + T(0,j) * T(i-1,n-m-2-j)\n Next j\n T = T(i-1,n-1) + temp\nEnd Case\n\nEnd Function\n\n",
"you will need to calculate the intermediate values as described in the paper, then loop on them to add them where you see the big summation signs...\n"
] | [
1,
0
] | [] | [] | [
"equation",
"math",
"python"
] | stackoverflow_0002982604_equation_math_python.txt |
Q:
How to control Microsoft Speech Recognition app?
I want to know if it's possible to control "Microsoft Speech Recognition" using c#.
Is it possible, for instance, to simulate the click on "On: Listen to everything I say" programmatically using c# or python?
A:
JRobert had the right idea.
If you were using C++, then you would call ISpRecognizer::SetRecoState(SPRST_ACTIVE), and then, if you're running on Windows 7, QI the ISpRecognizer for ISpRecognizer3 and call ISpRecognizer3::SetActiveCategory(NULL) to force the recognizer into the ON state.
But, since you're using C#, you should use System.Speech.Recognition.SpeechRecognizer and set the State property to Listening. (Note that this will not, as far as I know, switch from Sleep to On.)
| How to control Microsoft Speech Recognition app? | I want to know if it's possible to control "Microsoft Speech Recognition" using c#.
(source: yfrog.com)
Is it possible, for instance, to simulate the click on "On: Listen to everything I say" programmatically using c# or python?
| [
"JRobert had the right idea. \nIf you were using C++, then you would call ISpRecognizer::SetRecoState(SPRST_ACTIVE), and then, if you're running on Windows 7, QI the ISpRecognizer for ISpRecognizer3 and call ISpRecognizer3::SetActiveCategory(NULL) to force the recognizer into the ON state.\nBut, since you're using C#, you should use System.Speech.Recognition.SpeechRecognizer and set the State property to Listening. (Note that this will not, as far as I know, switch from Sleep to On.)\n"
] | [
0
] | [
"Here's Microsoft's Speech API documentation, and an \nexample in Python.\n"
] | [
-1
] | [
"c#",
"python",
"speech_recognition"
] | stackoverflow_0002972889_c#_python_speech_recognition.txt |
Q:
Python - excel - xlwt: colouring every second row
I just finished a MySQL-to-Excel script with xlwt, and I need to colour every second row for easy reading.
I have tried this:
row = easyxf('pattern: pattern solid, fore_colour blue')
for i in range(0,10,2):
    ws0.row(i).set_style(row)
On its own this colouring is fine, but when I write my data the rows are white again.
Can someone please show me an example, 'cuz I'm lost in coding :/
Best Regards.
A:
I've only ever applied color to rows using the write() method.
Does something like this work for you? (adapted from this excellent example):
mystyle = easyxf('pattern: pattern solid, fore_colour blue')

for rowx, row in enumerate(data):
    for colx, value in enumerate(row):
        if rowx % 2 == 0:
            # apply style for even-numbered rows
            ws0.write(rowx, colx, value, mystyle)
        else:
            # no style for odd-numbered rows
            ws0.write(rowx, colx, value)
A:
See the answer to your identical question in the python-excel google-group.
| Python - excel - xlwt: colouring every second row | i just finish some MYSQL to excel script with xlwt and I need to colour every second row for easy reading.
I have tried this:
row = easyxf('pattern: pattern solid, fore_colour blue')
for i in range(0,10,2):
ws0.row(i).set_style(row)
Alone this colouring is fine, but when when I write my data rows are again white.
Can some please show me some example 'cuz I m lost in coding :/
Best Regards.
| [
"I've only ever applied color to rows using the write() method.\nDoes something like this work for you? (adapted from this excellent example):\nmystyle = easyxf('pattern: pattern solid, fore_colour blue')\n\nfor row in data:\n rowx += 1\n for colx, value in enumerate(row):\n if rowx % 2 == 0:\n # apply style for even-numbered rows\n ws0.write(rowx, colx, value, mystyle)\n else:\n # no style for odd-numbered rows\n ws0.write(rowx, colx, value)\n\n",
"See the answer to your identical question in the python-excel google-group.\n"
] | [
4,
1
] | [] | [] | [
"excel",
"python",
"xlwt"
] | stackoverflow_0002981293_excel_python_xlwt.txt |
Q:
How to query an input in Python without outputting a new line
The title describes the question pretty much.
A:
The input function, which does the query, does not emit a newline:
>>> input('tell me: ')
tell me: what?
'what?'
>>>
as you see, the prompt is output without any newline, and what the user types after that appears on the same line as the prompt. Of course, the user is also typing a newline, and (like everything else the user types), that newline is echoed (so further results are on following lines). Is that your problem?
If so, then you need to switch to platform-specific approaches, such as curses on just about any machine except Windows, and msvcrt on Windows (or, you could look for a curses port on Windows, but I don't know if there's one for Python 3). The two modules are very different, and you haven't clarified your platform (or your exact needs -- my previous paragraph is an attempt at an educated guess;-), so I'll just wait for you to clarify needs and platforms rather than launching into long essays that may not prove helpful.
A:
If you use raw_input it does not insert a new line automatically.
>>> name = raw_input ("What...is your name? ")
What...is your name? Arthur, King of the Britons!
| How to query an input in Python without outputting a new line | The title describes the question pretty much.
| [
"The input function, which does the query, does not emit a newline:\n>>> input('tell me: ')\ntell me: what?\n'what?'\n>>> \n\nas you see, the prompt is output without any newline, and what the user types after that appears on the same line as the prompt. Of course, the user is also typing a newline, and (like everything else the user types), that newline is echoed (so further results are on following lines). Is that your problem?\nIf so, then you need to switch to platform-specific approaches, such as curses on just about any machine except Windows, and msvcrt on Windows (or, you could look for a curses port on Windows, but I don't know if there's one for Python 3). The two modules are very different, and you haven't clarified your platform (or your exact needs -- my previous paragraph is an attempt at an educated guess;-), so I'll just wait for you to clarify needs and platforms rather than launching into long essays that may not prove helpful.\n",
"If you use raw_input it does not insert a new line automatically.\n>>> name = raw_input (\"What...is your name? \") \nWhat...is your name? Arthur, King of the Britons!\n\n"
] | [
3,
1
] | [] | [] | [
"input",
"newline",
"python",
"python_3.x"
] | stackoverflow_0002982964_input_newline_python_python_3.x.txt |
Q:
How to query an input in Python without outputting a new line (cont.)
I already posted this, but here is the exact code:
x1 = input("")
x2 = input("-")
x3 = input("-")
x4 = input("-")
So, how would I do it so that there are no spaces between the first input and the next "-"?
Example:
1234-5678-9101-1121
A:
Ugly, but you could use terminal escape sequences to delete the newline created by the user ending input between each successive call of input().
The appropriate sequences of escapes would be <Esc>[2K to erase the current line, and then possibly <Esc>[nC to move forwards n characters where n is calculated by retrieving the length of the string that the last input() call returned.
A:
>>> x1, x2, x3, x4 = raw_input("Input number as xxxx-xxxx-xxxx-xxxx").split('-')
If you're using Python version 3.x, replace rawinput with input.
A:
You can use the code in Python read a single character from the user to read stdin character by character (basically reading it in unbuffered mode so that the user doesn't have to type enter before you can see the input).
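Putting the split('-') suggestion together as a runnable sketch (the literal string stands in for what the user would type at the prompt):

```python
def split_card(raw):
    """Split a dash-separated number into its four groups."""
    return raw.split('-')

# In a real script: raw = input("Input number as xxxx-xxxx-xxxx-xxxx: ")
x1, x2, x3, x4 = split_card("1234-5678-9101-1121")
print(x1, x2, x3, x4)  # 1234 5678 9101 1121
```

Reading the whole number at once sidesteps the newline problem entirely: there is only one prompt, so nothing has to be erased.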
| How to query an input in Python without outputting a new line (cont.) | I already posted this, but here is the exact code:
x1 = input("")
x2 = input("-")
x3 = input("-")
x4 = input("-")
So, how would I do it so that there are no spaces between the first input and the next "-"?
Example:
1234-5678-9101-1121
| [
"Ugly, but you could use terminal escape sequences to delete the newline created by the user ending input between each successive call of input().\nThe appropriate sequences of escapes would be <Esc>[2K to erase the current line, and then possibly <Esc>[nC to move forwards n characters where n is calculated by retrieving the length of the string that the last input() call returned.\n",
">>> x1, x2, x3, x4 = raw_input(\"Input number as xxxx-xxxx-xxxx-xxxx\").split('-')\n\nIf you're using Python version 3.x, replace rawinput with input.\n",
"You can use the code in Python read a single character from the user to read stdin character by character (basically reading it in unbuffered mode so that the user doesn't have to type enter before you can see the input).\n"
] | [
1,
0,
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0002983090_python_python_3.x.txt |
Q:
How do I process a nested list?
Suppose I have a bulleted list like this:
* list item 1
* list item 2 (a parent)
** list item 3 (a child of list item 2)
** list item 4 (a child of list item 2 as well)
*** list item 5 (a child of list item 4 and a grand-child of list item 2)
* list item 6
I'd like to parse that into a nested list or some other data structure which makes the parent-child relationship between elements explicit (rather than depending on their contents and relative position). For example, here's a list of tuples containing an item and a list of its children (and so forth):
Edit: Hopefully, a more correct list example, where each element in the list is a tuple containing: a bullet's text and, if applicable, a list of children (in the same form).
[('list item 1',),
('list item 2', [('list item 3',), ('list item 4', [('list item 5',)])]
('list item 6',)]
[('list item 1',),
('list item 2', [('list item 3',), ('list item 4', [('list item 5',)])]),
('list item 6',)]
I've attempted to do this with plain Python and some experimentation with Pyparsing, but I'm not making progress. I'm left with two major questions:
What's the strategy I need to employ to make this work? I know recursion is part of the solution, but I'm having a hard time making the connection between this and, say, a Fibonacci sequence.
I'm certain I'm not the first person to have done this, but I don't know the terminology of the problem to make fruitful searches for more information on this topic. What problems are related to this so that I can learn more about solving these kinds of problems in general?
A:
Viewed as a search algorithm, the bulleted list you give is actually a sequence generated by a depth-first search (DFS), so my strategy is simply to rebuild the tree structure from the DFS sequence.
The following is the Python code:
from collections import deque
def dfsBullet(bullet,depth):
    """
    parse the subtree with depth and the startnode of bullet[0]
    """
    li = []
    if depth != 0:
        item = bullet.popleft()
        li.append(item.split(' ',1)[1])
    while (len(bullet) != 0):
        item = bullet[0]
        #apply same algo to the child node
        if len(item.split(' ',1)[0]) > depth:
            sublist = dfsBullet(bullet, len(item.split(' ')[0]))
        #we have traverse all childnode, so go back
        else:
            return li
        #add child tree to the list
        li.append(sublist)
    return li
A:
I can't parse your desired result -- it seems to have more open parentheses than corresponding closed ones and I don't understand the logic behind it.
To make a tree structure explicit, what about, e.g.:
data = '''* list item 1
* list item 2
** list item 3
** list item 4
*** list item 5
* list item 6'''.splitlines()

class Node(object):
    def __init__(self, payload):
        self.payload = payload
        self.children = []
    def show(self, indent):
        print ' '*indent, self.payload
        for c in self.children:
            c.show(indent+2)

def makenest(linelist):
    rootnode = Node(None)
    stack = [(rootnode, 0)]
    for line in linelist:
        for i, c in enumerate(line):
            if c != '*': break
        stars, payload = line[:i], line[i:].strip()
        curlev = len(stars)
        curnod = Node(payload)
        while True:
            parent, level = stack[-1]
            if curlev > level: break
            del stack[-1]
        # a child node of the current top-of-stack
        parent.children.append(curnod)
        stack.append((curnod, curlev))
    rootnode.show(0)

makenest(data)
The show method of course exists just for the purpose of verifying that the part about parsing the strings and creating the tree has worked correctly. If you can specify more precisely exactly how it is that you want to transform your tree into nested tuples and lists, I'm sure it will be easy to add to class Node the appropriate (and probably recursive) method -- so, could you please give this missing specification...?
Edit: since the OP has clarified now, it does, as predicted, become easy to satisfy the spec. Just add to class Node the following method:
    def emit(self):
        if self.children:
            return (self.payload,
                    [c.emit() for c in self.children])
        else:
            return (self.payload,)
and change the last three lines of the code (last one of makenest, a blank one, and the module-level call to makenest) to:
    return [c.emit() for c in rootnode.children]

print(makenest(data))
(The parentheses after print are redundant but innocuous in Python 2, required in Python 3, so I put them there just in case;-).
With these tiny changes, my code runs as requested, now emitting
[('list item 1',), ('list item 2', [('list item 3',), ('list item 4', [('list item 5',)])]), ('list item 6',)]
A:
Keep track of the current "depth" you're parsing at.
If the depth of the next line is more than the current depth, recursively call the parser with the new depth, then add the result from that call to the current list.
If the depth of the next line is equal to the current depth, add it to the current list.
If the depth of the next line is less than the current depth, return the current list.
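Those three rules translate almost line-for-line into a short recursive parser. This sketch (mine, not from the answers above) consumes lines from the front of a list and produces exactly the tuple/list structure requested in the question:

```python
def parse(lines, depth=1):
    """Recursively consume '*'-prefixed lines into a nested tuple/list tree."""
    result = []
    while lines:
        stars, _, text = lines[0].partition(' ')
        level = len(stars)
        if level < depth:            # shallower item: belongs to an ancestor, return
            return result
        if level == depth:           # sibling at this depth: consume it
            lines.pop(0)
            result.append((text.strip(),))
        else:                        # deeper: recurse, attach as children of last item
            children = parse(lines, level)
            result[-1] = (result[-1][0], children)
    return result

data = ["* list item 1",
        "* list item 2",
        "** list item 3",
        "** list item 4",
        "*** list item 5",
        "* list item 6"]
print(parse(data))
```

Note that parse consumes its input list in place; pass a copy if you need the original lines afterwards.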
| How do I process a nested list? | Suppose I have a bulleted list like this:
* list item 1
* list item 2 (a parent)
** list item 3 (a child of list item 2)
** list item 4 (a child of list item 2 as well)
*** list item 5 (a child of list item 4 and a grand-child of list item 2)
* list item 6
I'd like to parse that into a nested list or some other data structure which makes the parent-child relationship between elements explicit (rather than depending on their contents and relative position). For example, here's a list of tuples containing an item and a list of its children (and so forth):
Edit: Hopefully, a more correct list example, where each element in the list is a tuple containing: a bullet's text and, if applicable, a list of children (in the same form).
[('list item 1',),
('list item 2', [('list item 3',), ('list item 4', [('list item 5',)])]
('list item 6',)]
[('list item 1',),
('list item 2', [('list item 3',), ('list item 4', [('list item 5',)])]),
('list item 6',)]
I've attempted to do this with plain Python and some experimentation with Pyparsing, but I'm not making progress. I'm left with two major questions:
What's the strategy I need to employ to make this work? I know recursion is part of the solution, but I'm having a hard time making the connection between this and, say, a Fibonacci sequence.
I'm certain I'm not the first person to have done this, but I don't know the terminology of the problem to make fruitful searches for more information on this topic. What problems are related to this so that I can learn more about solving these kinds of problems in general?
| [
"In the view of search algorithm, the bullet you give is actually a sequence generated by Depth-First-Search. So my strategy is just to rebuild the tree structure with the dfs-sequence. \nFollowing is the python code:\nfrom collections import deque\ndef dfsBullet(bullet,depth):\n \"\"\"\n parse the subtree with depth and the startnode of bullet[0]\n \"\"\"\n li = []\n if depth != 0:\n item = bullet.popleft()\n li.append(item.split(' ',1)[1])\n while (len(bullet) != 0):\n item = bullet[0]\n #apply same algo to the child node\n if len(item.split(' ',1)[0]) > depth:\n sublist = dfsBullet(bullet, len(item.split(' ')[0]))\n #we have traverse all childnode, so go back \n else:\n return li\n #add child tree to the list\n li.append(sublist)\n return li\n\n",
"I can't parse your desired result -- it seems to have more open parentheses than corresponding closed ones and I don't understand the logic behind it.\nTo make a tree structure explicit, what about, e.g.:\ndata = '''* list item 1\n* list item 2\n** list item 3\n** list item 4\n*** list item 5\n* list item 6'''.splitlines()\n\nclass Node(object):\n def __init__(self, payload):\n self.payload = payload\n self.children = []\n def show(self, indent):\n print ' '*indent, self.payload\n for c in self.children:\n c.show(indent+2)\n\ndef makenest(linelist):\n rootnode = Node(None)\n stack = [(rootnode, 0)]\n for line in linelist:\n for i, c in enumerate(line):\n if c != '*': break\n stars, payload = line[:i], line[i:].strip()\n curlev = len(stars)\n curnod = Node(payload)\n while True:\n parent, level = stack[-1]\n if curlev > level: break\n del stack[-1]\n # a child node of the current top-of-stack\n parent.children.append(curnod)\n stack.append((curnod, curlev))\n rootnode.show(0)\n\nmakenest(data)\n\nThe show method of course exists just for the purpose of verifying that the part about parsing the strings and creating the tree has worked correctly. If you can specify more precisely exactly how it is that you want to transform your tree into nested tuples and lists, I'm sure it will be easy to add to class Node the appropriate (and probably recursive) method -- so, could you please give this missing specification...?\nEdit: since the OP has clarified now, it does, as predicted, become easy to satisfy the spec. 
Just add to class Node the following method:\n def emit(self):\n if self.children:\n return (self.payload,\n [c.emit() for c in self.children])\n else:\n return (self.payload,)\n\nand change the last three lines of the code (last one of makenest, a blank one, and the module-level call to makenest) to:\n return [c.emit() for c in rootnode.children]\n\nprint(makenest(data))\n\n(The parentheses after print are redundant but innocuous in Python 2, required in Python 3, so I put them there just in case;-).\nWith these tiny changes, my code runs as requested, now emitting\n[('list item 1',), ('list item 2', [('list item 3',), ('list item 4', [('list item 5',)])]), ('list item 6',)]\n\n",
"Keep track of the current \"depth\" you're parsing at.\n\nIf the depth of the next line is more than the current depth, recursively call the parser with the new depth, then add the result from that call to the current list.\nIf the depth of the next line is equal to the current depth, add it to the current list.\nIf the depth of the next line is less than the current depth, return the current list.\n\n"
] | [
5,
2,
1
] | [] | [] | [
"list",
"parsing",
"python"
] | stackoverflow_0002982992_list_parsing_python.txt |
Q:
python simple function error?
I have a simple function to do simple math operations. If I call this from another script using import, I get no output. If I remove def function, everything is working fine. What's the problem with defining this function? I'm new to Python.
def calci(a, op, b):
    if op == '+':
        c = a + b
    elif op == '-':
        c = a-b
    elif op == '*':
        c= a*b
    elif op =='/':
        if(b == 0):
            print('can\'t divide')
            c = a/b
            print('value is',c)
            return c
result = calci(12,'+', 12)
print(result)
A:
Do you want to return the result to the calling function or print it? The only path through your program that results in a return is division, and when you do this you'll never reach the print statement.
If you want to do both, you should dedent the part:
print('value is',c)
return c
...to the level of the if and elif statements. Don't forget to remove your testing code (result = calci(...) etc).
The reason is that once your code hits a return statement, that's it for the function — nothing else in it will be executed (not quite true, there is an exception handling mechanism called a finally block which is an exception to this, but that's not an issue here).
Added: since you want to just print it, remove the return statement and dedent the print statement.
A:
Your indentation at the end of the function appears to be wrong; the print and return c are only happening if op == '/', and you're only assigning to c if b == 0. The end should be:
elif op =='/':
if(b == 0):
print('can\'t divide') # You should probably return here instead of falling through to the assignment
c = a/b
print('value is',c)
return c
A:
Your function only returns if op=='/'.
Remove a couple of tabs from those two lines and it'll work.
i.e.
def calci(a, op, b):
...
print('value is',c)
return c
A:
The indentation of the return part is incorrect; it should be one level lower. (This is so hard to describe... a flaw of Python's indentation syntax)
Here is the correct code:
def calci(a, op, b):
if op == '+':
c = a + b
elif op == '-':
c = a-b
elif op == '*':
c= a*b
elif op =='/':
if(b == 0):
print('can\'t divide')
return 0
c = a/b
print('value is',c)
return c
result = calci(12,'+', 12)
print(result)
| python simple function error? | I have a simple function to do simple math operations. If I call this from another script using import, I get no output. If I remove the function definition, everything works fine. What's the problem with defining this function? I'm new to Python.
def calci(a, op, b):
if op == '+':
c = a + b
elif op == '-':
c = a-b
elif op == '*':
c= a*b
elif op =='/':
if(b == 0):
print('can\'t divide')
c = a/b
print('value is',c)
return c
result = calci(12,'+', 12)
print(result)
| [
"Do you want to return the result to the calling function or print it? The only path through your program that results in a return is division, and when you do this you'll never reach the print statement.\nIf you want to do both, you should dedent the part:\nprint('value is',c)\nreturn c\n\n...to the level of the if and elif statements. Don't forget to remove your testing code (result = calci(...) etc).\nThe reason is that once your code hits a return statement, that's it for the function — nothing else in it will be executed (not quite true, there is an exception handling mechanism called a finally block which is an exception to this, but that's not an issue here).\nAdded: since you want to just print it, remove the return statement and dedent the print statement.\n",
"Your indentation at the end of the function appears to be wrong; the print and return c are only happening if op == '/', and you're only assigning to c if b == 0. The end should be:\nelif op =='/':\n if(b == 0):\n print('can\\'t divide') # You should probably return here instead of falling through to the assignment\n\n c = a/b\n\n\nprint('value is',c)\nreturn c\n\n",
"Your function only returns if op=='/'. \nRemove a couple of tabs from those two lines and it'll work.\ni.e.\ndef calci(a, op, b): \n\n ...\n\n print('value is',c)\n return c\n\n",
"The indentation of the return part is incorrect, it should be lower-one-level. (This is so hard to describe... a flaw of Python's indentation syntax)\nHere is the correct code:\ndef calci(a, op, b): \n\n if op == '+':\n c = a + b\n\n elif op == '-':\n c = a-b\n\n elif op == '*':\n c= a*b\n\n elif op =='/':\n if(b == 0):\n print('can\\'t divide')\n return 0\n\n c = a/b\n\n\n print('value is',c)\n return c\n\nresult = calci(12,'+', 12)\n\nprint(result)\n\n"
] | [
3,
3,
1,
1
] | [] | [] | [
"function",
"python"
] | stackoverflow_0002983215_function_python.txt |
Q:
QWebView: is it possible to highlight terms and do keyboard navigation?
I am using QWebView from PyQt4. I'd like to
highlight terms of a webpage.
do a keyboard navigation inside a webpage (for example Ctrl-N move to next link)
is it possible?
A:
Have a look at the QWebView findText() method.
bool QWebView::findText(const QString & subString, QWebPage::FindFlags options = 0)
Finds the specified string, subString,
in the page, using the given options.
If the HighlightAllOccurrences flag is
passed, the function will highlight
all occurrences that exist in the
page. All subsequent calls will extend
the highlight, rather than replace it,
with occurrences of the new string.
A:
not trivial, but possible. You could use the toHtml method of your QWebView instance, parse it e.g. with BeautifulSoup (be sure to stick with 3.0.9!-), insert a <span class="myhilite">...</span> around whatever terms you like (as well as the CSS to define exactly what visual effects class myhilite is going to have), and put the modified HTML back with the setHtml -- phew;-).
I guess you could, by using the appropriate functionality that QWebView inherits from QWidget (I don't think QWebView adds any extra relevant functionality of its own), e.g. grabKeyboard if you want to grab all keyboard events, or maybe addAction with an appropriate shortcut -- but I'm not sure exactly what you want to happen when control-N is pressed, so this one is iffier. Maybe you can clarify in terms of the many possible methods of QWebView, QWidget, etc...?
| QWebView: is it possible to highlight terms and do keyboard navigation? | I am using QWebView from PyQt4. I'd like to
highlight terms of a webpage.
do a keyboard navigation inside a webpage (for example Ctrl-N move to next link)
is it possible?
| [
"Have a look to Qwebview findText() method.\n bool QWebView::findText ( const QString & subString,QWebPage::FindFlags options = 0 )\n\n\nFinds the specified string, subString,\n in the page, using the given options.\nIf the HighlightAllOccurrences flag is\n passed, the function will highlight\n all occurrences that exist in the\n page. All subsequent calls will extend\n the highlight, rather than replace it,\n with occurrences of the new string.\n\n",
"\nnot trivial, but possible. You could use the toHtml method of your QWebView instance, parse it e.g. with BeautifulSoup (be sure to stick with 3.0.9!-), insert a <span class=\"myhilite\">...</span> around whatever terms you like (as well as the CSS to define exactly what visual effects class myhilite is going to have), and put the modified HTML back with the setHtml -- phew;-).\nI guess you could, by using the appropriate functionality that QWebView inherits from QWidget (I don't think QWebView adds any extra relevant functionality of its own), e.g. grabKeyboard if you want to grab all keyboard events, or maybe addAction with an appropriate shortcut -- but I'm not sure exactly what you want to happen when control-N is pressed, so this one is iffier. Maybe you can clarify in terms of the many possible methods of QWebView, QWidget, etc...?\n\n"
] | [
3,
2
] | [] | [] | [
"pyqt",
"python",
"qt"
] | stackoverflow_0002940232_pyqt_python_qt.txt |
Q:
How do you iterate through each email in your inbox using python?
I'm completely new to programming and I'm trying to build an autoresponder to send a msg to a specific email address.
Using an if statement, I can check if there is an email from a certain address in the inbox and I can send an email, but if there are multiple emails from that address, how can I make a for loop to send an email for every email from that specific address?
I tried to use this as a loop:
for M.search(None, 'From', address) in M.select():
but I get the error: "can't assign to function call" on that line
A:
As you claim to be new to programming, my best advice is: Always read the documentation.
And maybe you should read a tutorial first.
The documentation provides an example:
import getpass, imaplib
M = imaplib.IMAP4()
M.login(getpass.getuser(), getpass.getpass())
M.select()
typ, data = M.search(None, 'ALL')
for num in data[0].split():
typ, data = M.fetch(num, '(RFC822)')
print 'Message %s\n%s\n' % (num, data[0][3])
M.close()
M.logout()
Have you tried?
Regarding your code:
When you define a for loop, it should be like:
for x in some_data_set:
x is a variable, that holds the value of one item at a time (and is accessible only in the for loop body (with one exception, but this is not important here)).
What you are doing is not related to the imaplib module but just wrong syntax.
Btw. .select() selects a mailbox and only returns the number of messages in the mailbox. I.e. just a scalar value, no sequence you could iterate over:
IMAP4.select([mailbox[, readonly]])
Select a mailbox. Returned data is the count of messages in mailbox (EXISTS response). The default mailbox is 'INBOX'. If the readonly flag is set, modifications to the mailbox are not allowed.
(This is indeed related to imaplib module ;))
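Putting the pieces together for the autoresponder case: below is a small helper (the name is mine, not part of imaplib) that takes an already-logged-in IMAP4 connection with a mailbox selected and returns the ids of the messages from one sender, ready to be looped over:

```python
def messages_from(mail, address):
    """Return the message ids of all messages in the currently
    selected mailbox sent from `address` (ids come back as bytes
    under Python 3, ready to pass to .fetch())."""
    typ, data = mail.search(None, 'FROM', address)
    if typ != 'OK':
        return []
    # data looks like [b'3 7 12']: one space-separated string of ids
    return data[0].split()
```

You would then write `for num in messages_from(M, 'someone@example.com'): ...` and fetch or reply to each message inside the loop.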
| How do you iterate through each email in your inbox using python? | I'm completely new to programming and I'm trying to build an autoresponder to send a msg to a specific email address.
Using an if statement, I can check if there is an email from a certain address in the inbox and I can send an email, but if there are multiple emails from that address, how can I make a for loop to send an email for every email from that specific address?
I tried to use this as a loop:
for M.search(None, 'From', address) in M.select():
but I get the error: "can't assign to function call" on that line
| [
"As you claim to be new to programming, my best advice is: Always read the documentation.\nAnd maybe you should read a tutorial first.\n\nThe documentation provides an example:\nimport getpass, imaplib\n\nM = imaplib.IMAP4()\nM.login(getpass.getuser(), getpass.getpass())\nM.select()\ntyp, data = M.search(None, 'ALL')\nfor num in data[0].split():\n typ, data = M.fetch(num, '(RFC822)')\n print 'Message %s\\n%s\\n' % (num, data[0][3])\nM.close()\nM.logout()\n\nHave you tried?\n\nRegarding your code:\nWhen you define a for loop, it should be like:\nfor x in some_data_set:\n\nx is a variable, that holds the value of one item at a time (and is accessible only in the for loop body (with one exception, but this is not important here)).\nWhat you are doing is not related to the imaplib module but just wrong syntax.\nBtw. .select() selects a mailbox and only returns the number of messages in the mailbox. I.e. just a scalar value, no sequence you could iterate over:\n\nIMAP4.select([mailbox[, readonly]])\n Select a mailbox. Returned data is the count of messages in mailbox (EXISTS response). The default mailbox is 'INBOX'. If the readonly flag is set, modifications to the mailbox are not allowed.\n\n(This is indeed related to imaplib module ;))\n"
] | [
7
] | [] | [] | [
"email",
"imaplib",
"iterator",
"python"
] | stackoverflow_0002983647_email_imaplib_iterator_python.txt |
Q:
ctypes for static libraries?
I'm attempting to write a Python wrapper for poker-eval, a C static library. All the documentation I can find on ctypes indicates that it works on shared/dynamic libraries. Is there a ctypes for static libraries?
I know about cython, but should I use that or recompile the poker-eval into a dynamic library so that I can use ctypes?
Thanks,
Mike
A:
The choice is really up to you. If you have the ability to recompile the library as a shared object, I would suggest that, because it will minimize the non-python code you have to maintain. Otherwise, you'll want to build a python extension module that links to the static library and wraps the functions it exposes.
You mentioned Cython; here's the relevant manual page if you go that route:
http://docs.cython.org/src/tutorial/clibraries.html
There's also SWIG and Pyrex.
A:
I can't say for sure there are no modules out there, but the advantages of dynamic libraries (uses less space, can update without recompiling dependent programs) are such that you're probably better off doing just that.
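For comparison, here is what the ctypes side looks like once you have a shared object to load; the standard C math library stands in for a recompiled poker-eval here (library lookup as on Linux/macOS; poker-eval's own functions would be declared the same way):

```python
import ctypes
import ctypes.util

# Locate the shared C math library; the `or None` fallback loads the
# current process's own symbols on POSIX systems.
libm = ctypes.CDLL(ctypes.util.find_library("m") or None)

# Declare the C signature explicitly: double cos(double)
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

Declaring `argtypes`/`restype` up front is what makes ctypes convert values correctly; without `restype`, the result would be interpreted as an int.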
| ctypes for static libraries? | I'm attempting to write a Python wrapper for poker-eval, a C static library. All the documentation I can find on ctypes indicates that it works on shared/dynamic libraries. Is there a ctypes for static libraries?
I know about cython, but should I use that or recompile the poker-eval into a dynamic library so that I can use ctypes?
Thanks,
Mike
| [
"The choice is really up to you. If you have the ability to recompile the library as a shared object, I would suggest that, because it will minimize the non-python code you have to maintain. Otherwise, you'll want to build a python extension module that links to the static library and wraps the functions it exposes. \nYou mentioned Cython; here's the relevant manual page if you go that route:\nhttp://docs.cython.org/src/tutorial/clibraries.html\nThere's also SWIG and Pyrex.\n",
"I can't say for sure there are no modules out there, but the advantages of dynamic libraries (uses less space, can update without recompiling dependent programs) are such that you're probably better off doing just that.\n"
] | [
9,
1
] | [] | [] | [
"ctypes",
"python",
"static_libraries"
] | stackoverflow_0002983649_ctypes_python_static_libraries.txt |
Q:
How to limit requests per minute per user?
I have several forms on my website and I have several pages with intense database activity. I want to set a cap on requests per user. For example, I don't want people to make over 10 requests in less than 10 seconds.
Is there a way to do this in Django?
A:
You can likely do this with custom middleware. You'll need to keep the data somewhere (db?). See the docs for how to write your own middleware. Here's what's available to you in the request object.
I'd recommend doing this on apache/nginx/whatever you're using, though.
A:
Have a look at Simon Willison's ratelimitcache. Like the name implies, it uses Django's cache framework to store the state of the rate limiter. There's also a blog post that details how the rate limiter works.
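The core bookkeeping behind such a limiter is small enough to sketch without any framework: keep per-user request timestamps and refuse once more than N of them fall inside the window. This is a toy in-memory version; a real Django deployment would keep the state in the cache, as ratelimitcache does, so it works across processes:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window limiter: at most `max_requests` per `window` seconds."""

    def __init__(self, max_requests=10, window=10.0):
        self.max_requests = max_requests
        self.window = window
        self.hits = defaultdict(deque)  # user id -> recent request times

    def allow(self, user_id, now=None):
        now = time.time() if now is None else now
        q = self.hits[user_id]
        # Discard timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over the cap: reject this request
        q.append(now)
        return True
```

In middleware you would call `allow(request.user.pk)` in `process_request` and return an HttpResponse with status 403 (or 429) when it comes back False.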
| How to limit requests per minute per user? | I have several forms on my website and I have several pages with intense database activity. I want to set a cap on requests per user. For example, I don't want people to make over 10 requests in less than 10 seconds.
Is there a way to do this in Django?
| [
"You can likely do this with custom middleware. You'll need to keep the data somewhere (db?). See the docs for how to write your own middleware. Here's what's available to you in the request object.\nI'd recommend doing this on apache/nginx/whatever you're using, though.\n",
"Have a look at Simon Willison's ratelimitcache. Like the name implies, it uses Django's cache framework to store the state of the rate limiter. There's also a blog post that details how the rate limiter works.\n"
] | [
3,
2
] | [] | [] | [
"django",
"python",
"web_applications"
] | stackoverflow_0002983121_django_python_web_applications.txt |
Q:
In-document schema declarations and lxml
As per the official documentation of lxml, if one wants to validate an XML document against an XML schema document, one has to
construct the XMLSchema object (basically, parse the schema document)
construct the XMLParser, passing the XMLSchema object as its schema argument
parse the actual xml document (instance document) using the constructed parser
There can be variations, but the essence is pretty much the same no matter how you do it: the schema is specified 'externally' (as opposed to specifying it inside the actual xml document).
If you follow this procedure, the validation occurs, sure enough, but if I understand it correctly, that completely ignores the whole idea of the schemaLocation and noNamespaceSchemaLocation attributes from xsi
This introduces a whole bunch of limitations, starting with the fact that you have to manage the instance<->schema relation all by yourself (either store it externally or write some hack to retrieve the schema location from the root element of the instance document); you cannot validate the document against multiple schemata (say, when each schema governs its own namespace), and so on.
So the question is: maybe I am missing something completely trivial or doing it wrong? Or are my statements about lxml's limitations regarding schema validation true?
To recap, I'd like to be able to:
have the parser use the schema location declarations in the instance document at parse/validation time
use multiple schemata to validate an XML document
declare schema locations on non-root elements (not of extreme importance)
Maybe I should look for a different library? Although that'd be a real shame: lxml is the de facto XML processing library for Python and is regarded by everyone as the best one in terms of performance/features/convenience (and rightfully so, to a certain extent)
A:
Caution: this is not the full answer to this, because I don't know all that much about lxml in particular.
I can just tell you that:
Ignoring schemalocations in documents and instead managing a namespace -> schema file mapping in an application is almost always better, unless you can guarantee that the schema will be in a very specific location compared to the file. If you want to move it out of code, use a catalogue or come up with a configuration file.
If you do want to use schemaLocation, and want to validate multiple schemas, just include them all in one schemaLocation attribute, separated by spaces, in namespace URI/location pairs: xsi:schemaLocation="urn:schema1 schema1.xsd urn:schema2 schema2.xsd".
Finally, I don't think any processor will find schemaLocation attributes declared on non-root elements. Not that it matters: just put them all on the root.
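For reference, the "external schema" procedure the question describes is only a few lines with lxml, and managing your own namespace-to-schema mapping then just means choosing which XMLSchema object to apply (the schema and documents are inlined here to keep the demo self-contained):

```python
from io import BytesIO
from lxml import etree

# A trivial schema: <root> containing one or more integer <item>s.
SCHEMA = b"""<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="root">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="item" type="xs:integer" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

# Step 1: construct the XMLSchema object from the parsed schema document.
schema = etree.XMLSchema(etree.parse(BytesIO(SCHEMA)))

def is_valid(xml_bytes):
    """Parse a document (as bytes) and validate it against the schema."""
    return schema.validate(etree.parse(BytesIO(xml_bytes)))

print(is_valid(b"<root><item>1</item><item>2</item></root>"))  # True
print(is_valid(b"<root><item>oops</item></root>"))             # False
```

In an application you would keep a dict of `{namespace_uri: XMLSchema}` and pick the schema(s) based on the root element's namespace, instead of trusting schemaLocation hints in the instance document.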
| In-document schema declarations and lxml | As per the official documentation of lxml, if one wants to validate an XML document against an XML schema document, one has to
construct the XMLSchema object (basically, parse the schema document)
construct the XMLParser, passing the XMLSchema object as its schema argument
parse the actual xml document (instance document) using the constructed parser
There can be variations, but the essence is pretty much the same no matter how you do it: the schema is specified 'externally' (as opposed to specifying it inside the actual xml document).
If you follow this procedure, the validation occurs, sure enough, but if I understand it correctly, that completely ignores the whole idea of the schemaLocation and noNamespaceSchemaLocation attributes from xsi
This introduces a whole bunch of limitations, starting with the fact that you have to manage the instance<->schema relation all by yourself (either store it externally or write some hack to retrieve the schema location from the root element of the instance document); you cannot validate the document against multiple schemata (say, when each schema governs its own namespace), and so on.
So the question is: maybe I am missing something completely trivial or doing it wrong? Or are my statements about lxml's limitations regarding schema validation true?
To recap, I'd like to be able to:
have the parser use the schema location declarations in the instance document at parse/validation time
use multiple schemata to validate an XML document
declare schema locations on non-root elements (not of extreme importance)
Maybe I should look for a different library? Although that'd be a real shame: lxml is the de facto XML processing library for Python and is regarded by everyone as the best one in terms of performance/features/convenience (and rightfully so, to a certain extent)
| [
"Caution: this is not the full answer to this, because I don't know all that much about lxml in particular.\nIn can just tell you that:\n\nIgnoring schemalocations in documents and instead managing a namespace -> schema file mapping in an application is almost always better, unless you can guarantee that the schema will be in a very specific location compared to the file. If you want to move it out of code, use a catalogue or come up with a configuration file.\nIf you do want to use schemaLocation, and want to validate multiple schemas, just include them all in one schemaLocation attribute, separated by spaces, in namespace URI/location pairs: xsi:schemaLocation=\"urn:schema1 schema1.xsd urn:schema2 schema2.xsd.\nFinally, I don't think any processor will find schemaLocation attributes declared on non-root elements. Not that it matters: just put them all on the root.\n\n"
] | [
3
] | [] | [] | [
"lxml",
"python",
"xml",
"xsd"
] | stackoverflow_0002979824_lxml_python_xml_xsd.txt |
Q:
Splitting a list in python
I'm writing a parser in Python. I've converted an input string into a list of tokens, such as:
['(', '2', '.', 'x', '.', '(', '3', '-', '1', ')', '+', '4', ')', '/', '3', '.', 'x', '^', '2']
I want to be able to split the list into multiple lists, like the str.split('+') function. But there doesn't seem to be a way to do my_list.split('+'). Any ideas?
Thanks!
A:
You can write your own split function for lists quite easily by using yield:
def split_list(l, sep):
current = []
for x in l:
if x == sep:
yield current
current = []
else:
current.append(x)
yield current
An alternative way is to use list.index and catch the exception:
def split_list(l, sep):
i = 0
try:
while True:
j = l.index(sep, i)
yield l[i:j]
i = j + 1
except ValueError:
yield l[i:]
Either way you can call it like this:
l = ['(', '2', '.', 'x', '.', '(', '3', '-', '1', ')', '+', '4', ')',
'/', '3', '.', 'x', '^', '2']
for r in split_list(l, '+'):
print r
Result:
['(', '2', '.', 'x', '.', '(', '3', '-', '1', ')']
['4', ')', '/', '3', '.', 'x', '^', '2']
For parsing in Python you might also want to look at something like pyparsing.
A:
quick hack: you can first use the .join() method to create a string out of your list, split it at '+' (this creates a list of substrings), then use the list() method to further split each element of the result into individual tokens
a = ['(', '2', '.', 'x', '.', '(', '3', '-', '1', ')', '+', '4', ')', '/', '3', '.', 'x', '^', '2']
b = ''.join(a).split('+')
c = []
for el in b:
c.append(list(el))
print(c)
result:
[['(', '2', '.', 'x', '.', '(', '3', '-', '1', ')'], ['4', ')', '/', '3', '.', 'x', '^', '2']]
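A third, more compact option uses itertools.groupby to collect runs of non-separator tokens. Note that, unlike the generator version in the first answer, it silently drops empty pieces between consecutive separators, so it is not an exact str.split analogue:

```python
from itertools import groupby

def split_on(l, sep):
    """Split list `l` on every occurrence of `sep`, dropping empty runs."""
    return [list(group)
            for is_sep, group in groupby(l, key=lambda x: x == sep)
            if not is_sep]

tokens = ['(', '2', '.', 'x', '+', '4', ')', '/', '3']
print(split_on(tokens, '+'))  # [['(', '2', '.', 'x'], ['4', ')', '/', '3']]
```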
| Splitting a list in python | I'm writing a parser in Python. I've converted an input string into a list of tokens, such as:
['(', '2', '.', 'x', '.', '(', '3', '-', '1', ')', '+', '4', ')', '/', '3', '.', 'x', '^', '2']
I want to be able to split the list into multiple lists, like the str.split('+') function. But there doesn't seem to be a way to do my_list.split('+'). Any ideas?
Thanks!
| [
"You can write your own split function for lists quite easily by using yield:\ndef split_list(l, sep):\n current = []\n for x in l:\n if x == sep:\n yield current\n current = []\n else:\n current.append(x)\n yield current\n\nAn alternative way is to use list.index and catch the exception:\ndef split_list(l, sep):\n i = 0\n try:\n while True:\n j = l.index(sep, i)\n yield l[i:j]\n i = j + 1\n except ValueError:\n yield l[i:]\n\nEither way you can call it like this:\nl = ['(', '2', '.', 'x', '.', '(', '3', '-', '1', ')', '+', '4', ')',\n '/', '3', '.', 'x', '^', '2']\n\nfor r in split_list(l, '+'):\n print r\n\nResult:\n['(', '2', '.', 'x', '.', '(', '3', '-', '1', ')']\n['4', ')', '/', '3', '.', 'x', '^', '2']\n\nFor parsing in Python you might also want to look at something like pyparsing.\n",
"quick hack, you can first use the .join() method to join create a string out of your list, split it at '+', re-split (this creates a matrix), then use the list() method to further split each element in the matrix to individual tokens\na = ['(', '2', '.', 'x', '.', '(', '3', '-', '1', ')', '+', '4', ')', '/', '3', '.', 'x', '^', '2']\n\nb = ''.join(a).split('+')\nc = []\n\nfor el in b:\n c.append(list(el))\n\nprint(c)\n\nresult:\n[['(', '2', '.', 'x', '.', '(', '3', '-', '1', ')'], ['4', ')', '/', '3', '.', 'x', '^', '2']]\n\n"
] | [
8,
1
] | [] | [] | [
"list",
"parsing",
"python"
] | stackoverflow_0002983959_list_parsing_python.txt |
Q:
Problem with a Python function
Well I have a little problem. I want to get the sum of all numbers below 1000000 that have exactly 4 divisors...
I tried, but I have a problem: the GetTheSum(n) function always returns the number "6"...
This is my Code :
http://pastebin.com/bhiDb5fe
A:
The problem seems to be that you return as soon as you find the first number (which is 6).
You have this:
def GetTheSum(n):
k = 0
for d in range(1,n):
if NumberOfDivisors(d) == 4:
k += d
return k
But you have probably meant this:
def GetTheSum(n):
k = 0
for d in range(1,n):
if NumberOfDivisors(d) == 4:
k += d
return k
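The NumberOfDivisors helper itself is only visible behind the pastebin link, but a straightforward trial-division version makes the fixed loop runnable end to end (names are lowercased per PEP 8 here; the original names work the same way):

```python
def number_of_divisors(n):
    """Count the divisors of n by trial division up to sqrt(n)."""
    count = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            # d and n // d are both divisors (counted once when equal)
            count += 2 if d * d != n else 1
        d += 1
    return count

def get_the_sum(limit):
    """Sum of all numbers below `limit` with exactly 4 divisors."""
    return sum(d for d in range(1, limit) if number_of_divisors(d) == 4)

print(get_the_sum(20))  # 6 + 8 + 10 + 14 + 15 = 53
```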
| Problem with a Python function | Well I have a little problem. I want to get the sum of all numbers below 1000000 that have exactly 4 divisors...
I tried, but I have a problem: the GetTheSum(n) function always returns the number "6"...
This is my Code :
http://pastebin.com/bhiDb5fe
| [
"The problem seems to be that you return as soon as you find the first number (which is 6).\nYou have this:\ndef GetTheSum(n):\n k = 0\n for d in range(1,n):\n if NumberOfDivisors(d) == 4:\n k += d\n return k\n\nBut you have probably meant this:\ndef GetTheSum(n):\n k = 0\n for d in range(1,n):\n if NumberOfDivisors(d) == 4:\n k += d\n return k\n\n"
] | [
2
] | [] | [] | [
"python"
] | stackoverflow_0002984305_python.txt |
Q:
Google Application Engine slow in case of Python
I am reading a "table" in Python in GAE that has 1000 rows and the program stops because the time limit is reached. (So it takes at least 20 seconds.)(
Is that possible that GAE is that slow? Is there a way to fix that?
Is this because I use free service and I do not pay for it?
Thank you.
The code itself is this:
liststocks=[]
userall=user.all() # this has three fields username... trying to optimise by this line
stocknamesall=stocknames.all() # 1 field, name of the stocks trying to optimise by this line too
for u in userall: # userall has 1000 users
for stockname in stocknamesall: # 4 stocks
astock= stocksowned() #it is also a "table", no relevance I think
astock.quantity = random.randint(1,100)
astock.nameid = u.key()
astock.stockid = stockname.key()
liststocks.append(astock);
A:
GAE is slow when used inefficiently. Like any framework, sometimes you have to know a little bit about how it works in order to efficiently use it. Luckily, I think there is an easy improvement that will help your code a lot.
It is faster to use fetch() explicitly instead of using the iterator. The iterator causes entities to be fetched in "small batches" - each "small batch" results in a round-trip to the datastore to get more data. If you use fetch(), then you'll get all the data at once with just one round-trip to the datastore. In short, use fetch() if you know you are going to need lots of results.
In this case, using fetch() will help a lot - you can easily get all your users and stocknames in one round-trip to the datastore each. Right now you're making lots of extra round-trips to the datastore and re-fetching stockname entities too!
Try this (you said your table has 1000 rows, so I use fetch(1000) to make sure you get all the results; use a larger number if needed):
userall=user.all().fetch(1000)
stocknamesall=stocknames.all().fetch(1000)
# rest of the code as-is
To see where you could make additional improvements, please try out AppStats so you can see exactly why your request is taking so long. You might even consider posting a screenshot (like this) of the appstats info about your request along with your post.
| Google Application Engine slow in case of Python | I am reading a "table" in Python in GAE that has 1000 rows and the program stops because the time limit is reached. (So it takes at least 20 seconds.)
Is it possible that GAE is that slow? Is there a way to fix that?
Is this because I use the free service and do not pay for it?
Thank you.
The code itself is this:
liststocks=[]
userall=user.all() # this has three fields username... trying to optimise by this line
stocknamesall=stocknames.all() # 1 field, name of the stocks trying to optimise by this line too
for u in userall: # userall has 1000 users
for stockname in stocknamesall: # 4 stocks
astock= stocksowned() #it is also a "table", no relevance I think
astock.quantity = random.randint(1,100)
astock.nameid = u.key()
astock.stockid = stockname.key()
liststocks.append(astock);
| [
"GAE is slow when used inefficiently. Like any framework, sometimes you have to know a little bit about how it works in order to efficiently use it. Luckily, I think there is an easy improvement that will help your code a lot.\nIt is faster to use fetch() explicitly instead of using the iterator. The iterator causes entities to be fetched in \"small batches\" - each \"small batch\" results in a round-trip to the datastore to get more data. If you use fetch(), then you'll get all the data at once with just one round-trip to the datastore. In short, use fetch() if you know you are going to need lots of results.\nIn this case, using fetch() will help a lot - you can easily get all your users and stocknames in one round-trip to the datastore each. Right now you're making lots of extra round-trips to the datastore and re-fetching stockname entities too!\nTry this (you said your table has 1000 rows, so I use fetch(1000) to make sure you get all the results; use a larger number if needed):\nuserall=user.all().fetch(1000)\nstocknamesall=stocknames.all().fetch(1000)\n# rest of the code as-is\n\nTo see where you could make additional improvements, please try out AppStats so you can see exactly why your request is taking so long. You might even consider posting a screenshot (like this) of the appstats info about your request along with your post.\n"
] | [
8
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0002984444_google_app_engine_python.txt |
Q:
How to output an index while iterating over an array in python
I am iterating over an array in python:
for g in [ games[0:4] ]:
g.output()
Can I also initialise and increment an index in that for loop and pass it to g.output()?
such that g.output(2) results in:
Game 2 - ... stuff relating to the object `g` here.
A:
Like this:
for index, g in enumerate(games[0:4]):
g.output(index)
A:
Use the built-in enumerate method:
for i,a in enumerate(['cat', 'dog']):
print '%s is %d' % (a, i)
# output:
# cat is 0
# dog is 1
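If the displayed numbering should start at 1 rather than 0 (the question's example says "Game 2"), enumerate accepts an optional start argument (added in Python 2.6):

```python
games = ['chess', 'go', 'poker', 'bridge']  # stand-ins for the Game objects

for index, g in enumerate(games[0:4], 1):
    print('Game %d - %s' % (index, g))

# Game 1 - chess
# Game 2 - go
# Game 3 - poker
# Game 4 - bridge
```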
| How to output an index while iterating over an array in python | I am iterating over an array in python:
for g in [ games[0:4] ]:
g.output()
Can I also initialise and increment an index in that for loop and pass it to g.output()?
such that g.output(2) results in:
Game 2 - ... stuff relating to the object `g` here.
| [
"Like this:\nfor index, g in enumerate(games[0:4]):\n g.output(index)\n\n",
"Use the built-in enumerate method:\nfor i,a in enumerate(['cat', 'dog']):\n print '%s is %d' % (a, i)\n\n# output:\n# cat is 0\n# dog is 1\n\n"
] | [
37,
15
] | [] | [] | [
"python"
] | stackoverflow_0002984566_python.txt |
Q:
How Can I Populate Default Form Data with a ManyToMany Field?
Ok, I've been crawling google and Django documentation for over 2 hours now (as well as the IRC channel on freenode), and haven't been able to figure this one out.
Basically, I have a model called Room, which is displayed below:
class Room(models.Model):
"""
A `Partyline` room. Rooms on the `Partyline`s are like mini-chatrooms. Each
room has a variable amount of `Caller`s, and usually a moderator of some
sort. Each `Partyline` has many rooms, and it is common for `Caller`s to
join multiple rooms over the duration of their call.
"""
LIVE = 0
PRIVATE = 1
ONE_ON_ONE = 2
UNCENSORED = 3
BULLETIN_BOARD = 4
CHILL = 5
PHONE_BOOTH = 6
TYPE_CHOICES = (
('LR', 'Live Room'),
('PR', 'Private Room'),
('UR', 'Uncensored Room'),
)
type = models.CharField('Room Type', max_length=2, choices=TYPE_CHOICES)
number = models.IntegerField('Room Number')
partyline = models.ForeignKey(Partyline)
owner = models.ForeignKey(User, blank=True, null=True)
bans = models.ManyToManyField(Caller, blank=True, null=True)
def __unicode__(self):
return "%s - %s %d" % (self.partyline.name, self.type, self.number)
I've also got a forms.py which has the following ModelForm to represent my Room model:
from django.forms import ModelForm
from partyline_portal.rooms.models import Room
class RoomForm(ModelForm):
class Meta:
model = Room
I'm creating a view which allows administrators to edit a given Room object. Here's my view (so far):
def edit_room(request, id=None):
"""
Edit various attributes of a specific `Room`. Room owners do not have
access to this page. They cannot edit the attributes of the `Room`(s) that
they control.
"""
room = get_object_or_404(Room, id=id)
if not room.is_owner(request.user):
return HttpResponseForbidden('Forbidden.')
if is_user_type(request.user, ['admin']):
form_type = RoomForm
elif is_user_type(request.user, ['lm']):
form_type = LineManagerEditRoomForm
elif is_user_type(request.user, ['lo']):
form_type = LineOwnerEditRoomForm
if request.method == 'POST':
form = form_type(request.POST, instance=room)
if form.is_valid():
if 'owner' in form.cleaned_data:
room.owner = form.cleaned_data['owner']
room.save()
else:
defaults = {'type': room.type, 'number': room.number, 'partyline': room.partyline.id}
if room.owner:
defaults['owner'] = room.owner.id
if room.bans:
defaults['bans'] = room.bans.all() ### this does not work properly!
form = form_type(defaults, instance=room)
variables = RequestContext(request, {'form': form, 'room': room})
return render_to_response('portal/rooms/edit.html', variables)
Now, this view works fine when I view the page. It shows all of the form attributes, and all of the default values are filled in (when users do a GET)... EXCEPT for the default values for the ManyToMany field 'bans'.
Basically, if an admin clicks on a Room object to edit, the page they go to will show all of the Room's default values except for the 'bans'. No matter what I do, I can't find a way to get Django to display the currently 'banned users' for the Room object. Here is the line of code that needs to be changed (from the view):
defaults = {'type': room.type, 'number': room.number, 'partyline': room.partyline.id}
if room.owner:
defaults['owner'] = room.owner.id
if room.bans:
defaults['bans'] = room.bans.all() ### this does not work properly!
There must be some other syntax I have to use to specify the default value for the 'bans' field. I've really been pulling my hair out on this one, and would definitely appreciate some help.
Thanks!
UPDATE
lazerscience actually helped me find the solution in one of his comments. Basically, the way it works is if you pass a list of primary keys. To make it work I had to change:
if room.bans:
defaults['bans'] = room.bans.all() ### this does not work properly!
to
if room.bans:
defaults['bans'] = [b.pk for b in room.bans.all()]
And bam, it instantly started working: when I view the page, it shows a selectable list of Callers, with the banned callers already highlighted (selected).
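For what it's worth, the list-of-pks form works because a multiple-select form field expects its initial data as primary keys, not model instances. The extraction pattern itself is plain Python; FakeCaller below is a hypothetical stand-in for the Caller model, and the values_list call in the comment is the equivalent Django one-liner:

```python
class FakeCaller(object):
    """Hypothetical stand-in for a Django model instance (illustration only)."""
    def __init__(self, pk):
        self.pk = pk

banned = [FakeCaller(3), FakeCaller(7)]

# Initial data for a multiple-select form field: a list of pks,
# not the model instances themselves
defaults = {'bans': [b.pk for b in banned]}
# defaults['bans'] is now [3, 7]

# In real Django, an equivalent one-liner would be:
#   defaults['bans'] = list(room.bans.values_list('pk', flat=True))
```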
A:
You probably need to use "initial": Django set default form values
| How Can I Populate Default Form Data with a ManyToMany Field? | Ok, I've been crawling google and Django documentation for over 2 hours now (as well as the IRC channel on freenode), and haven't been able to figure this one out.
Basically, I have a model called Room, which is displayed below:
class Room(models.Model):
"""
A `Partyline` room. Rooms on the `Partyline`s are like mini-chatrooms. Each
room has a variable amount of `Caller`s, and usually a moderator of some
sort. Each `Partyline` has many rooms, and it is common for `Caller`s to
join multiple rooms over the duration of their call.
"""
LIVE = 0
PRIVATE = 1
ONE_ON_ONE = 2
UNCENSORED = 3
BULLETIN_BOARD = 4
CHILL = 5
PHONE_BOOTH = 6
TYPE_CHOICES = (
('LR', 'Live Room'),
('PR', 'Private Room'),
('UR', 'Uncensored Room'),
)
type = models.CharField('Room Type', max_length=2, choices=TYPE_CHOICES)
number = models.IntegerField('Room Number')
partyline = models.ForeignKey(Partyline)
owner = models.ForeignKey(User, blank=True, null=True)
bans = models.ManyToManyField(Caller, blank=True, null=True)
def __unicode__(self):
return "%s - %s %d" % (self.partyline.name, self.type, self.number)
I've also got a forms.py which has the following ModelForm to represent my Room model:
from django.forms import ModelForm
from partyline_portal.rooms.models import Room
class RoomForm(ModelForm):
class Meta:
model = Room
I'm creating a view which allows administrators to edit a given Room object. Here's my view (so far):
def edit_room(request, id=None):
"""
Edit various attributes of a specific `Room`. Room owners do not have
access to this page. They cannot edit the attributes of the `Room`(s) that
they control.
"""
room = get_object_or_404(Room, id=id)
if not room.is_owner(request.user):
return HttpResponseForbidden('Forbidden.')
if is_user_type(request.user, ['admin']):
form_type = RoomForm
elif is_user_type(request.user, ['lm']):
form_type = LineManagerEditRoomForm
elif is_user_type(request.user, ['lo']):
form_type = LineOwnerEditRoomForm
if request.method == 'POST':
form = form_type(request.POST, instance=room)
if form.is_valid():
if 'owner' in form.cleaned_data:
room.owner = form.cleaned_data['owner']
room.save()
else:
defaults = {'type': room.type, 'number': room.number, 'partyline': room.partyline.id}
if room.owner:
defaults['owner'] = room.owner.id
if room.bans:
defaults['bans'] = room.bans.all() ### this does not work properly!
form = form_type(defaults, instance=room)
variables = RequestContext(request, {'form': form, 'room': room})
return render_to_response('portal/rooms/edit.html', variables)
Now, this view works fine when I view the page. It shows all of the form attributes, and all of the default values are filled in (when users do a GET)... EXCEPT for the default values for the ManyToMany field 'bans'.
Basically, if an admin clicks on a Room object to edit, the page they go to will show all of the Room's default values except for the 'bans'. No matter what I do, I can't find a way to get Django to display the currently 'banned users' for the Room object. Here is the line of code that needs to be changed (from the view):
defaults = {'type': room.type, 'number': room.number, 'partyline': room.partyline.id}
if room.owner:
defaults['owner'] = room.owner.id
if room.bans:
defaults['bans'] = room.bans.all() ### this does not work properly!
There must be some other syntax I have to use to specify the default value for the 'bans' field. I've really been pulling my hair out on this one, and would definitely appreciate some help.
Thanks!
UPDATE
lazerscience actually helped me find the solution in one of his comments. Basically, the way it works is if you pass a list of primary keys. To make it work I had to change:
if room.bans:
defaults['bans'] = room.bans.all() ### this does not work properly!
to
if room.bans:
defaults['bans'] = [b.pk for b in room.bans.all()]
And bam, it instantly started working: when I view the page, it shows a selectable list of Callers, with the banned callers already highlighted (selected).
| [
"You probably need to use \"initial\": Django set default form values\n"
] | [
0
] | [] | [] | [
"django",
"django_forms",
"python"
] | stackoverflow_0002983183_django_django_forms_python.txt |
Q:
How to print an Objectified Element?
I have xml of the format:
<channel>
<games>
<game slot='1'>
<id>Bric A Bloc</id>
<title-text>BricABloc Hoorah</title-text>
<link>Fruit Splat</link>
</game>
</games>
</channel>
I've parsed this xml using lxml.objectify, via:
tree = objectify.parse(file)
There will potentially be a number of <game>s underneath <games>.
I understand that I can generate a list of <game> objects via:
[ tree.games[0].game[0:4] ]
My question is, what class are those objects and is there a function to print any object of whatever class these objects belong to?
A:
Perhaps use
for game in tree.games[0].game[0:4]:
print(lxml.objectify.dump(game))
which yields
game = None [ObjectifiedElement]
* slot = '1'
id = 'Bric A Bloc' [StringElement]
title-text = 'BricABloc Hoorah' [StringElement]
link = 'Fruit Splat' [StringElement]
type(game) shows that each game is an lxml.objectify.ObjectifiedElement.
| How to print an Objectified Element? | I have xml of the format:
<channel>
<games>
<game slot='1'>
<id>Bric A Bloc</id>
<title-text>BricABloc Hoorah</title-text>
<link>Fruit Splat</link>
</game>
</games>
</channel>
I've parsed this xml using lxml.objectify, via:
tree = objectify.parse(file)
There will potentially be a number of <game>s underneath <games>.
I understand that I can generate a list of <game> objects via:
[ tree.games[0].game[0:4] ]
My question is, what class are those objects and is there a function to print any object of whatever class these objects belong to?
| [
"Perhaps use \nfor game in tree.games[0].game[0:4]:\n print(lxml.objectify.dump(game))\n\nwhich yields\ngame = None [ObjectifiedElement]\n * slot = '1'\n id = 'Bric A Bloc' [StringElement]\n title-text = 'BricABloc Hoorah' [StringElement]\n link = 'Fruit Splat' [StringElement]\n\ntype(game) shows that each game is an lxml.objectify.ObjectifiedElement.\n"
] | [
4
] | [] | [] | [
"lxml",
"python"
] | stackoverflow_0002984665_lxml_python.txt |
Q:
python+gae compatible web ui toolkit with ajax but degrades gracefully when there is no js?
I am searching for a web toolkit that is
Python compatible
social/db/wiki like
google-appengine compatible
has built in pagination
handles 'relationships' between entities
uses ajax
modal dialogs
but degrades very gracefully on browsers that don't have js
good ui decisions that make it gracefully degrade even on mobile, text and braille/speech interfaces.
A:
Django and JQuery.
They aren't exactly a unified framework, but I don't really know of any frameworks comprised of Python and JS together that fit your description.
There are also gigantic communities behind both Django and JQuery, which will help you immensely should you ever encounter any problems.
Django is more or less GAE compatible, and is getting more compatible every day, particularly with the django-nonrel project.
JQuery is probably the most used JS library out there, with tons of documentation and third-party plugins to accomplish basically anything you may want to do.
| python+gae compatible web ui toolkit with ajax but degrades gracefully when there is no js? | I am searching for a web toolkit that is
Python compatible
social/db/wiki like
google-appengine compatible
has built in pagination
handles 'relationships' between entities
uses ajax
modal dialogs
but degrades very gracefully on browsers that don't have js
good ui decisions that make it gracefully degrade even on mobile, text and braille/speech interfaces.
| [
"Django and JQuery.\nThey aren't exactly a unified framework, but I don't really know of any frameworks comprised of Python and JS together that fit your description.\nThere are also gigantic communities behind both Django and JQuery, which will help you immensely should you ever encounter any problems.\nDjango is more or less GAE compatible, and is getting more compatible every day, particularly with the django-nonrel project.\nJQuery is probably the most used JS library out there, with tons of documentation and third-party plugins to accomplish basically anything you may want to do.\n"
] | [
1
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0002984590_google_app_engine_python.txt |
Q:
Decorators vs. classes in python web development
I've noticed three main ways Python web frameworks deal with request handling: decorators, controller classes with methods for individual requests, and request classes with methods for GET/POST.
I'm curious about the virtues of these three approaches. Are there major advantages or disadvantages to any of these approaches? To fix ideas, here are three examples.
Bottle uses decorators:
@route('/')
def index():
return 'Hello World!'
Pylons uses controller classes:
class HelloController(BaseController):
def index(self):
return 'Hello World'
Tornado uses request handler classes with methods for types:
class MainHandler(tornado.web.RequestHandler):
def get(self):
self.write("Hello, world")
Which style is the best practice?
A:
There's actually a reason for each of the three methods you listed, specific to each project.
Bottle tries to keep things as
simple/straightforward as possible
for the programmer. With decorators
for routes you don't have to worry
about the developer understanding OOP.
Pylons development goal is to make
code re-usable and to be easily
integrated with WSGI-style HTTP
process routing. As such, they have
chosen a very OOP way of organizing
routes. As an example, you could
copy & paste HelloController into any
Pylons app and it should just
magically work. Even if said app is
being served up via WSGI in some
complicated fashion.
Tornado has yet another reason for
doing things the way it does:
Tornado's epoll-based IOLoop (in conjunction with tornado.web.Application)
instantiates each RequestHandler as
requests come in. By keeping each
RequestHandler limited to a specific
GET or POST this allows IOLoop to
quickly instantiate the class,
process the request, and finally let
it get garbage collected. This keeps
it fast and efficient with a small
memory footprint regardless of how
many RequestHandlers your application
has. This is also the reason why Tornado can handle so many more simultaneous requests than other Python-based web servers (each request gets its own instance).
Now, having said all that you should know that you can always override the default framework behavior. For example, I wrote a MethodDispatcher for Tornado that makes it work more like Pylons (well, I had CherryPy in mind when I wrote it). It slows down Tornado a tiny amount (and increases the memory footprint slightly) due to having one large RequestHandler (as opposed to a lot of small ones) but it can reduce the amount of code in your app and make it a little easier to read (In my biased opinion, of course =).
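To make the decorator style concrete, decorator-based routing can be sketched in a few lines of plain Python. This is a toy illustration of the mechanism, not Bottle's actual implementation:

```python
# Toy URL router illustrating the decorator style (not Bottle's real code)
routes = {}

def route(path):
    """Register the decorated function as the handler for path."""
    def decorator(handler):
        routes[path] = handler
        return handler
    return decorator

@route('/')
def index():
    return 'Hello World!'

def dispatch(path):
    """Look up and call the handler for path, if one was registered."""
    handler = routes.get(path)
    return handler() if handler else '404 Not Found'
```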
A:
The various frameworks are trying to achieve the best performance through the best code (for writing and reading). They each adopt different strategies based on or around MVC or MVT.
What you're focussing on will probably come down to personal taste. And so will my answer. I'm trying very hard to avoid any sort of holy war because there may be valid technical arguments that I just don't know about.
But I personally prefer to keep the routing separate from the controller (django's view) and templating separate from that. It makes reusing controllers really simple. Yeah, I'm a Django user.
As such, I'm really not a big fan of Bottle's decorators or wrapping things in great big hulking classes. I used to when I was an ASP.NET dev but Django set me free.
| Decorators vs. classes in python web development | I've noticed three main ways Python web frameworks deal with request handling: decorators, controller classes with methods for individual requests, and request classes with methods for GET/POST.
I'm curious about the virtues of these three approaches. Are there major advantages or disadvantages to any of these approaches? To fix ideas, here are three examples.
Bottle uses decorators:
@route('/')
def index():
return 'Hello World!'
Pylons uses controller classes:
class HelloController(BaseController):
def index(self):
return 'Hello World'
Tornado uses request handler classes with methods for types:
class MainHandler(tornado.web.RequestHandler):
def get(self):
self.write("Hello, world")
Which style is the best practice?
| [
"There's actually a reason for each of the three methods you listed, specific to each project.\n\nBottle tries to keep things as\nsimple/straightforward as possible\nfor the programmer. With decorators\nfor routes you don't have to worry\nabout the developer understanding OOP.\nPylons development goal is to make\ncode re-usable and to be easily\nintegrated with WSGI-style HTTP\nprocess routing. As such, they have\nchosen a very OOP way of organizing\nroutes. As an example, you could\ncopy & paste HelloController into any\nPylons app and it should just\nmagically work. Even if said app is\nbeing served up via WSGI in some\ncomplicated fashion.\nTornado has yet another reason for\ndoing things the way it does: \nTornado's epoll-based IOLoop (in conjunction with tornado.web.Application)\ninstantiates each RequestHandler as\nrequests come in. By keeping each\nRequestHandler limited to a specific\nGET or POST this allows IOLoop to\nquickly instantiate the class,\nprocess the request, and finally let\nit get garbage collected. This keeps\nit fast and efficient with a small\nmemory footprint regardless of how\nmany RequestHandlers your application\nhas. This is also the reason why Tornado can handle so many more simultaneous requests than other Python-based web servers (each request gets its own instance).\n\nNow, having said all that you should know that you can always override the default framework behavior. For example, I wrote a MethodDispatcher for Tornado that makes it work more like Pylons (well, I had CherryPy in mind when I wrote it). It slows down Tornado a tiny amount (and increases the memory footprint slightly) due to having one large RequestHandler (as opposed to a lot of small ones) but it can reduce the amount of code in your app and make it a little easier to read (In my biased opinion, of course =).\n",
"The various frameworks are trying to achieve the best performance through the best code (for writing and reading). They each adopt different strategies based on or around MVC or MVT. \nWhat you're focussing on will probably come down to personal taste. And so will my answer. I'm trying very hard to avoid any sort of holy war because there may be valid technical arguments that I just don't know about.\nBut I personally prefer to keep the routing separate from the controller (django's view) and templating separate from that. It makes reusing controllers really simple. Yeah, I'm a Django user.\nAs such, I'm really not a big fan of Bottle's decorators or wrapping things in great big hulking classes. I used to when I was an ASP.NET dev but Django set me free.\n"
] | [
10,
1
] | [] | [] | [
"bottle",
"django",
"pylons",
"python",
"tornado"
] | stackoverflow_0002985014_bottle_django_pylons_python_tornado.txt |
Q:
Python Code Introspection and Analysis
I am trying to write a Python code analyzer, and I am trying to avoid having to parse bare Python text files. I was hoping that once the Python compiler/interpreter parses the code there's a way to get to the object code or parse tree from within a running Python program.
Is there any way to do this?
Thank you
A:
A combination of ast and tokenize should provide the necessary framework for what you want to do.
A:
You can take a look at Python's abstract syntax trees.
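As a minimal sketch of the ast approach (the source string here is just an example):

```python
import ast

source = "def greet(name):\n    return 'Hello, ' + name\n"

# Parse the source text into an abstract syntax tree, then walk it
# to collect the names of all function definitions
tree = ast.parse(source)
func_names = [node.name for node in ast.walk(tree)
              if isinstance(node, ast.FunctionDef)]
# func_names is now ['greet']
```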
| Python Code Introspection and Analysis | I am trying to write a Python code analyzer, and I am trying to avoid having to parse bare Python text files. I was hoping that once the Python compiler/interpreter parses the code there's a way to get to the object code or parse tree from within a running Python program.
Is there any way to do this?
Thank you
| [
"A combination of ast and tokenize should provide the necessary framework for what you want to do.\n",
"You can take a look at Python's abstract syntax trees.\n"
] | [
4,
3
] | [] | [] | [
"compiler_construction",
"interpreter",
"introspection",
"python"
] | stackoverflow_0002985176_compiler_construction_interpreter_introspection_python.txt |
Q:
Do you use Python mostly for its functional or object-oriented features?
I see what seems like a majority of Python developers on StackOverflow endorsing the use of concise functional tools like lambdas, maps, filters, etc., while others say their code is clearer and more maintainable by not using them. What is your preference?
Also, if you are a die-hard functional programmer or hardcore into OO, what other specific programming practices do you use that you think are best for your style?
Thanks in advance for your opinions!
A:
I mostly use Python using object-oriented and procedural styles. Python is actually not particularly well-suited to functional programming.
A lot of people think they are writing functional Python code by using lots of lambda, map, filter, and reduce, but this is a bit over-simplified. The hallmark feature of functional programming is a lack of state or side effects. Important elements of a functional style are pure functions, recursive algorithms, and first class functions.
Here are my thoughts on functional programming and Python:
Pure functions are great. I do my best to make my module-level functions pure.
Pure functions can be tested. Since they do not depend on outside state, they are much easier to test.
Pure functions are able to support other optimizations, such as memoization and trivial parallelization.
Class-based programming can be pure. If you want an equivalent to pure functions using Python classes (which is sometimes but not always what you want),
Make your instances immutable. In particular, this mainly means to make your methods always return new instances of your class rather than changing the current one.
Use dependency injection rather than getting stuff (like imported module) from global scope.
This might not always be exactly what you want.
Don't try to avoid state all together. This isn't a reasonable strategy in Python. For example, use some_list.append(foo) rather than new_list = some_list + [foo], the former of which is more idiomatic and efficient. (Indeed, a ton of the "functional" solutions I see people use in Python are algorithmically suboptimal compared to just-as-simple or simpler solutions that are not functional or are just as functional but don't use the functional-looking tools.)
Learn the best lessons from functional programming, for example mutable state is dangerous. Ask yourself, Do I really want to change this X or do I want a new X?
One really common place this comes up is when processing a list. I would use
foo = [bar(item.baz()) for item in foo]
rather than
for index, _ in enumerate(foo):
foo[index] = bar(foo[index].baz())
and stuff like it. This avoids confusing bugs where the same list object is stored elsewhere and shouldn't be changed. (If it should be changed, then there is a decent chance you have a design error. Mutating some list you have referenced multiple places isn't a great way to share state.)
Don't use map and friends gratuitously. There is nothing more functional about doing this.
map/filter are not more functional than list comprehensions. List comprehensions were borrowed from Haskell, a pure functional language. map and especially filter can be harder to understand than a list comprehension. I would never use map or filter with a lambda but might if I had a function that already existed; I use map a decent bit.
The same goes for itertools.imap/ifilter compared to generator expressions. (These things are somewhat lazy, which is something great we can borrow from the functional world.)
Don't use map and filter for side effects. I see this with map a lot, which both makes hard-to-understand code, unneeded lists, and is decidedly not functional (despite people thinking it must be because of map.) Just use a for loop.
reduce is confusing except for very simple cases. Python has for loops and there is no harm in using them.
Don't use recursive algorithms. This is one part of functional programming Python just does not support well. CPython (and I think all other Pythons) do not support tail call optimization. Use iteration instead.
Only use lambda when you are defining functions on the fly. Anonymous functions aren't better than named functions, the latter of which are often more robust, maintainable, and documented.
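A small illustration of the map/filter point above, using an arbitrary word list:

```python
words = ['cat', 'Dog', 'bird']

# List comprehension: filter and transform in one readable expression
short_upper = [w.upper() for w in words if len(w) <= 3]

# The same thing with map/filter and lambdas -- no more "functional",
# just harder to read
short_upper2 = list(map(lambda w: w.upper(),
                        filter(lambda w: len(w) <= 3, words)))

# Both produce ['CAT', 'DOG']
```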
A:
I use the features of the language that get the job done with the shortest, cleanest code possible. If that means that I have to mix the two, which I do quite often, then that's what gets done.
A:
I am both a die-hard OOP and functional programmer and these styles work very well together, mostly because they are completely orthogonal. There are plenty of object-oriented, functional languages and Python is one of them.
So basically, decomposing a application into classes is very helpful when designing a system. When you're doing the actual implementation, FP helps to write correct code.
Also I find it very offensive that you imply that functional programming just means "use folds everywhere". That is probably the biggest and worst misconception about FP. Much has been written of that topic, so I'll just say that the great thing about FP is the idea to combine simple (,correct and reusable) functions into new, more and more complex function. That way it's pretty hard to write "almost correct" code - either the whole thing does exactly what you want, or it breaks completely.
FP in Python mostly revolves around writing generators and their relatives (list comprehensions) and the things in the itertools module. Explicit map/filter/reduce calls are just unneeded.
A:
Python has only marginal functional programming features so I would be surprised if many people would use it especially for that. For example there is no standard way to do function composition and the standard library's reduce() has been deprecated in favor of explicit loops.
Also, I don't think that map() or filter() are generally endorsed. On the contrary, list comprehensions usually seem to be preferred.
A:
Most answers on StackOverflow are short, concise answers, and the functional aspects of python make writing that kind of answers easy.
Python's OO-features simply aren't needed in 10-20 line answers, so you don't see them around here as much.
A:
I select Python when I'm taking on a problem that maps well to an OO solution. Python only provides a limited ability to program in a functional manner compared to full blown functional languages.
If I really want functional programming, I use Lisp.
| Do you use Python mostly for its functional or object-oriented features? | I see what seems like a majority of Python developers on StackOverflow endorsing the use of concise functional tools like lambdas, maps, filters, etc., while others say their code is clearer and more maintainable by not using them. What is your preference?
Also, if you are a die-hard functional programmer or hardcore into OO, what other specific programming practices do you use that you think are best for your style?
Thanks in advance for your opinions!
| [
"I mostly use Python using object-oriented and procedural styles. Python is actually not particularly well-suited to functional programming.\nA lot of people think they are writing functional Python code by using lots of lambda, map, filter, and reduce, but this is a bit over-simplified. The hallmark feature of functional programming is a lack of state or side effects. Important elements of a functional style are pure functions, recursive algorithms, and first class functions.\nHere are my thoughts on functional programming and Python:\n\nPure functions are great. I do my best to make my module-level functions pure.\n\nPure functions can be tested. Since they do not depend on outside state, they are much easier to test.\nPure functions are able to support other optimizations, such as memoization and trivial parallelization.\n\nClass-based programming can be pure. If you want an equivalent to pure functions using Python classes (which is sometimes but not always what you want), \n\nMake your instances immutable. In particular, this mainly means to make your methods always return new instances of your class rather than changing the current one.\nUse dependency injection rather than getting stuff (like imported module) from global scope.\nThis might not always be exactly what you want.\n\nDon't try to avoid state all together. This isn't a reasonable strategy in Python. For example, use some_list.append(foo) rather than new_list = some_list + [foo], the former of which is more idiomatic and efficient. (Indeed, a ton of the \"functional\" solutions I see people use in Python are algorithmically suboptimal compared to just-as-simple or simpler solutions that are not functional or are just as functional but don't use the functional-looking tools.)\nLearn the best lessons from functional programming, for example mutable state is dangerous. 
Ask yourself, Do I really want to change this X or do I want a new X?\n\nOne really common place this comes up is when processing a list. I would use\nfoo = [bar(item.baz()) for item in foo]\n\nrather than\nfor index, _ in enumerate(foo):\n foo[index] = bar(foo[index].baz())\n\nand stuff like it. This avoids confusing bugs where the same list object is stored elsewhere and shouldn't be changed. (If it should be changed, then there is a decent chance you have a design error. Mutating some list you have referenced multiple places isn't a great way to share state.)\n\nDon't use map and friends gratuitously. There is nothing more functional about doing this.\n\nmap/filter are not more functional than list comprehensions. List comprehensions were borrowed from Haskell, a pure functional language. map and especially filter can be harder to understand than a list comprehension. I would never use map or filter with a lambda but might if I had a function that already existed; I use map a decent bit.\nThe same goes for itertools.imap/ifilter compared to generator expressions. (These things are somewhat lazy, which is something great we can borrow from the functional world.)\nDon't use map and filter for side effects. I see this with map a lot, which both makes hard-to-understand code, unneeded lists, and is decidedly not functional (despite people thinking it must be because of map.) Just use a for loop.\nreduce is confusing except for very simple cases. Python has for loops and there is no hurt in using them.\n\nDon't use recursive algorithms. This is one part of functional programming Python just does not support well. CPython (and I think all other Pythons) do not support tail call optimization. Use iteration instead.\nOnly use lambda when you are defining functions on the fly. Anonymous functions aren't better than named functions, the latter of which are often more robust, maintainable, and documented.\n\n",
"I use the features of the language that get the job done with the shortest, cleanest code possible. If that means that I have to mix the two, which I do quite often, then that's what gets done.\n",
"I am both a die-hard OOP and functional programmer and these styles work very well together, mostly because they are completely orthogonal. There are plenty of object-oriented, functional languages and Python is one of them.\nSo basically, decomposing a application into classes is very helpful when designing a system. When you're doing the actual implementation, FP helps to write correct code.\nAlso I find it very offensive that you imply that functional programming just means \"use folds everywhere\". That is probably the biggest and worst misconception about FP. Much has been written of that topic, so I'll just say that the great thing about FP is the idea to combine simple (,correct and reusable) functions into new, more and more complex function. That way it's pretty hard to write \"almost correct\" code - either the whole thing does exactly what you want, or it breaks completely.\nFP in Python mostly revolves around writing generators and their relatives (list comprehensions) and the things in the itertools module. Explicit map/filter/reduce calls are just unneeded.\n",
"Python has only marginal functional programming features so I would be surprised if many people would use it especially for that. For example there is no standard way to do function composition and the standard library's reduce() has been deprecated in favor of explicit loops.\nAlso, I don't think that map() or filter() are generally endorsed. In opposite, usually list comprehensions seem to be preferred.\n",
"Most answers on StackOverflow are short, concise answers, and the functional aspects of python make writing that kind of answers easy.\nPython's OO-features simply aren't needed in 10-20 line answers, so you don't see them around here as much.\n",
"I select Python when I'm taking on a problem that maps well to an OO solution. Python only provides a limited ability to program in a functional manner compared to full blown functional languages.\nIf I really want functional programming, I use Lisp.\n"
] | [
70,
27,
10,
6,
5,
1
] | [] | [] | [
"functional_programming",
"oop",
"python"
] | stackoverflow_0002984460_functional_programming_oop_python.txt |
Q:
module "random" not found when building .exe from IronPython 2.6 script
I am using SharpDevelop to build an executable from my IronPython script. The only hitch is that my script has the line
import random
which works fine when I run the script through ipy.exe, but when I attempt to build and run an exe from the script in SharpDevelop, I always get the message:
IronPython.Runtime.Exceptions.ImportException: No module named random
Why isn't SharpDevelop 'seeing' random? How can I make it see it?
A:
When you run an IronPython script with ipy.exe the path to the Python Standard Library is typically determined from one of the following:
The IRONPYTHONPATH environment variable.
Code in the lib\site.py, next to ipy.exe, that adds the location of the Python Standard Library to the path.
An IronPython executable produced by SharpDevelop will not do these initial setup tasks. So you will need to add some extra startup code before you import the random library. Here are a few ways you can do this:
Add the location of the Python Standard Library to sys.path directly.
import sys
sys.path.append(r'c:\python26\lib')
Get the location of the Python Standard Library from the IRONPYTHONPATH environment variable.
from System import Environment
pythonPath = Environment.GetEnvironmentVariable("IRONPYTHONPATH")
import sys
sys.path.append(pythonPath)
Read the location of the Python Standard Library from the registry (HKLM\Software\Python\PythonCore\2.6\PythonPath).
Read the location of the Python Standard Library from a separate config file that you ship with your application.
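A sketch of that last config-file approach (the stdlib.path file name and its one-line format are assumptions for illustration, not part of IronPython; the snippet runs under CPython as well):

```python
import os
import sys
import tempfile

# Hypothetical "stdlib.path" config file shipped next to the executable;
# its single line names the Python Standard Library directory. Here we
# generate it in a temp dir just so the sketch is self-contained.
cfgdir = tempfile.mkdtemp()
cfg = os.path.join(cfgdir, "stdlib.path")
with open(cfg, "w") as f:
    f.write("c:\\python26\\lib\n")

# At startup, read the configured path and add it to sys.path before any
# standard-library imports such as 'import random'.
with open(cfg) as f:
    libdir = f.read().strip()
if libdir not in sys.path:
    sys.path.append(libdir)
print(libdir)  # c:\python26\lib
```

In a real deployment the config file would be written once at install time rather than generated at runtime.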
Another alternative is to compile the parts of the Python Standard Library your application needs into one or more .NET assemblies. That way you will not need the end user of your application to have the Python Standard Library installed.
| module "random" not found when building .exe from IronPython 2.6 script | I am using SharpDevelop to build an executable from my IronPython script. The only hitch is that my script has the line
import random
which works fine when I run the script through ipy.exe, but when I attempt to build and run an exe from the script in SharpDevelop, I always get the message:
IronPython.Runtime.Exceptions.ImportException: No module named random
Why isn't SharpDevelop 'seeing' random? How can I make it see it?
| [
"When you run an IronPython script with ipy.exe the path to the Python Standard Library is typically determined from one of the following:\n\nThe IRONPYTHONPATH environment variable.\nCode in the lib\\site.py, next to ipy.exe, that adds the location of the Python Standard Library to the path.\n\nAn IronPython executable produced by SharpDevelop will not do these initial setup tasks. So you will need to add some extra startup code before you import the random library. Here are a few ways you can do this:\n\nAdd the location of the Python Standard Library to sys.path directly.\nimport sys\nsys.path.append(r'c:\\python26\\lib')\n\n\nGet the location of the Python Standard Library from the IRONPYTHONPATH environment variable.\nfrom System import Environment\npythonPath = Environment.GetEnvironmentVariable(\"IRONPYTHONPATH\")\nimport sys\nsys.path.append(pythonPath)\n\n\nRead the location of the Python Standard Library from the registry (HKLM\\Software\\Python\\PythonCore\\2.6\\PythonPath).\n\nRead the location of the Python Standard Library from a separate config file that you ship with your application.\n\n\nAnother alternative is to compile the parts of the Python Standard Library your application needs into one or more .NET assemblies. That way you will not need the end user of your application to have the Python Standard Library installed.\n"
] | [
3
] | [] | [] | [
"ironpython",
"python",
"random",
"sharpdevelop"
] | stackoverflow_0002984561_ironpython_python_random_sharpdevelop.txt |
Q:
Which os is better for development : Debian or Ubuntu?
Are there any real differences between them?
I want to program in java and python. And of course be a normal user: internet, etc
Which one will give me fewer headaches/more satisfaction?
And which is better for a server machine ?
Thank you
A:
Since Ubuntu is based on Debian, development is almost exactly the same for both. They're both quite suitable for server machines. The fundamental difference is that Debian follows a Free software ideology, while Ubuntu sacrifices that purity for practicality when no Free equivalent exists for important proprietary software.
If you choose Debian, you will have a choice of distribution series ("unstable" / "testing") that may get you newer releases of pre-packaged software a few months sooner than Ubuntu. Unless your development projects require bleeding-edge kernel or support libraries, this probably won't matter to you at all.
If you choose Ubuntu, certain proprietary software might be easier to install because it will be available through package repositories. For example, nVidia's proprietary video driver. That's not to say you can't make such things work on Debian; they will simply be easier on Ubuntu.
I personally choose Ubuntu, for these reasons:
Ubuntu has a free multi-platform build farm and software hosting system called Personal Package Archives. (Only to be used for freely redistributable software, of course.)
The Ubuntu bug reporting/tracking system is far more user friendly than Debian's.
Software packages I develop are guaranteed to work (with no extra dependency testing) for Ubuntu users, of which there are many.
I'd seriously consider switching to Debian on my workstation if they offered a PPA equivalent. I don't use Ubuntu-centric stuff like Unity desktop anyway, and I no longer need nVidia graphics drivers (I finally got tired of their deeply broken OS support and switched to an AMD card). I already run Debian on my servers.
A:
Both use Debian packages and Ubuntu is based on Debian but is more user friendly. Everything you can do on one you can do on the other. I'd recommend Ubuntu if you're new to Linux on a desktop. Though when it comes to servers I'd recommend Debian as it has less stuff "taken out" basically.
A:
java and python would most likely run the same on both.
With Ubuntu you get additional support and a more active community, and perhaps a larger user base.
So if and when you face a particular problem, chances are with Ubuntu, the solution will appear faster.
(although, whatever works on this should work on the other as well in theory)
A:
Ubuntu is the more user-friendly of the two (I think Ubuntu is actually one of the most newbie-friendly Linux distros), so if you are new to Linux, Ubuntu is the way to go. Otherwise, the packages are mostly the same except for branding, so it's pretty much your choice.
A:
In Ubuntu it is a bit easier to install packages for Java development, but it doesn't really matter that much. Remember that Ubuntu is based on Debian, so it works the same. Ubuntu just adds more user-friendly GUI's.
A:
Neither is better. They both support the same tools and libraries. They are both linux. Anything and everything you can do on one you can do on the other.
| Which os is better for development : Debian or Ubuntu? | Are there any real differences between them?
I want to program in java and python. And of course be a normal user: internet, etc
Which one will give me fewer headaches/more satisfaction?
And which is better for a server machine ?
Thank you
| [
"Since Ubuntu is based on Debian, development is almost exactly the same for both. They're both quite suitable for server machines. The fundamental difference is that Debian follows a Free software ideology, while Ubuntu sacrifices that purity for practicality when no Free equivalent exists for important proprietary software.\nIf you choose Debian, you will have a choice of distribution series (\"unstable\" / \"testing\") that may get you newer releases of pre-packaged software a few months sooner than Ubuntu. Unless your development projects require bleeding-edge kernel or support libraries, this probably won't matter to you at all.\nIf you choose Ubuntu, certain proprietary software might be easier to install because it will be available through package repositories. For example, nVidia's proprietary video driver. That's not to say you can't make such things work on Debian; they will simply be easier on Ubuntu.\nI personally choose Ubuntu, for these reasons:\n\nUbuntu has a free multi-platform build farm and software hosting system called Personal Package Archives. (Only to be used for freely redistributable software, of course.)\nThe Ubuntu bug reporting/tracking system is far more user friendly than Debian's.\nSoftware packages I develop are guaranteed to work (with no extra dependency testing) for Ubuntu users, of which there are many.\n\nI'd seriously consider switching to Debian on my workstation if they offered a PPA equivalent. I don't use Ubuntu-centric stuff like Unity desktop anyway, and I no longer need nVidia graphics drivers (I finally got tired of their deeply broken OS support and switched to an AMD card). I already run Debian on my servers.\n",
"Both use Debian packages and Ubuntu is based on Debian but is more user friendly. Everything you can do on one you can do on the other. I'd recommend Ubuntu if you're new to Linux on a desktop. Though when it comes to servers I'd recommend Debian as it has less stuff \"taken out\" basically.\n",
"java and python would most likely run the same on both.\nWith Ubuntu you get additional space of support and active community, and perhaps larger user base. \nSo if and when you face a particular problem, chances are with Ubuntu, the solution will appear faster. \n(although, whatever works on this should work on the other as well in theory) \n",
"Ubuntu is the more user-friendly of the two (I think Ubuntu is actually one of the most newbie-friendly Linux distros), so if you are new to Linux, Ubuntu is the way to go. Otherwise, the packages are mostly the same except for branding, so it's pretty much your choice.\n",
"In Ubuntu it is a bit easier to install packages for Java development, but it doesn't really matter that much. Remember that Ubuntu is based on Debian, so it works the same. Ubuntu just adds more user-friendly GUI's.\n",
"Neither is better. They both support the same tools and libraries. They are both linux. Anything and everything you can do on one you can do on the other. \n"
] | [
14,
4,
2,
2,
1,
1
] | [] | [] | [
"debian",
"java",
"operating_system",
"python",
"ubuntu"
] | stackoverflow_0002985426_debian_java_operating_system_python_ubuntu.txt |
Q:
DeprecationWarning when pushing to Mercurial repo
I'm trying to serve a mercurial repository with apache, and when I try to push to the repo I see this in the apache error.log. On the client side I get a 500 error.
How do I get this to go away????
[Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] /var/lib/python-support/python2.6/mercurial/hgweb/common.py:24: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
[Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] self.message = message
[Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] /var/lib/python-support/python2.6/mercurial/hgweb/hgweb_mod.py:104: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
[Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] if not inst.message:
[Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] /var/lib/python-support/python2.6/mercurial/hgweb/hgweb_mod.py:106: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
[Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] return '0\\n%s\\n' % inst.message,
A:
The deprecation warning is a red herring. It's just letting you know that the server code accessed a python exception in a way that will eventually be unsupported. What you really want to find out is what exception was raised in the first place. (Was there an error message along with that 500 error?)
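If the warning noise in the logs is itself a nuisance, Python's standard warnings module can filter it out, for example near the top of the hgweb CGI/WSGI script. A self-contained sketch (the filter only cleans up the log; it does nothing about whatever exception actually caused the 500):

```python
import warnings

# In the hgweb script you would simply call, once, before other imports:
#     warnings.filterwarnings("ignore", category=DeprecationWarning)
#
# Demonstration below uses catch_warnings(record=True) so we can observe
# which warnings survive the filter.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # record everything by default...
    warnings.filterwarnings("ignore", category=DeprecationWarning)  # ...except these
    warnings.warn("BaseException.message has been deprecated", DeprecationWarning)
    warnings.warn("something else", UserWarning)

print(len(caught))  # 1: only the UserWarning survives the filter
```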
| DeprecationWarning when pushing to Mercurial repo | I'm trying to serve a mercurial repository with apache, and when I try to push to the repo I see this in the apache error.log. On the client side I get a 500 error.
How do I get this to go away????
[Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] /var/lib/python-support/python2.6/mercurial/hgweb/common.py:24: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
[Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] self.message = message
[Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] /var/lib/python-support/python2.6/mercurial/hgweb/hgweb_mod.py:104: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
[Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] if not inst.message:
[Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] /var/lib/python-support/python2.6/mercurial/hgweb/hgweb_mod.py:106: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
[Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] return '0\\n%s\\n' % inst.message,
| [
"The deprecation warning is a red herring. It's just letting you know that the server code accessed a python exception in a way that will eventually be unsupported. What you really want to find out is what exception was raised in the first place. (Was there an error message along with that 500 error?)\n"
] | [
0
] | [] | [] | [
"apache",
"mercurial",
"python"
] | stackoverflow_0002985577_apache_mercurial_python.txt |
Q:
Database query optimization
Ok my Giant friends once again I seek a little space in your shoulders :P
Here is the issue, I have a python script that is fixing some database issues but it is taking way too long, the main update statement is this:
cursor.execute("UPDATE jiveuser SET username = '%s' WHERE userid = %d" % (newName,userId))
That is getting called about 9500 times with different newName and userid pairs...
Any suggestions on how to speed up the process? Maybe somehow a way where I can do all updates with just one query?
Any help will be much appreciated!
PS: Postgres is the db being used.
A:
Insert all the data into another empty table (called userchanges, say) then UPDATE in a single batch:
UPDATE jiveuser
SET username = userchanges.username
FROM userchanges
WHERE userchanges.userid = jiveuser.userid
AND userchanges.username <> jiveuser.username
See this documentation on the COPY command for bulk loading your data.
There are also tips for improving performance when populating a database.
A:
First of all, do not use the % operator to construct your SQL. Instead, pass your tuple of arguments as the second parameter to cursor.execute, which also negates the need to quote your argument and allows you to use %s for everything:
cursor.execute("UPDATE jiveuser SET username = %s WHERE userid = %s", (newName, userId))
This is important to prevent SQL Injection attacks.
To answer your question, you can speed up these updates by creating an index on the userid column, which will allow the database to update in O(1) constant time rather than having to scan the entire database table, which is O(n). Since you're using PostgreSQL, here's the syntax to create your index:
CREATE INDEX username_lookup ON jiveuser (userid);
EDIT: Since your comment reveals that you already have an index on the userid column, there's not much you could possibly do to speed up that query. So your main choices are either living with the slowness, since this sounds like a one-time fix-something-broken thing, or following VeeArr's advice and testing whether cursor.executemany will give you a sufficient boost.
A:
The reason it's taking so long is probably that you've got autocommit enabled and each update gets done in its own transaction.
This is slow because even if you have a battery-backed raid controller (which you should definitely have on all database servers, of course), it still needs to do a write into that device for every transaction commit to ensure durability.
The solution is to do more than one row per transaction. But don't make transactions TOO big or you run into problems too. Try committing every 10,000 rows of changes as a rough guess.
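A sketch of that batching pattern using the DB-API (shown with sqlite3 so it is self-contained; a psycopg2 connection commits the same way, with %s placeholders instead of ?):

```python
import sqlite3

conn = sqlite3.connect(":memory:")    # stand-in for the Postgres connection
cur = conn.cursor()
cur.execute("CREATE TABLE jiveuser (userid INTEGER PRIMARY KEY, username TEXT)")
cur.executemany("INSERT INTO jiveuser VALUES (?, ?)",
                [(i, "old") for i in range(25)])
conn.commit()

updates = [("name%d" % i, i) for i in range(25)]   # (newName, userId) pairs
BATCH = 10                                         # rows per transaction

for n, (name, uid) in enumerate(updates, 1):
    cur.execute("UPDATE jiveuser SET username = ? WHERE userid = ?", (name, uid))
    if n % BATCH == 0:      # commit once per BATCH rows instead of per row
        conn.commit()
conn.commit()               # flush the final partial batch

changed = cur.execute(
    "SELECT count(*) FROM jiveuser WHERE username LIKE 'name%'").fetchone()[0]
print(changed)  # 25
```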
A:
You might want to look into executemany(): Information here
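For example (illustrated with sqlite3 so it runs standalone; psycopg2's cursor has the same executemany method, using %s placeholders):

```python
import sqlite3

# (newName, userId) pairs; in the real script there are ~9500 of them.
updates = [("alice", 1), ("bob", 2), ("carol", 3)]

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE jiveuser (userid INTEGER PRIMARY KEY, username TEXT)")
cur.executemany("INSERT INTO jiveuser VALUES (?, ?)",
                [(1, "x"), (2, "y"), (3, "z")])

# One executemany call runs the statement once per parameter tuple, and the
# parameter binding also sidesteps the string-formatting / SQL-injection issue.
cur.executemany("UPDATE jiveuser SET username = ? WHERE userid = ?", updates)
conn.commit()

rows = cur.execute("SELECT username FROM jiveuser ORDER BY userid").fetchall()
print(rows)  # [('alice',), ('bob',), ('carol',)]
```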
A:
Perhaps you can create an index on userid to speed things up.
A:
I'd do an explain on this. If it's doing an indexed lookup to find the record -- which it should if you have an index on userid -- then I don't see what you could do to improve performance. If it's not using the index, then the trick is figuring out why not and fixing it.
Oh, you could try using a prepared statement. With 9500 inserts, that should help.
A:
Move this to a stored procedure and execute it from the database itself.
A:
First ensure you have an index on 'userid', this will ensure the dbms doesn't have to do a table scan each time
CREATE INDEX jiveuser_userid ON jiveuser (userid);
Next try preparing the statement, and then calling execute on it. This will stop the optimizer from having to examine the query each time
PREPARE update_username(text, integer) AS UPDATE jiveuser SET username = $1 WHERE userid = $2;
EXECUTE update_username('New Name', 123);
Finally, a bit more performance could be squeezed out by turning off autocommit
\set AUTOCOMMIT off
| Database query optimization | Ok my Giant friends once again I seek a little space in your shoulders :P
Here is the issue, I have a python script that is fixing some database issues but it is taking way too long, the main update statement is this:
cursor.execute("UPDATE jiveuser SET username = '%s' WHERE userid = %d" % (newName,userId))
That is getting called about 9500 times with different newName and userid pairs...
Any suggestions on how to speed up the process? Maybe somehow a way where I can do all updates with just one query?
Any help will be much appreciated!
PS: Postgres is the db being used.
| [
"Insert all the data into another empty table (called userchanges, say) then UPDATE in a single batch:\nUPDATE jiveuser\nSET username = userchanges.username\nFROM userchanges\nWHERE userchanges.userid = jiveuser.userid\n AND userchanges.username <> jiveuser.username\n\nSee this documentation on the COPY command for bulk loading your data.\nThere are also tips for improving performance when populating a database.\n",
"First of all, do not use the % operator to construct your SQL. Instead, pass your tuple of arguments as the second parameter to cursor.execute, which also negates the need to quote your argument and allows you to use %s for everything:\ncursor.execute(\"UPDATE jiveuser SET username = %s WHERE userid = %s\", (newName, userId))\n\nThis is important to prevent SQL Injection attacks.\nTo answer your question, you can speed up these updates by creating an index on the userid column, which will allow the database to update in O(1) constant time rather than having to scan the entire database table, which is O(n). Since you're using PostgreSQL, here's the syntax to create your index:\nCREATE INDEX username_lookup ON jiveuser (userid);\n\nEDIT: Since your comment reveals that you already have an index on the userid column, there's not much you could possibly do to speed up that query. So your main choices are either living with the slowness, since this sounds like a one-time fix-something-broken thing, or following VeeArr's advice and testing whether cursor.executemany will give you a sufficient boost.\n",
"The reason it's taking so long is probably that you've got autocommit enabled and each update gets done in its own transaction.\nThis is slow because even if you have a battery-backed raid controller (which you should definitely have on all database servers, of course), it still needs to do a write into that device for every transaction commit to ensure durability.\nThe solution is to do more than one row per transaction. But don't make transactions TOO big or you run into problems too. Try committing every 10,000 rows of changes as a rough guess.\n",
"You might want to look into executemany(): Information here\n",
"Perhaps you can create an index on userid to speed things up.\n",
"I'd do an explain on this. If it's doing an indexed lookup to find the record -- which it should if you have an index on userid -- then I don't see what you could do to improve performance. If it's not using the index, then the trick is figuring out why not and fixing it.\nOh, you could try using a prepared statement. With 9500 inserts, that should help.\n",
"Move this to a stored procedure and execute it from the database itself. \n",
"First ensure you have an index on 'userid', this will ensure the dbms doesn't have to do a table scan each time\nCREATE INDEX jiveuser_userid ON jiveuser (userid);\n\nNext try preparing the statement, and then calling execute on it. This will stop the optimizer from having to examine the query each time\nPREPARE update_username(text, integer) AS UPDATE jiveuser SET username = $1 WHERE userid = $2;\nEXECUTE update_username('New Name', 123);\n\nFinally, a bit more performance could be squeezed out by turning off autocommit\n\\set AUTOCOMMIT off\n\n"
] | [
4,
3,
3,
2,
1,
1,
0,
0
] | [] | [] | [
"database",
"postgresql",
"python",
"query_optimization"
] | stackoverflow_0002968451_database_postgresql_python_query_optimization.txt |
Q:
Moving to an arbitrary position in a file in Python
Let's say that I routinely have to work with files with an unknown, but large, number of lines. Each line contains a set of integers (space, comma, semicolon, or some non-numeric character is the delimiter) in the closed interval [0, R], where R can be arbitrarily large. The number of integers on each line can be variable. Often times I get the same number of integers on each line, but occasionally I have lines with unequal sets of numbers.
Suppose I want to go to Nth line in the file and retrieve the Kth number on that line (and assume that the inputs N and K are valid --- that is, I am not worried about bad inputs). How do I go about doing this efficiently in Python 3.1.2 for Windows?
I do not want to traverse the file line by line.
I tried using mmap, but while poking around here on SO, I learned that that's probably not the best solution on a 32-bit build because of the 4GB limit. And in truth, I couldn't really figure out how to simply move N lines away from my current position. If I can at least just "jump" to the Nth line then I can use .split() and grab the Kth integer that way.
The nuance here is that I don't just need to grab one line from the file. I will need to grab several lines: they are not necessarily all near each other, the order in which I get them matters, and the order is not always based on some deterministic function.
Any ideas? I hope this is enough information.
Thanks!
A:
Python's seek goes to a byte offset in a file, not to a line offset, simply because that's the way modern operating systems and their filesystems work -- the OS/FS just don't record or remember "line offsets" in any way whatsoever, and there's no way for Python (or any other language) to just magically guess them. Any operation purporting to "go to a line" will inevitably need to "walk through the file" (under the covers) to make the association between line numbers and byte offsets.
If you're OK with that and just want it hidden from your sight, then the solution is the standard library module linecache -- but performance won't be any better than that of code you could write yourself.
If you need to read from the same large file multiple times, a large optimization would be to run once on that large file a script that builds and saves to disk the line number - to - byte offset correspondence (technically an "index" auxiliary file); then, all your successive runs (until the large file changes) could very speedily use the index file to navigate with very high performance through the large file. Is this your use case...?
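For the simple case, a minimal linecache sketch of the "Kth number on the Nth line" task (note it still reads through the file internally on first access, then caches it):

```python
import linecache
import os
import re
import tempfile

# Small stand-in for the large input file, with mixed delimiters.
path = os.path.join(tempfile.mkdtemp(), "numbers.txt")
with open(path, "w") as f:
    f.write("10 20 30\n40,50,60\n70;80;90\n")

line = linecache.getline(path, 2)          # N = 2 (linecache is 1-based)
numbers = re.split(r"\D+", line.strip())   # split on any non-numeric delimiter
k = 1                                      # K = 1 (0-based here)
print(numbers[k])  # 50
```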
Edit: since apparently this may apply -- here's the general idea (net of careful testing, error checking, or optimization;-). To make the index, use makeindex.py, as follows:
import array
import sys
BLOCKSIZE = 1024 * 1024
def reader(f):
blockstart = 0
while True:
block = f.read(BLOCKSIZE)
if not block: break
inblock = 0
while True:
nextnl = block.find(b'\n', inblock)
if nextnl < 0:
blockstart += len(block)
break
yield nextnl + blockstart
inblock = nextnl + 1
def doindex(fn):
with open(fn, 'rb') as f:
# result format: x[0] is tot # of lines,
# x[N] is byte offset of END of line N (1+)
result = array.array('L', [0])
result.extend(reader(f))
result[0] = len(result) - 1
return result
def main():
for fn in sys.argv[1:]:
index = doindex(fn)
with open(fn + '.indx', 'wb') as p:
print('File', fn, 'has', index[0], 'lines')
index.tofile(p)
main()
and then to use it, for example, the following useindex.py:
import array
import sys
def readline(n, f, findex):
f.seek(findex[n] + 1)
bytes = f.read(findex[n+1] - findex[n])
return bytes.decode('utf8')
def main():
fn = sys.argv[1]
with open(fn + '.indx', 'rb') as f:
findex = array.array('l')
findex.fromfile(f, 1)
findex.fromfile(f, findex[0])
findex[0] = -1
with open(fn, 'rb') as f:
for n in sys.argv[2:]:
print(n, repr(readline(int(n), f, findex)))
main()
Here's an example (on my slow laptop):
$ time py3 makeindex.py kjv10.txt
File kjv10.txt has 100117 lines
real 0m0.235s
user 0m0.184s
sys 0m0.035s
$ time py3 useindex.py kjv10.txt 12345 98765 33448
12345 '\r\n'
98765 '2:6 But this thou hast, that thou hatest the deeds of the\r\n'
33448 'the priest appointed officers over the house of the LORD.\r\n'
real 0m0.049s
user 0m0.028s
sys 0m0.020s
$
The sample file is a plain text file of King James' Bible:
$ wc kjv10.txt
100117 823156 4445260 kjv10.txt
100K lines, 4.4 MB, as you can see; this takes about a quarter second to index and 50 milliseconds to read and print out three random-y lines (no doubt this can be vastly accelerated with more careful optimization and a better machine). The index in memory (and on disk too) takes 4 bytes per line of the textfile being indexed, and performance should scale in a perfectly linear way, so if you had about 100 million lines, 4.4 GB, I would expect about 4-5 minutes to build the index, a minute to extract and print out three arbitrary lines (and the 400 MB of RAM taken for the index should not inconvenience even a small machine -- even my tiny slow laptop has 2GB after all;-).
You can also see that (for speed and convenience) I treat the file as binary (and assume utf8 encoding -- works with any subset like ASCII too of course, eg that KJ text file is ASCII) and don't bother collapsing \r\n into a single character if that's what the file has as line terminator (it's pretty trivial to do that after reading each line if you want).
A:
The problem is that since your lines are not of fixed length, you have to pay attention to line end markers to do your seeking, and that effectively becomes "traversing the file line by line". Thus, any viable approach is still going to be traversing the file, it's merely a matter of what can traverse it fastest.
A:
Another solution, if the file is potentially going to change a lot, is to go all the way to a proper database. The database engine will create and maintain the indexes for you so you can do very fast searches/queries.
This may be an overkill though.
| Moving to an arbitrary position in a file in Python | Let's say that I routinely have to work with files with an unknown, but large, number of lines. Each line contains a set of integers (space, comma, semicolon, or some non-numeric character is the delimiter) in the closed interval [0, R], where R can be arbitrarily large. The number of integers on each line can be variable. Often times I get the same number of integers on each line, but occasionally I have lines with unequal sets of numbers.
Suppose I want to go to Nth line in the file and retrieve the Kth number on that line (and assume that the inputs N and K are valid --- that is, I am not worried about bad inputs). How do I go about doing this efficiently in Python 3.1.2 for Windows?
I do not want to traverse the file line by line.
I tried using mmap, but while poking around here on SO, I learned that that's probably not the best solution on a 32-bit build because of the 4GB limit. And in truth, I couldn't really figure out how to simply move N lines away from my current position. If I can at least just "jump" to the Nth line then I can use .split() and grab the Kth integer that way.
The nuance here is that I don't just need to grab one line from the file. I will need to grab several lines: they are not necessarily all near each other, the order in which I get them matters, and the order is not always based on some deterministic function.
Any ideas? I hope this is enough information.
Thanks!
| [
"Python's seek goes to a byte offset in a file, not to a line offset, simply because that's the way modern operating systems and their filesystems work -- the OS/FS just don't record or remember \"line offsets\" in any way whatsoever, and there's no way for Python (or any other language) to just magically guess them. Any operation purporting to \"go to a line\" will inevitably need to \"walk through the file\" (under the covers) to make the association between line numbers and byte offsets.\nIf you're OK with that and just want it hidden from your sight, then the solution is the standard library module linecache -- but performance won't be any better than that of code you could write yourself.\nIf you need to read from the same large file multiple times, a large optimization would be to run once on that large file a script that builds and saves to disk the line number - to - byte offset correspondence (technically an \"index\" auxiliary file); then, all your successive runs (until the large file changes) could very speedily use the index file to navigate with very high performance through the large file. Is this your use case...?\nEdit: since apparently this may apply -- here's the general idea (net of careful testing, error checking, or optimization;-). 
To make the index, use makeindex.py, as follows:\nimport array\nimport sys\n\nBLOCKSIZE = 1024 * 1024\n\ndef reader(f):\n blockstart = 0\n while True:\n block = f.read(BLOCKSIZE)\n if not block: break\n inblock = 0\n while True:\n nextnl = block.find(b'\\n', inblock)\n if nextnl < 0:\n blockstart += len(block)\n break\n yield nextnl + blockstart\n inblock = nextnl + 1\n\ndef doindex(fn):\n with open(fn, 'rb') as f:\n # result format: x[0] is tot # of lines,\n # x[N] is byte offset of END of line N (1+)\n result = array.array('L', [0])\n result.extend(reader(f))\n result[0] = len(result) - 1\n return result\n\ndef main():\n for fn in sys.argv[1:]:\n index = doindex(fn)\n with open(fn + '.indx', 'wb') as p:\n print('File', fn, 'has', index[0], 'lines')\n index.tofile(p)\n\nmain()\n\nand then to use it, for example, the following useindex.py:\nimport array\nimport sys\n\ndef readline(n, f, findex):\n f.seek(findex[n] + 1)\n bytes = f.read(findex[n+1] - findex[n])\n return bytes.decode('utf8')\n\ndef main():\n fn = sys.argv[1]\n with open(fn + '.indx', 'rb') as f:\n findex = array.array('l')\n findex.fromfile(f, 1)\n findex.fromfile(f, findex[0])\n findex[0] = -1\n with open(fn, 'rb') as f:\n for n in sys.argv[2:]:\n print(n, repr(readline(int(n), f, findex)))\n\nmain()\n\nHere's an example (on my slow laptop):\n$ time py3 makeindex.py kjv10.txt \nFile kjv10.txt has 100117 lines\n\nreal 0m0.235s\nuser 0m0.184s\nsys 0m0.035s\n$ time py3 useindex.py kjv10.txt 12345 98765 33448\n12345 '\\r\\n'\n98765 '2:6 But this thou hast, that thou hatest the deeds of the\\r\\n'\n33448 'the priest appointed officers over the house of the LORD.\\r\\n'\n\nreal 0m0.049s\nuser 0m0.028s\nsys 0m0.020s\n$ \n\nThe sample file is a plain text file of King James' Bible:\n$ wc kjv10.txt\n100117 823156 4445260 kjv10.txt\n\n100K lines, 4.4 MB, as you can see; this takes about a quarter second to index and 50 milliseconds to read and print out three random-y lines (no doubt this can be vastly 
accelerated with more careful optimization and a better machine). The index in memory (and on disk too) takes 4 bytes per line of the textfile being indexed, and performance should scale in a perfectly linear way, so if you had about 100 million lines, 4.4 GB, I would expect about 4-5 minutes to build the index, a minute to extract and print out three arbitrary lines (and the 400 MB of RAM taken for the index should not inconvenience even a small machine -- even my tiny slow laptop has 2GB after all;-).\nYou can also see that (for speed and convenience) I treat the file as binary (and assume utf8 encoding -- works with any subset like ASCII too of course, eg that KJ text file is ASCII) and don't bother collapsing \\r\\n into a single character if that's what the file has as line terminator (it's pretty trivial to do that after reading each line if you want).\n",
"The problem is that since your lines are not of fixed length, you have to pay attention to line end markers to do your seeking, and that effectively becomes \"traversing the file line by line\". Thus, any viable approach is still going to be traversing the file, it's merely a matter of what can traverse it fastest.\n",
"Another solution, if the file is potentially going to change a lot, is to go full-way to a proper database. The database engine will create and maintain the indexes for you so you can do very fast searches/queries. \nThis may be an overkill though.\n"
] | [
17,
4,
0
] | [] | [] | [
"file",
"python",
"python_3.x"
] | stackoverflow_0002985725_file_python_python_3.x.txt |
Q:
High level audio crossfading library for python
I am looking for a high level audio library that supports crossfading for python (and that works in linux). In fact crossfading a song and saving it is about the only thing I need.
I tried pyechonest but I find it really slow. Working with multiple songs at the same time is hard on memory too (I tried to crossfade about 10 songs in one, but I got out of memory errors and my script was using 1.4Gb of memory). So now I'm looking for something else that works with python.
I have no idea if anything like that exists. If not, are there good command-line tools for this? I could write a wrapper for the tool.
A:
A list of Python sound libraries.
Play a Sound with Python
PyGame or Snack would work, but for this, I'd use something like audioop.
— basic first steps here : merge background audio file
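The audioop module only supplies the primitives (mul to scale a fragment, add to mix two fragments); the fade ramp itself is up to you. Here is a minimal pure-stdlib sketch of the idea, operating on plain lists of PCM sample values rather than raw frames (reading and writing frames with the wave module is left out):

```python
def crossfade(tail, head):
    """Linearly crossfade two equal-length sequences of PCM samples:
    `tail` ramps from full volume down to silence while `head` ramps up."""
    if len(tail) != len(head):
        raise ValueError("fragments must be the same length")
    n = len(tail)
    out = []
    for i in range(n):
        f = i / float(max(n - 1, 1))        # mix factor: 0.0 -> 1.0
        out.append(int(tail[i] * (1.0 - f) + head[i] * f))
    return out
```

With audioop you would do the same thing per chunk of raw bytes (audioop.mul(fragment, width, factor) followed by audioop.add), which keeps the arithmetic in C and the memory footprint bounded.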
A:
A scriptable solution using external tools AviSynth and avs2wav or WAVI:
Create an AviSynth script file:
test.avs
v=ColorBars()
a1=WAVSource("audio1.wav").FadeOut(50)
a2=WAVSource("audio2.wav").Reverse.FadeOut(50).Reverse
AudioDub(v,a1+a2)
The script fades out audio1 and stores that in a1, then fades in audio2 (via the Reverse/FadeOut/Reverse trick) and stores that in a2.
a1 & a2 are concatenated and then dubbed with a Colorbar screen pattern to make a video.
You can't just work with audio alone - a valid video must be generated.
I kept the script as simple as possible for demonstration purposes. Google for more details on audio processing via AviSynth.
Now using avs2wav (or WAVI) you can render the audio:
avs2wav.exe test.avs combined.wav
or
wavi.exe test.avs combined.wav
Good luck!
Some references:
How to edit with Avisynth
AviSynth filters reference
| High level audio crossfading library for python | I am looking for a high level audio library that supports crossfading for python (and that works in linux). In fact crossfading a song and saving it is about the only thing I need.
I tried pyechonest but I find it really slow. Working with multiple songs at the same time is hard on memory too (I tried to crossfade about 10 songs in one, but I got out of memory errors and my script was using 1.4Gb of memory). So now I'm looking for something else that works with python.
I have no idea if anything like that exists. If not, are there good command-line tools for this? I could write a wrapper for the tool.
| [
"A list of Python sound libraries.\nPlay a Sound with Python\nPyGame or Snack would work, but for this, I'd use something like audioop.\n— basic first steps here : merge background audio file\n",
"A scriptable solution using external tools AviSynth and avs2wav or WAVI:\nCreate an AviSynth script file:\ntest.avs\nv=ColorBars() \na1=WAVSource(\"audio1.wav\").FadeOut(50) \na2=WAVSource(\"audio2.wav\").Reverse.FadeOut(50).Reverse\nAudioDub(v,a1+a2)\n\nScript fades out on audio1 stores that in a1 then fades in on audio2 and stores that in a2.\na1 & a2 are concatenated and then dubbed with a Colorbar screen pattern to make a video. \nYou can't just work with audio alone - a valid video must be generated. \nI kept the script as simple as possible for demonstration purposes. Google for more details on audio processing via AviSynth.\nNow using avs2wav (or WAVI) you can render the audio:\navs2wav.exe test.avs combined.wav\n\nor\nwavi.exe test.avs combined.wav\n\nGood luck!\nSome references:\nHow to edit with Avisynth\nAviSynth filters reference\n"
] | [
1,
0
] | [] | [] | [
"audio",
"python"
] | stackoverflow_0002984390_audio_python.txt |
Q:
Python: why does this code take forever (infinite loop?)
I'm developing an app in Google App Engine. One of my methods is never completing, which makes me think it's caught in an infinite loop. I've stared at it, but can't figure it out.
Disclaimer: I'm using GAEUnit (http://code.google.com/p/gaeunit) to run my tests. Perhaps it's acting oddly?
This is the problematic function:
def _traverseForwards(course, c_levels):
    ''' Looks forwards in the dependency graph '''
    result = {'nodes': [], 'arcs': []}
    if c_levels == 0:
        return result
    model_arc_tails_with_course = set(_getListArcTailsWithCourse(course))
    q_arc_heads = DependencyArcHead.all()
    for model_arc_head in q_arc_heads:
        for model_arc_tail in model_arc_tails_with_course:
            if model_arc_tail.key() in model_arc_head.tails:
                result['nodes'].append(model_arc_head.sink)
                result['arcs'].append(_makeArc(course, model_arc_head.sink))
                # rec_result = _traverseForwards(model_arc_head.sink, c_levels - 1)
                # _extendResult(result, rec_result)
    return result
Originally, I thought it might be a recursion error, but I commented out the recursion and the problem persists. If this function is called with c_levels = 0, it runs fine.
The models it references:
class Course(db.Model):
    dept_code = db.StringProperty()
    number = db.IntegerProperty()
    title = db.StringProperty()
    raw_pre_reqs = db.StringProperty(multiline=True)
    original_description = db.StringProperty()

    def getPreReqs(self):
        return pickle.loads(str(self.raw_pre_reqs))

    def __repr__(self):
        return "%s %s: %s" % (self.dept_code, self.number, self.title)

class DependencyArcTail(db.Model):
    ''' A list of courses that is a pre-req for something else '''
    courses = db.ListProperty(db.Key)

    def equals(self, arcTail):
        for this_course in self.courses:
            if not (this_course in arcTail.courses):
                return False
        for other_course in arcTail.courses:
            if not (other_course in self.courses):
                return False
        return True

class DependencyArcHead(db.Model):
    ''' Maintains a course, and a list of tails with that course as their sink '''
    sink = db.ReferenceProperty()
    tails = db.ListProperty(db.Key)
Utility functions it references:
def _makeArc(source, sink):
    return {'source': source, 'sink': sink}

def _getListArcTailsWithCourse(course):
    ''' returns a LIST, not SET
    there may be duplicate entries
    '''
    q_arc_heads = DependencyArcHead.all()
    result = []
    for arc_head in q_arc_heads:
        for key_arc_tail in arc_head.tails:
            model_arc_tail = db.get(key_arc_tail)
            if course.key() in model_arc_tail.courses:
                result.append(model_arc_tail)
    return result
Am I missing something pretty obvious here, or is GAEUnit acting up?
Also - the test that is making this run slow has no more than 5 models of any kind in the datastore. I know this is potentially slow, but my app only does this once then subsequently caches it.
A:
Ignoring the commented out recursion, I don't think this should be an infinite loop - you are just doing some for-loops over finite results sets.
However, it does seem like this would be really slow. You're looping over entire tables and then doing more datastore queries in every nested loop. It seems unlikely that this sort of request would complete in a timely manner on GAE unless your tables are really, really small.
Some rough numbers:
If H = # of entities in DependencyArcHead and T = average # of tails in each DependencyArcHead then:
_getListArcTailsWithCourse is doing about H*T queries (an underestimate). In the "worst" case, the result returned from this function will have H*T elements.
_traverseForwards loops over all these results H times, and thus does another H*(H*T) queries.
Even if H and T are only on the order of 10s, you could be doing thousands of queries. If they're bigger, then ... (and this ignores any additional queries you'd do if you uncommented the recursive call).
In short, I think you may want to try to organize your data a little differently if possible. I'd make a specific suggestion, but what exactly you're trying to do isn't clear to me.
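To make the "organize your data differently" suggestion concrete: in the old google.appengine.ext.db API, db.get also accepts a list of keys and fetches them all in one round trip, so the per-key db.get inside _getListArcTailsWithCourse can be batched per head. A sketch with the datastore call abstracted as a batch_get callable so it can run anywhere; the dict-shaped entities here are illustrative stand-ins, not the asker's real models:

```python
def arc_tails_with_course(arc_heads, batch_get, course_key):
    """Collect every tail entity that references course_key.

    batch_get stands in for db.get(list_of_keys): one datastore
    round trip per head instead of one fetch per tail key."""
    result = []
    for head in arc_heads:
        for tail in batch_get(head['tails']):
            if course_key in tail['courses']:
                result.append(tail)
    return result
```

This cuts the query count from roughly H*T to H, which is often the difference between timing out and finishing.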
| Python: why does this code take forever (infinite loop?) | I'm developing an app in Google App Engine. One of my methods is never completing, which makes me think it's caught in an infinite loop. I've stared at it, but can't figure it out.
Disclaimer: I'm using GAEUnit (http://code.google.com/p/gaeunit) to run my tests. Perhaps it's acting oddly?
This is the problematic function:
def _traverseForwards(course, c_levels):
    ''' Looks forwards in the dependency graph '''
    result = {'nodes': [], 'arcs': []}
    if c_levels == 0:
        return result
    model_arc_tails_with_course = set(_getListArcTailsWithCourse(course))
    q_arc_heads = DependencyArcHead.all()
    for model_arc_head in q_arc_heads:
        for model_arc_tail in model_arc_tails_with_course:
            if model_arc_tail.key() in model_arc_head.tails:
                result['nodes'].append(model_arc_head.sink)
                result['arcs'].append(_makeArc(course, model_arc_head.sink))
                # rec_result = _traverseForwards(model_arc_head.sink, c_levels - 1)
                # _extendResult(result, rec_result)
    return result
Originally, I thought it might be a recursion error, but I commented out the recursion and the problem persists. If this function is called with c_levels = 0, it runs fine.
The models it references:
class Course(db.Model):
    dept_code = db.StringProperty()
    number = db.IntegerProperty()
    title = db.StringProperty()
    raw_pre_reqs = db.StringProperty(multiline=True)
    original_description = db.StringProperty()

    def getPreReqs(self):
        return pickle.loads(str(self.raw_pre_reqs))

    def __repr__(self):
        return "%s %s: %s" % (self.dept_code, self.number, self.title)

class DependencyArcTail(db.Model):
    ''' A list of courses that is a pre-req for something else '''
    courses = db.ListProperty(db.Key)

    def equals(self, arcTail):
        for this_course in self.courses:
            if not (this_course in arcTail.courses):
                return False
        for other_course in arcTail.courses:
            if not (other_course in self.courses):
                return False
        return True

class DependencyArcHead(db.Model):
    ''' Maintains a course, and a list of tails with that course as their sink '''
    sink = db.ReferenceProperty()
    tails = db.ListProperty(db.Key)
Utility functions it references:
def _makeArc(source, sink):
    return {'source': source, 'sink': sink}

def _getListArcTailsWithCourse(course):
    ''' returns a LIST, not SET
    there may be duplicate entries
    '''
    q_arc_heads = DependencyArcHead.all()
    result = []
    for arc_head in q_arc_heads:
        for key_arc_tail in arc_head.tails:
            model_arc_tail = db.get(key_arc_tail)
            if course.key() in model_arc_tail.courses:
                result.append(model_arc_tail)
    return result
Am I missing something pretty obvious here, or is GAEUnit acting up?
Also - the test that is making this run slow has no more than 5 models of any kind in the datastore. I know this is potentially slow, but my app only does this once then subsequently caches it.
| [
"Ignoring the commented out recursion, I don't think this should be an infinite loop - you are just doing some for-loops over finite results sets.\nHowever, it does seem like this would be really slow. You're looping over entire tables and then doing more datastore queries in every nested loop. It seems unlikely that this sort of request would complete in a timely manner on GAE unless your tables are really, really small.\n\nSome rough numbers:\nIf H = # of entities in DependencyArcHead and T = average # of tails in each DependencyArcHead then:\n\n_getListArcTailsWithCourse is doing about H*T queries (an underestimate). In the \"worst\" case, the result returned from this function will have H*T elements.\n_traverseForwards loops over all these results H times, and thus does another H*(H*T) queries.\nEven if H and T are only on the order of 10s, you could be doing thousands of queries. If they're bigger, then ... (and this ignores any additional queries you'd do if you uncommented the recursive call).\n\nIn short, I think you may want to try to organize your data a little differently if possible. I'd make a specific suggestion, but what exactly you're trying to do isn't clear to me.\n"
] | [
3
] | [] | [] | [
"google_app_engine",
"infinite_loop",
"python",
"recursion"
] | stackoverflow_0002986098_google_app_engine_infinite_loop_python_recursion.txt |
Q:
How do I convert a regex with pack and hex from Perl to Python?
I am a newcomer to Python and am converting a Perl script. What is the Python equivalent to...
$value =~ s/%([a-fA-F0-9][a-fA-F0-9])/pack("C", hex($1))/eg;
Any help is greatly appreciated.
A:
You should likely just use urllib.unquote (python 2) / urllib.parse.unquote (python 3), since it looks like that's what the code you provided is trying to accomplish.
A:
value = value.decode('hex')
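For completeness, a direct translation of the Perl substitution is re.sub with a callable replacement, which is essentially what urllib.unquote does for you minus the edge-case handling:

```python
import re

def unquote_percent(value):
    """Python equivalent of
    s/%([a-fA-F0-9][a-fA-F0-9])/pack("C", hex($1))/eg
    -- replace each %XX escape with the byte it encodes."""
    return re.sub(r'%([a-fA-F0-9]{2})',
                  lambda m: chr(int(m.group(1), 16)),
                  value)
```

The lambda plays the role of Perl's /e modifier: the match is handed to code, and the return value is spliced back into the string.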
| How do I convert a regex with pack and hex from Perl to Python? | I am a newcomer to Python and am converting a Perl script. What is the Python equivalent to...
$value =~ s/%([a-fA-F0-9][a-fA-F0-9])/pack("C", hex($1))/eg;
Any help is greatly appreciated.
| [
"You should likely just use urllib.unquote (python 2) / urllib.parse.unquote (python 3), since it looks like that's what the code you provided is trying to accomplish.\n",
"value = value.decode('hex')\n\n"
] | [
1,
0
] | [] | [] | [
"perl",
"python"
] | stackoverflow_0002986299_perl_python.txt |
Q:
What is the best way to open a URL and get up to X bytes in Python?
I want to have a robot fetch a URL every hour, but if the site's operator is malicious he could have his server send me a 1 GB file. Is there a good way to limit downloading to, say, 100 KB and stop after that limit?
I can imagine writing my own connection handler from scratch, but I'd like to use urllib2 if at all possible, just specifying the limit somehow.
Thanks!
A:
This is probably what you're looking for:
import urllib
def download(url, bytes = 1024):
    """Copy the contents of a file from a given URL
    to a local file.
    """
    webFile = urllib.urlopen(url)
    localFile = open(url.split('/')[-1], 'wb')
    localFile.write(webFile.read(bytes))
    webFile.close()
    localFile.close()
| What is the best way to open a URL and get up to X bytes in Python? | I want to have a robot fetch a URL every hour, but if the site's operator is malicious he could have his server send me a 1 GB file. Is there a good way to limit downloading to, say, 100 KB and stop after that limit?
I can imagine writing my own connection handler from scratch, but I'd like to use urllib2 if at all possible, just specifying the limit somehow.
Thanks!
| [
"This is probably what you're looking for:\nimport urllib\n\ndef download(url, bytes = 1024):\n \"\"\"Copy the contents of a file from a given URL\n to a local file.\n \"\"\"\n webFile = urllib.urlopen(url)\n localFile = open(url.split('/')[-1], 'wb')\n localFile.write(webFile.read(bytes))\n webFile.close()\n localFile.close()\n\n"
] | [
7
] | [] | [] | [
"http",
"python",
"sockets",
"url"
] | stackoverflow_0002986392_http_python_sockets_url.txt |
Q:
Python: what modules have been imported in my process?
How can I get a list of the modules that have been imported into my process?
A:
sys.modules.values() ... if you really need the names of the modules, use sys.modules.keys()
dir() is not what you want.
>>> import re
>>> def foo():
...     import csv
...     fubar = 0
...     print dir()
...
>>> foo()
['csv', 'fubar'] # 're' is not in the current scope
>>>
A:
You can also run the interpreter with -v option if you just want to see the modules that are imported (and the order they are imported in)
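A small illustration of the sys.modules approach:

```python
import sys

def imported_module_names():
    """Names of every module the current process has imported so far.
    Failed or in-progress imports can leave None placeholders in
    sys.modules, so those are skipped."""
    return sorted(name for name, mod in sys.modules.items()
                  if mod is not None)
```

Unlike dir(), this reflects the whole process, not just the current scope, so modules imported inside functions or by third-party code show up too.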
| Python: what modules have been imported in my process? | How can I get a list of the modules that have been imported into my process?
| [
"sys.modules.values() ... if you really need the names of the modules, use sys.modules.keys()\ndir() is not what you want.\n>>> import re\n>>> def foo():\n... import csv\n... fubar = 0\n... print dir()\n...\n>>> foo()\n['csv', 'fubar'] # 're' is not in the current scope\n>>>\n\n",
"You can also run the interpreter with -v option if you just want to see the modules that are imported (and the order they are imported in)\n"
] | [
11,
4
] | [] | [] | [
"python",
"python_module"
] | stackoverflow_0002986419_python_python_module.txt |
Q:
Best practice for installing python modules from an arbitrary VCS repository
I'm newish to the python ecosystem, and have a question about module editing.
I use a bunch of third-party modules, distributed on PyPi. Coming from a C and Java background, I love the ease of easy_install <whatever>. This is a new, wonderful world, but the model breaks down when I want to edit the newly installed module for two reasons:
The egg files may be stored in a folder or archive somewhere crazy on the file system.
Using an egg seems to preclude using the version control system of the originating project, just as using a debian package precludes development from an originating VCS repository.
What is the best practice for installing modules from an arbitrary VCS repository? I want to be able to continue to import foomodule in other scripts. And if I modify the module's source code, will I need to perform any additional commands?
A:
Pip lets you install packages given a URL to a Subversion, git, Mercurial or bzr repository.
pip install -e svn+http://path_to_some_svn/repo#egg=package_name
Example:
pip install -e hg+https://rwilcox@bitbucket.org/ianb/cmdutils#egg=cmdutils
If I wanted to download the latest version of cmdutils. (Random package I decided to pull).
I installed this into a virtualenv (using the -E parameter), and pip installed cmdutls into a src folder at the top level of my virtualenv folder.
pip install -E thisIsATest -e hg+https://rwilcox@bitbucket.org/ianb/cmdutils#egg=cmdutils
$ ls thisIsATest/src
cmdutils
A:
Are you wanting to do development but have the developed version be handled as an egg by the system (for instance to get entry-points)? If so then you should check out the source and use Development Mode by doing:
python setup.py develop
If the project happens to not be a setuptools based project, which is required for the above, a quick work-around is this command:
python -c "import setuptools; execfile('setup.py')" develop
Almost everything you ever wanted to know about setuptools (the basis of easy_install) is available from the setuptools docs. Also there are docs for easy_install.
Development mode adds the project to your import path in the same way that easy_install does. Any changes you make will be available to your apps the next time they import the module.
As others mentioned, you can also directly use version control URLs if you just want to get the latest version as it is now without the ability to edit, but that will only take a snapshot, and indeed creates a normal egg as part of the process. I know for sure it does Subversion and I thought it did others but I can't find the docs on that.
A:
You can use the PYTHONPATH environment variable or symlink your code to somewhere in site-packages.
A:
Packages installed by easy_install tend to come from snapshots of the developer's version control, generally made when the developer releases an official version. You're therefore going to have to choose between convenient automatic downloads via easy_install and up-to-the-minute code updates via version control. If you pick the latter, you can build and install most packages seen in the python package index directly from a version control checkout by running python setup.py install.
If you don't like the default installation directory, you can install to a custom location instead, and export a PYTHONPATH environment variable whose value is the path of the installed package's parent folder.
| Best practice for installing python modules from an arbitrary VCS repository | I'm newish to the python ecosystem, and have a question about module editing.
I use a bunch of third-party modules, distributed on PyPi. Coming from a C and Java background, I love the ease of easy_install <whatever>. This is a new, wonderful world, but the model breaks down when I want to edit the newly installed module for two reasons:
The egg files may be stored in a folder or archive somewhere crazy on the file system.
Using an egg seems to preclude using the version control system of the originating project, just as using a debian package precludes development from an originating VCS repository.
What is the best practice for installing modules from an arbitrary VCS repository? I want to be able to continue to import foomodule in other scripts. And if I modify the module's source code, will I need to perform any additional commands?
| [
"Pip lets you install files gives a URL to the Subversion, git, Mercurial or bzr repository.\npip install -e svn+http://path_to_some_svn/repo#egg=package_name\n\nExample:\n pip install -e hg+https://rwilcox@bitbucket.org/ianb/cmdutils#egg=cmdutils\nIf I wanted to download the latest version of cmdutils. (Random package I decided to pull).\nI installed this into a virtualenv (using the -E parameter), and pip installed cmdutls into a src folder at the top level of my virtualenv folder.\npip install -E thisIsATest -e hg+https://rwilcox@bitbucket.org/ianb/cmdutils#egg=cmdutils\n\n$ ls thisIsATest/src\ncmdutils\n\n",
"Are you wanting to do development but have the developed version be handled as an egg by the system (for instance to get entry-points)? If so then you should check out the source and use Development Mode by doing:\npython setup.py develop\n\nIf the project happens to not be a setuptools based project, which is required for the above, a quick work-around is this command:\npython -c \"import setuptools; execfile('setup.py')\" develop\n\nAlmost everything you ever wanted to know about setuptools (the basis of easy_install) is available from the setuptools docs. Also there are docs for easy_install.\nDevelopment mode adds the project to your import path in the same way that easy_install does. Any changes you make will be available to your apps the next time they import the module.\nAs others mentioned, you can also directly use version control URLs if you just want to get the latest version as it is now without the ability to edit, but that will only take a snapshot, and indeed creates a normal egg as part of the process. I know for sure it does Subversion and I thought it did others but I can't find the docs on that.\n",
"You can use the PYTHONPATH environment variable or symlink your code to somewhere in site-packages.\n",
"Packages installed by easy_install tend to come from snapshots of the developer's version control, generally made when the developer releases an official version. You're therefore going to have to choose between convenient automatic downloads via easy_install and up-to-the-minute code updates via version control. If you pick the latter, you can build and install most packages seen in the python package index directly from a version control checkout by running python setup.py install.\nIf you don't like the default installation directory, you can install to a custom location instead, and export a PYTHONPATH environment variable whose value is the path of the installed package's parent folder.\n"
] | [
3,
2,
1,
1
] | [] | [] | [
"easy_install",
"module",
"python",
"version_control"
] | stackoverflow_0002986357_easy_install_module_python_version_control.txt |
Q:
How do I read EXIF data from an image without the use of external scripts in python?
How do I read EXIF data from an image without the use of external scripts in python? I don't want to use any prewritten scripts.
Thanks!
A:
You can use exif.py. If you don't want it to be an external module (it wouldn't necessarily be used as a script anyway), you can just copy all 1767 lines right inside your own module. If your objection is to "prewritten", you can study and then rewrite the 1767 lines in question, if you have a few days to waste, but I don't understand why you'd want to do that.
A:
I agree with Alex - the work has been done already! But if you just want to flex your brain muscle, this is where I would start:
EXIF Specifications
edit
Here is the official repository for the specification:
Japan Electronics and Information Technology Industries Association - Digital Cameras
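If you do go the from-scratch route, the first step per the JPEG/Exif specifications is walking the segment markers to locate the APP1 segment that carries the Exif payload. A minimal sketch of just that step; decoding the TIFF/IFD tag tables inside the payload is the real work, and is what exif.py's 1767 lines are mostly doing:

```python
import struct

def find_exif_segment(data):
    """Return the raw Exif payload (TIFF header + IFDs) from JPEG
    bytes, or None. A JPEG is a sequence of segments: a 0xFFxx
    marker followed by a 2-byte big-endian length that counts the
    length field itself plus the payload."""
    if data[:2] != b'\xff\xd8':                # SOI: start of image
        return None
    i = 2
    while i + 4 <= len(data):
        if data[i:i + 1] != b'\xff':
            return None                         # not at a marker: give up
        marker = data[i + 1:i + 2]
        if marker == b'\xda':                   # SOS: compressed data follows
            return None
        (length,) = struct.unpack('>H', data[i + 2:i + 4])
        payload = data[i + 4:i + 2 + length]
        if marker == b'\xe1' and payload.startswith(b'Exif\x00\x00'):
            return payload[6:]
        i += 2 + length
    return None
```

From the returned bytes you would then read the TIFF byte-order mark ('II' or 'MM') and follow the IFD offsets to the individual tags, as laid out in the Exif specification linked above.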
| How do I read EXIF data from an image without the use of external scripts in python? | How do I read EXIF data from an image without the use of external scripts in python? I don't want to use any prewritten scripts.
Thanks!
| [
"You can use exif.py. If you don't want it to be an external module (it wouldn't necessarily be used as a script anyway), you can just copy all 1767 lines right inside your own module. If your objection is to \"prewritten\", you can study and then rewrite the 1767 lines in question, if you have a few days to waste, but I don't understand why you'd want to do that.\n",
"I agree with Alex - the work has been done already! But if you just want to flex your brain muscle, this is where I would start: \nEXIF Specifications\nedit\nHere is the official repository for the specification:\nJapan Electronics and Information Technology Industries Association - Digital Cameras\n"
] | [
4,
2
] | [] | [] | [
"exif",
"python"
] | stackoverflow_0002985218_exif_python.txt |
Q:
Very simple python function spends long time in function and not subfunctions
I have spent many hours trying to figure out what is going on here.
The function 'grad_logp' in the code below is called many times in my program, and cProfile (with RunSnakeRun to visualize the results) reveals that the function grad_logp spends about .00004s 'locally' on every call, i.e. not in any functions it calls, and the function 'n' spends about .00006s locally on every call. Together these two times make up about 30% of the program time that I care about. It doesn't seem like this is function overhead, as other Python functions spend far less time 'locally', and merging 'grad_logp' and 'n' does not make my program faster, yet the operations these two functions do seem rather trivial. Does anyone have any suggestions on what might be happening?
Have I done something obviously inefficient? Am I misunderstanding how cProfile works?
def grad_logp(self, variable, calculation_set ):
    p = params(self.p,self.parents)
    return self.n(variable, self.p)

def n (self, variable, p ):
    gradient = self.gg(variable, p)
    return np.reshape(gradient, np.shape(variable.value))

def gg(self, variable, p):
    if variable is self:
        gradient = self._grad_logps['x']( x = self.value, **p)
    else:
        gradient = __builtin__.sum([self._pgradient(variable, parameter, value, p) for parameter, value in self.parents.iteritems()])
    return gradient
A:
Functions coded in C are not instrumented by profiling; so, for example, any time spent in sum (which you're spelling __builtin__.sum) will be charged to its caller. Not sure what np.reshape is, but if it's numpy.reshape, the same applies there.
A:
Your "many hours" might be better spent making your code less like a maze of twisty little passages and also documenting it.
The first method's arg calculation_set is NOT USED.
Then it does p = params(self.p,self.parents) but that p is NOT USED.
variable is self???
__builtin__.sum???
Get it firstly understandable, secondly correct. Then and only then, worry about the speed.
| Very simple python function spends long time in function and not subfunctions | I have spent many hours trying to figure out what is going on here.
The function 'grad_logp' in the code below is called many times in my program, and cProfile (with RunSnakeRun to visualize the results) reveals that the function grad_logp spends about .00004s 'locally' on every call, i.e. not in any functions it calls, and the function 'n' spends about .00006s locally on every call. Together these two times make up about 30% of the program time that I care about. It doesn't seem like this is function overhead, as other Python functions spend far less time 'locally', and merging 'grad_logp' and 'n' does not make my program faster, yet the operations these two functions do seem rather trivial. Does anyone have any suggestions on what might be happening?
Have I done something obviously inefficient? Am I misunderstanding how cProfile works?
def grad_logp(self, variable, calculation_set ):
    p = params(self.p,self.parents)
    return self.n(variable, self.p)

def n (self, variable, p ):
    gradient = self.gg(variable, p)
    return np.reshape(gradient, np.shape(variable.value))

def gg(self, variable, p):
    if variable is self:
        gradient = self._grad_logps['x']( x = self.value, **p)
    else:
        gradient = __builtin__.sum([self._pgradient(variable, parameter, value, p) for parameter, value in self.parents.iteritems()])
    return gradient
| [
"Functions coded in C are not instrumented by profiling; so, for example, any time spent in sum (which you're spelling __builtin__.sum) will be charged to its caller. Not sure what np.reshape is, but if it's numpy.reshape, the same applies there.\n",
"Your \"many hours\" might be better spent making your code less like a maze of twisty little passages and also documenting it.\nThe first method's arg calculation_set is NOT USED.\nThen it does p = params(self.p,self.parents) but that p is NOT USED.\nvariable is self???\n__builtin__.sum???\nGet it firstly understandable, secondly correct. Then and only then, worry about the speed.\n"
] | [
3,
1
] | [] | [] | [
"performance",
"python"
] | stackoverflow_0002986372_performance_python.txt |
Q:
Python C API from C++ app - know when to lock
I am trying to write a C++ class that calls Python methods of a class that performs some I/O operations (file, stdout). The problem I have run into is that my class is called from different threads: sometimes the main thread, sometimes different others. Obviously I tried to apply the approach for Python calls in multi-threaded native applications. Basically everything starts from PyEval_AcquireLock and PyEval_ReleaseLock or just global locks. According to the documentation here, when a thread is already locked a deadlock ensues. When my class is called from the main thread or another one that blocks Python execution, I get a deadlock.
Python> Cfunc1() - C++ func that creates threads internally which lead to calls in "my class",
It gets stuck on PyEval_AcquireLock; obviously Python is already locked, i.e. waiting for the C++ Cfunc1 call to complete... It completes fine if I omit those locks. It also completes fine when the Python interpreter is ready for the next user command, i.e. when the thread is calling funcs in the background, not inside of a native call.
I am looking for a workaround. I need to determine whether the global lock is available, i.e. Python is not locked and is ready to receive the next command... I tried PyGILState_Ensure; unfortunately I see a hang.
Any known API or solution for this?
(Python 2.4)
A:
Unless you have wrapped your C++ code quite peculiarly, when any Python thread calls into your C++ code, the GIL is held. You may release it in your C++ code (if you want to do some consuming task that doesn't require any Python interaction), and then will have to acquire it again when you want to do any Python interaction -- see the docs: if you're just using the good old C API, there are macros for that, and the recommended idiom is
Py_BEGIN_ALLOW_THREADS
...Do some blocking I/O operation...
Py_END_ALLOW_THREADS
the docs explain:
The Py_BEGIN_ALLOW_THREADS macro opens
a new block and declares a hidden
local variable; the
Py_END_ALLOW_THREADS macro closes the
block. Another advantage of using
these two macros is that when Python
is compiled without thread support,
they are defined empty, thus saving
the thread state and GIL
manipulations.
So you just don't have to acquire the GIL (and shouldn't) until after you've explicitly released it (ideally with that macro) and need to interact with Python in any way again. (Where the docs say "some blocking I/O operation", it could actually be any long-running operation with no Python interaction whatsoever).
| Python C API from C++ app - know when to lock | I am trying to write a C++ class that calls Python methods of a class that does some I/O operations (file, stdout) at once. The problem I have ran into is that my class is called from different threads: sometimes main thread, sometimes different others. Obviously I tried to apply the approach for Python calls in multi-threaded native applications. Basically everything starts from PyEval_AcquireLock and PyEval_ReleaseLock or just global locks. According to the documentation here when a thread is already locked a deadlock ensues. When my class is called from the main thread or other one that blocks Python execution I have a deadlock.
Python> Cfunc1() - C++ func that creates threads internally which lead to calls in "my class",
It stuck on PyEval_AcquireLock, obviously the Python is already locked, i.e. waiting for C++ Cfunc1 call to complete... It completes fine if I omit those locks. Also it completes fine when Python interpreter is ready for the next user command, i.e. when thread is calling funcs in the background - not inside of a native call
I am looking for a workaround. I need to distinguish whether or not the global lock is allowed, i.e. Python is not locked and ready to receive the next command... I tried PyGIL_Ensure, unfortunately I see hang.
Any known API or solution for this ?
(Python 2.4)
| [
"Unless you have wrapped your C++ code quite peculiarly, when any Python thread calls into your C++ code, the GIL is held. You may release it in your C++ code (if you want to do some consuming task that doesn't require any Python interaction), and then will have to acquire it again when you want to do any Python interaction -- see the docs: if you're just using the good old C API, there are macros for that, and the recommended idiom is\nPy_BEGIN_ALLOW_THREADS\n...Do some blocking I/O operation...\nPy_END_ALLOW_THREADS\n\nthe docs explain:\n\nThe Py_BEGIN_ALLOW_THREADS macro opens\n a new block and declares a hidden\n local variable; the\n Py_END_ALLOW_THREADS macro closes the\n block. Another advantage of using\n these two macros is that when Python\n is compiled without thread support,\n they are defined empty, thus saving\n the thread state and GIL\n manipulations.\n\nSo you just don't have to acquire the GIL (and shouldn't) until after you've explicitly released it (ideally with that macro) and need to interact with Python in any way again. (Where the docs say \"some blocking I/O operation\", it could actually be any long-running operation with no Python interaction whatsoever).\n"
] | [
2
] | [] | [] | [
"c++",
"multithreading",
"python",
"python_2.4"
] | stackoverflow_0002986547_c++_multithreading_python_python_2.4.txt |
Q:
python code to retrieve url from mozilla
Is there any possibility to retrieve the currently open URL in Mozilla Firefox using Python?
A:
You could write an extension in python and access the browser that way.
| python code to retrieve url from mozilla | is it any possibilities to retrieve currently open url in Mozilla firefox using python
| [
"You could write an extension in python and access the browser that way.\n"
] | [
0
] | [] | [] | [
"firefox",
"python",
"url"
] | stackoverflow_0002986710_firefox_python_url.txt |
Q:
Need some help converting a MAC address to binary data for use in an ethernet frame
I've been looking over various internet postings and a lot of the code I've seen looks similar to this:
def mactobinar(mac):
addr = ''
temp = mac.replace(':', '')
for i in range(0, len(temp), 2):
addr = ''.join([addr, struct.pack('B', int(temp[i: i + 2], 16))])
return addr
Can someone explain how this code works?
A:
Why do people insist on writing all that?
def mactobinar(mac):
return binascii.unhexlify(mac.replace(':', ''))
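As a quick sanity check of that one-liner (Python 3 shown here; on Python 2 the result is a plain str rather than bytes):

```python
import binascii

def mactobinar(mac):
    # Drop the colon separators, then decode the remaining hex digits
    # two at a time into raw bytes.
    return binascii.unhexlify(mac.replace(':', ''))

packed = mactobinar('01:23:45:67:89:ab')
print(len(packed))  # 6 bytes, one per hex pair
```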
A:
7.3. struct — Interpret strings as packed binary data. That'd be a good place to start.
A:
OK, I'm not really the best at Python, but I'll give it a shot.
When the MAC address is passed into mactobinar, the first thing that happens is that the colons are removed, leaving one continuous string without any delimiters.
So 01:23:45:67:89:ab becomes 0123456789ab
In the next part we're looping through a range; that range produces the offsets into the string.
So range(0, len(temp), 2) returns a list of offsets, following the pattern range(start, stop, step): here 0, 2, 4, 6, 8, 10.
Then for every offset in that list we're converting the two-character hex pair to an integer, packing it into a single byte using struct.pack, and joining the bytes together.
Your version
struct.pack('B', int(temp[i: i + 2], 16))
Documented version
struct.pack(fmt, v1, v2, ...)
pack converts the given values into their packed binary format.
Hope this gives you some insight into what's going on here.
Here are some items to get you started:
http://docs.python.org/library/struct.html#format-characters
http://docs.python.org/library/struct.html#struct.pack
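To tie the explanation together, here is the questioner's loop with its stray parenthesis removed, annotated step by step (shown for Python 3, where the accumulator must be bytes):

```python
import struct

def mactobinar(mac):
    addr = b''
    temp = mac.replace(':', '')        # '01:23:45:67:89:ab' -> '0123456789ab'
    for i in range(0, len(temp), 2):   # offsets 0, 2, 4, 6, 8, 10
        # int(..., 16) parses one two-digit hex pair; 'B' packs that
        # integer as a single unsigned byte.
        addr += struct.pack('B', int(temp[i:i + 2], 16))
    return addr

print(mactobinar('01:23:45:67:89:ab'))
```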
| Need some help converting a MAC address to binary data for use in an ethernet frame | I've been looking over various internet postings and a lot of the code I've seen looks similar to this:
def mactobinar(mac):
addr = ''
temp = mac.replace(':', '')
for i in range(0, len(temp), 2):
addr = ''.join([addr, struct.pack('B', int(temp[i: i + 2], 16))])
return addr
Can someone explain how this code works?
| [
"Why do people insist on writing all that?\ndef mactobinar(mac):\n return binascii.unhexlify(mac.replace(':', ''))\n\n",
"7.3. struct — Interpret strings as packed binary data. That'd be a good place to start.\n",
"Ok im not really the best at pythen but ill give it a shot.\nwhen the mac address is passed into mactobinar the first thing that happens is that your removing the semi colon to make a constant string without any delimiters.\nSo 01:23:45:67:89:ab becomes 0123456789ab\nOk in the next part were looping threw a range, this range here is creating a offset range.\nSo range(0, len(temp), 2) returns an array with the ranges like range(start,max,steps);\nthen for every value in that array were creating a binary for that integer using struct.pack and also joining it together\nYour version\nstruct.pack('B', int(temp[i: i + 2], 16)))\nDocumanted version\nstruct.pack(fmt, v1, v2, ...)\npack converts an entity into its binary format.\nhope this gives you some insight on whats going here\nHere are some items to get you started:\nhttp://docs.python.org/library/struct.html#format-characters\nhttp://docs.python.org/library/struct.html#struct.pack\n"
] | [
4,
1,
0
] | [] | [] | [
"ethernet",
"networking",
"python"
] | stackoverflow_0002986702_ethernet_networking_python.txt |
Q:
Directory ignored by "setup.py"
The Selenium setup.py can be found at http://code.google.com/p/selenium/source/browse/trunk/setup.py.
When running "python setup.py sdist" the "firefox/test/py" directory
is ignored for some reason though it's
mentioned in the "package_dir" and in "packages".
Any ideas why it's ignored?
A:
That directory is ignored because it is not in the MANIFEST.
More info - http://docs.python.org/distutils/sourcedist.html
| Directory ignored by "setup.py" | The Selenium setup.py can be found at http://code.google.com/p/selenium/source/browse/trunk/setup.py.
When running "python setup.py sdist" the "firefox/test/py" directory
is ignored for some reason though it's
mentioned in the "package_dir" and in "packages".
Any ideas why it's ignored?
| [
"That directory is ignored because it is not in the MANIFEST.\nMore info - http://docs.python.org/distutils/sourcedist.html\n"
] | [
1
] | [] | [] | [
"distutils",
"python"
] | stackoverflow_0002984481_distutils_python.txt |
Q:
Counting entries in a list of dictionaries: for loop vs. list comprehension with map(itemgetter)
In a Python program I'm writing I've compared using a for loop and increment variables versus a list comprehension with map(itemgetter) and len() when counting entries in dictionaries which are in a list. It takes the same time using each method. Am I doing something wrong, or is there a better approach?
Here is a greatly simplified and shortened data structure:
list = [
{'key1': True, 'dontcare': False, 'ignoreme': False, 'key2': True, 'filenotfound': 'biscuits and gravy'},
{'key1': False, 'dontcare': False, 'ignoreme': False, 'key2': True, 'filenotfound': 'peaches and cream'},
{'key1': True, 'dontcare': False, 'ignoreme': False, 'key2': False, 'filenotfound': 'Abbott and Costello'},
{'key1': False, 'dontcare': False, 'ignoreme': True, 'key2': False, 'filenotfound': 'over and under'},
{'key1': True, 'dontcare': True, 'ignoreme': False, 'key2': True, 'filenotfound': 'Scotch and... well... neat, thanks'}
]
Here is the for loop version:
#!/usr/bin/env python
# Python 2.6
# count the entries where key1 is True
# keep a separate count for the subset that also have key2 True
key1 = key2 = 0
for dictionary in list:
if dictionary["key1"]:
key1 += 1
if dictionary["key2"]:
key2 += 1
print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)
Output for the data above:
Counts: key1: 3, subset key2: 2
Here is the other, perhaps more Pythonic, version:
#!/usr/bin/env python
# Python 2.6
# count the entries where key1 is True
# keep a separate count for the subset that also have key2 True
from operator import itemgetter
KEY1 = 0
KEY2 = 1
getentries = itemgetter("key1", "key2")
entries = map(getentries, list)
key1 = len([x for x in entries if x[KEY1]])
key2 = len([x for x in entries if x[KEY1] and x[KEY2]])
print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)
Output for the data above (same as before):
Counts: key1: 3, subset key2: 2
I'm a tiny bit surprised these take the same amount of time. I wonder if there's something faster. I'm sure I'm overlooking something simple.
One alternative I've considered is loading the data into a database and doing SQL queries, but the data doesn't need to persist and I'd have to profile the overhead of the data transfer, etc., and a database may not always be available.
I have no control over the original form of the data.
The code above is not going for style points.
A:
I think you're measuring incorrectly by swamping the code to be measured in a lot of overhead (running at top module level instead of in a function, doing output). Putting the two snippets into functions named forloop and withmap, and adding a * 100 to the list's definition (after the closing ]) to make the measurement a little substantial, I see, on my slow laptop:
$ py26 -mtimeit -s'import co' 'co.forloop()'
10000 loops, best of 3: 202 usec per loop
$ py26 -mtimeit -s'import co' 'co.withmap()'
10 loops, best of 3: 601 usec per loop
i.e., the allegedly "more pythonic" approach with map is three times slower than the plain for approach -- which tells you that it's not really "more pythonic";-).
The mark of good Python is simplicity, which, to me, recommends what I hubris-ly named...:
def thebest():
entries = [d['key2'] for d in list if d['key1']]
return len(entries), sum(entries)
which, on measurement, saves between 10% and 20% time over the forloop approach.
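A quick check of thebest on data shaped like the question's (trimmed to the two keys that matter) shows why the trick works: True counts as 1 under sum(), so one pass yields both counts:

```python
data = [
    {'key1': True,  'key2': True},
    {'key1': False, 'key2': True},
    {'key1': True,  'key2': False},
    {'key1': False, 'key2': False},
    {'key1': True,  'key2': True},
]

def thebest(rows):
    # Keep key2's value for each row where key1 is truthy:
    # len() counts the key1 hits, sum() counts the key2 subset.
    entries = [d['key2'] for d in rows if d['key1']]
    return len(entries), sum(entries)

print(thebest(data))  # (3, 2), matching the counts in the question
```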
| Counting entries in a list of dictionaries: for loop vs. list comprehension with map(itemgetter) | In a Python program I'm writing I've compared using a for loop and increment variables versus a list comprehension with map(itemgetter) and len() when counting entries in dictionaries which are in a list. It takes the same time using each method. Am I doing something wrong, or is there a better approach?
Here is a greatly simplified and shortened data structure:
list = [
{'key1': True, 'dontcare': False, 'ignoreme': False, 'key2': True, 'filenotfound': 'biscuits and gravy'},
{'key1': False, 'dontcare': False, 'ignoreme': False, 'key2': True, 'filenotfound': 'peaches and cream'},
{'key1': True, 'dontcare': False, 'ignoreme': False, 'key2': False, 'filenotfound': 'Abbott and Costello'},
{'key1': False, 'dontcare': False, 'ignoreme': True, 'key2': False, 'filenotfound': 'over and under'},
{'key1': True, 'dontcare': True, 'ignoreme': False, 'key2': True, 'filenotfound': 'Scotch and... well... neat, thanks'}
]
Here is the for loop version:
#!/usr/bin/env python
# Python 2.6
# count the entries where key1 is True
# keep a separate count for the subset that also have key2 True
key1 = key2 = 0
for dictionary in list:
if dictionary["key1"]:
key1 += 1
if dictionary["key2"]:
key2 += 1
print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)
Output for the data above:
Counts: key1: 3, subset key2: 2
Here is the other, perhaps more Pythonic, version:
#!/usr/bin/env python
# Python 2.6
# count the entries where key1 is True
# keep a separate count for the subset that also have key2 True
from operator import itemgetter
KEY1 = 0
KEY2 = 1
getentries = itemgetter("key1", "key2")
entries = map(getentries, list)
key1 = len([x for x in entries if x[KEY1]])
key2 = len([x for x in entries if x[KEY1] and x[KEY2]])
print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)
Output for the data above (same as before):
Counts: key1: 3, subset key2: 2
I'm a tiny bit surprised these take the same amount of time. I wonder if there's something faster. I'm sure I'm overlooking something simple.
One alternative I've considered is loading the data into a database and doing SQL queries, but the data doesn't need to persist and I'd have to profile the overhead of the data transfer, etc., and a database may not always be available.
I have no control over the original form of the data.
The code above is not going for style points.
| [
"I think you're measuring incorrectly by swamping the code to be measured in a lot of overhead (running at top module level instead of in a function, doing output). Putting the two snippets into functions named forloop and withmap, and adding a * 100 to the list's definition (after the closing ]) to make the measurement a little substantial, I see, on my slow laptop:\n$ py26 -mtimeit -s'import co' 'co.forloop()'\n10000 loops, best of 3: 202 usec per loop\n$ py26 -mtimeit -s'import co' 'co.withmap()'\n10 loops, best of 3: 601 usec per loop\n\ni.e., the allegedly \"more pythonic\" approach with map is three times slower than the plain for approach -- which tells you that it's not really \"more pythonic\";-).\nThe mark of good Python is simplicity, which, to me, recommends what I hubris-ly named...:\ndef thebest():\n entries = [d['key2'] for d in list if d['key1']]\n return len(entries), sum(entries)\n\nwhich, on measurement, saves between 10% and 20% time over the forloop approach.\n"
] | [
12
] | [] | [] | [
"dictionary",
"list_comprehension",
"loops",
"map",
"python"
] | stackoverflow_0002986929_dictionary_list_comprehension_loops_map_python.txt |
Q:
need help in site classification
I have to crawl the contents of several blogs. The problem is that I need to classify whether the blogs' authors are from a specific school and are talking about the school's stuff. May I know the best approach to the crawling, and how should I go about the classification?
A:
If you're looking for a good Python web scraper, this question seems to have all the information you're looking for.
As for classifying whether the blog is discussing the school's stuff, that's a much trickier problem. I doubt you'll get away from having to have the results reviewed by humans. A really sophisticated scraper would use probabilistic filters--train it on blog posts which do and don't discuss the school, and let it infer the rules itself. That's fairly sophisticated, however, and from the question I'm guessing you want quick-and-dirty. I'd just put together a list of keywords, and review (and refine) the results until it's close enough to what you want.
As for identifying the authors, this is the Internet, where no one knows whether or not you're a dog (or, by extension, what school you attended). If you had a list of authors to look for you could always use them as part of the keyword search, but if the authors choose not to identify themselves (or, worse, identify themselves as someone else) there's no practical way to do it.
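A quick-and-dirty keyword filter of the kind suggested above might look like the sketch below; the keyword set and hit threshold are made-up placeholders you would refine by reviewing the results:

```python
# Hypothetical terms for the school in question -- replace with real ones.
SCHOOL_KEYWORDS = {'springfield high', 'principal skinner', 'varsity', 'homecoming'}

def mentions_school(text, keywords=SCHOOL_KEYWORDS, threshold=2):
    # Require several distinct keyword hits so a single stray word
    # doesn't produce a false positive.
    lowered = text.lower()
    hits = sum(1 for kw in keywords if kw in lowered)
    return hits >= threshold

post = "The varsity team met Principal Skinner before homecoming."
print(mentions_school(post))  # True
```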
A:
Web scraping is one problem. Handling classification is a whole field.
You really have two choices: hire someone who knows how to do it or figure it out. For figuring it out, I strongly recommend the Programming Collective Intelligence book. The examples are in Python, use real world APIs, and invite hacking around to find solutions. Each chapter handles one part of the collective intelligence world, e.g., grouping or classifying, walks through some basics, and provides plenty of references for more information. It might be a good idea to skim the book even if you decide to hire an expert.
| need help in site classification | I have to crawl the contents of several blogs. The problem is that I need to classify whether the blogs the authors are from a specific school and is talking about the school's stuff. May i know what's the best approach in doing the crawling or how should i go about the classification?
| [
"If you're looking for a good Python web scraper, this question seems to have all the information you're looking for.\nAs for classifying whether the blog is discussing the school's stuff, that's a much trickier problem. I doubt you'll get away from having to have the results reviewed by humans. A really sophisticated scraper would use probabilistic filters--train it on blog posts which do and don't discuss the school, and let it infer the rules itself. That's fairly sophisticated, however, and from the question I'm guessing you want quick-and-dirty. I'd just put together a list of keywords, and review (and refine) the results until it's close enough to what you want.\nAs for identifying the authors, this is the Internet, where no one knows whether or not you're a dog (or, by extension, what school you attended). If you had a list of authors to look for you could always use them as part of the keyword search, but if the authors choose not to identify themselves (or, worse, identify themselves as someone else) there's no practical way to do it. \n",
"Web scraping is one problem. Handling classification is a whole field.\nYou really have two choices: hire someone who knows how to do it or figure it out. For figuring it out, I strongly recommend the Programming Collective Intelligence book. The examples are in Python, use real world APIs, and invite hacking around to find solutions. Each chapter handles one part of the collective intelligence world, e.g., grouping or classifying, walks through some basics, and provides plenty of references for more information. It might be a good idea to skim the book even if you decide to hire an expert.\n"
] | [
1,
1
] | [] | [] | [
"python",
"web_crawler"
] | stackoverflow_0002986963_python_web_crawler.txt |
Q:
In Python, how to use a C++ function which returns an allocated array of structs via a ** parameter?
I'd like to use some existing C++ code, NvTriStrip, in a Python tool.
SWIG easily handles the functions with simple parameters, but the main function, GenerateStrips, is much more complicated.
What do I need to put in the SWIG interface file to indicate that primGroups is really an output parameter and that it must be cleaned up with delete[]?
///////////////////////////////////////////////////////////////////////////
// GenerateStrips()
//
// in_indices: input index list, the indices you would use to render
// in_numIndices: number of entries in in_indices
// primGroups: array of optimized/stripified PrimitiveGroups
// numGroups: number of groups returned
//
// Be sure to call delete[] on the returned primGroups to avoid leaking mem
//
bool GenerateStrips( const unsigned short* in_indices,
const unsigned int in_numIndices,
PrimitiveGroup** primGroups,
unsigned short* numGroups,
bool validateEnabled = false );
FYI, here is the PrimitiveGroup declaration:
enum PrimType
{
PT_LIST,
PT_STRIP,
PT_FAN
};
struct PrimitiveGroup
{
PrimType type;
unsigned int numIndices;
unsigned short* indices;
PrimitiveGroup() : type(PT_STRIP), numIndices(0), indices(NULL) {}
~PrimitiveGroup()
{
if(indices)
delete[] indices;
indices = NULL;
}
};
A:
Have you looked at the documentation of SWIG regarding their "cpointer.i" and "carray.i" libraries? They're found here. That's how you have to manipulate things unless you want to create your own utility libraries to accompany the wrapped code. Here's the link to the Python handling of pointers with SWIG.
Onto your question on getting it to recognize input versus output. They've got another section in the documentation here, that describes exactly that. You lable things OUTPUT in the *.i file. So in your case you'd write:
%inline %{
extern bool GenerateStrips( const unsigned short* in_indices,
                            const unsigned int in_numIndices,
                            PrimitiveGroup** OUTPUT,
                            unsigned short* numGroups,
                            bool validateEnabled = false );
%}
which gives you a function that returns both the bool and the PrimitiveGroup* array as a tuple.
Does that help?
A:
It's actually so easy to make python bindings for things directly that I don't know why people bother with confusing wrapper stuff like SWIG.
Just use Py_BuildValue once per element of the outer array, producing one tuple per row. Store those tuples in a C array. Then call PyList_New and PyList_SetSlice to generate a list of tuples, and return the list pointer from your C function.
A:
I don't know how to do it with SWIG, but you might want to consider moving to a more modern binding system like Pyrex or Cython.
For example, Pyrex gives you access to C++ delete for cases like this. Here's an excerpt from the documentation:
Disposal
The del statement can be applied to a pointer to a C++ struct
to deallocate it. This is equivalent to delete in C++.
cdef Shrubbery *big_sh
big_sh = new Shrubbery(42.0)
display_in_garden_show(big_sh)
del big_sh
http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/version/Doc/Manual/using_with_c++.html
| In Python, how to use a C++ function which returns an allocated array of structs via a ** parameter? | I'd like to use some existing C++ code, NvTriStrip, in a Python tool.
SWIG easily handles the functions with simple parameters, but the main function, GenerateStrips, is much more complicated.
What do I need to put in the SWIG interface file to indicate that primGroups is really an output parameter and that it must be cleaned up with delete[]?
///////////////////////////////////////////////////////////////////////////
// GenerateStrips()
//
// in_indices: input index list, the indices you would use to render
// in_numIndices: number of entries in in_indices
// primGroups: array of optimized/stripified PrimitiveGroups
// numGroups: number of groups returned
//
// Be sure to call delete[] on the returned primGroups to avoid leaking mem
//
bool GenerateStrips( const unsigned short* in_indices,
const unsigned int in_numIndices,
PrimitiveGroup** primGroups,
unsigned short* numGroups,
bool validateEnabled = false );
FYI, here is the PrimitiveGroup declaration:
enum PrimType
{
PT_LIST,
PT_STRIP,
PT_FAN
};
struct PrimitiveGroup
{
PrimType type;
unsigned int numIndices;
unsigned short* indices;
PrimitiveGroup() : type(PT_STRIP), numIndices(0), indices(NULL) {}
~PrimitiveGroup()
{
if(indices)
delete[] indices;
indices = NULL;
}
};
| [
"Have you looked at the documentation of SWIG regarding their \"cpointer.i\" and \"carray.i\" libraries? They're found here. That's how you have to manipulate things unless you want to create your own utility libraries to accompany the wrapped code. Here's the link to the Python handling of pointers with SWIG.\nOnto your question on getting it to recognize input versus output. They've got another section in the documentation here, that describes exactly that. You lable things OUTPUT in the *.i file. So in your case you'd write:\n%inline{\nextern bool GenerateStrips( const unsigned short* in_dices,\n const unsigned short* in_numIndices,\n PrimitiveGroup** OUTPUT,\n unsigned short* numGroups,\n bool validated );\n%}\n\nwhich gives you a function that returns both the bool and the PrimitiveGroup* array as a tuple.\nDoes that help?\n",
"It's actually so easy to make python bindings for things directly that I don't know why people bother with confusing wrapper stuff like SWIG.\nJust use Py_BuildValue once per element of the outer array, producing one tuple per row. Store those tuples in a C array. Then Call PyList_New and PyList_SetSlice to generate a list of tuples, and return the list pointer from your C function.\n",
"I don't know how to do it with SWIG, but you might want to consider moving to a more modern binding system like Pyrex or Cython.\nFor example, Pyrex gives you access to C++ delete for cases like this. Here's an excerpt from the documentation:\n\nDisposal\nThe del statement can be applied to a pointer to a C++ struct\n to deallocate it. This is equivalent to delete in C++.\ncdef Shrubbery *big_sh\nbig_sh = new Shrubbery(42.0)\ndisplay_in_garden_show(big_sh)\ndel big_sh\n\n\nhttp://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/version/Doc/Manual/using_with_c++.html\n"
] | [
2,
2,
1
] | [] | [] | [
"python",
"swig"
] | stackoverflow_0002897717_python_swig.txt |
Q:
Is it possible to post binaries to usenet with Python?
I'm trying to use the nntplib that comes with python to make some posts to usenet. However I can't figure out how to post binary files using the .post method.
I can post plain text files just fine, but not binary files. any ideas?
-- EDIT--
So thanks to Adrian's comment below I've managed to make one step towards my goal.
I now use the email library to make a multipart message and attach the binary files to the message. However I can't seem to figure out how to pass that message directly to the nntplib post method.
I have to first write a temporary file, then read it back into the nntplib method. There has to be a way to do this all in memory... Any suggestions?
A:
You have to MIME-encode your post: a binary post in an NNTP newsgroup is like a mail with an attachment.
The file has to be encoded in ASCII, generally using the base64 encoding; then the encoded file is packaged into a multipart MIME message and posted...
have a look at the email module: it implements all that you want.
I encourage you to read RFC 3977, which is the official standard defining the NNTP protocol.
for the second part of your question:
use StringIO to build a fake file object from a string (the post() method of nntplib accepts open file objects).
email.Message objects have an as_string() method to retrieve the content of the message as a plain string.
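Putting those two hints together, everything can stay in memory: build the multipart message with the email package, serialize it with as_string(), and wrap the result in a file-like object for nntplib's post(). The sketch below uses modern Python 3 spellings (io.BytesIO instead of Python 2's StringIO) and placeholder headers, and leaves the actual post() call commented out:

```python
import io
from email.mime.multipart import MIMEMultipart
from email.mime.application import MIMEApplication

msg = MIMEMultipart()
msg['From'] = 'poster@example.com'        # placeholder headers
msg['Newsgroups'] = 'alt.binaries.test'
msg['Subject'] = 'binary test post'
msg.attach(MIMEApplication(b'\x00\x01\x02 binary payload', Name='payload.bin'))

# as_string() base64-encodes the attachment, so the serialized
# message is ASCII-safe and can be wrapped in an in-memory file.
fake_file = io.BytesIO(msg.as_string().encode('ascii'))

# nntplib.NNTP('news.example.com').post(fake_file)  # would post it for real
print(b'base64' in fake_file.getvalue())
```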
| Is it possible to post binaries to usenet with Python? | I'm trying to use the nntplib that comes with python to make some posts to usenet. However I can't figure out how to post binary files using the .post method.
I can post plain text files just fine, but not binary files. any ideas?
-- EDIT--
So thanks to Adrian's comment below I've managed to make one step towards my goal.
I now use the email library to make a multipart message and attach the binary files to the message. However I can't seem to figure out how to pass that message directly to the nttplib post method.
I have to first write a temporary file, then read it back in to the nttplib method. There has to be a way to do this all in memory....any suggestions?
| [
"you have to MIME-encode your post: a binary post in an NNTP newsgroup is like a mail with an attachment.\nthe file has to be encoded in ASCII, generally using the base64 encoding, then the encoded file is packaged iton a multipart MIME message and posted...\nhave a look at the email module: it implements all that you want.\ni encourage you to read RFC3977 which is the official standard defining the NNTP protocol.\nfor the second part of your question:\nuse StringIO to build a fake file object from a string (the post() method of nntplib accepts open file objects).\nemail.Message objects have a as_string() method to retrieve the content of the message as a plain string.\n"
] | [
3
] | [] | [] | [
"nntp",
"python"
] | stackoverflow_0002987255_nntp_python.txt |
Q:
Trouble with copying dictionaries and using deepcopy on an SQLAlchemy ORM object
I'm doing a Simulated Annealing algorithm to optimise a given allocation of students and projects.
This is language-agnostic pseudocode from Wikipedia:
s ← s0; e ← E(s) // Initial state, energy.
sbest ← s; ebest ← e // Initial "best" solution
k ← 0 // Energy evaluation count.
while k < kmax and e > emax // While time left & not good enough:
snew ← neighbour(s) // Pick some neighbour.
enew ← E(snew) // Compute its energy.
if enew < ebest then // Is this a new best?
sbest ← snew; ebest ← enew // Save 'new neighbour' to 'best found'.
if P(e, enew, temp(k/kmax)) > random() then // Should we move to it?
s ← snew; e ← enew // Yes, change state.
k ← k + 1 // One more evaluation done
return sbest // Return the best solution found.
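The pseudocode above maps almost line for line onto Python. The toy version below is only meant to show the control flow; the energy, neighbour, and temperature functions are stand-ins for the allocation-specific ones described later:

```python
import math
import random

def anneal(s0, energy, neighbour, kmax=1000, seed=42):
    rng = random.Random(seed)
    s, e = s0, energy(s0)                  # initial state and energy
    sbest, ebest = s, e                    # best solution seen so far
    for k in range(kmax):
        temp = max(1e-9, 1.0 - k / kmax)   # simple linear cooling schedule
        snew = neighbour(s, rng)
        enew = energy(snew)
        if enew < ebest:                   # new best? save it
            sbest, ebest = snew, enew
        # Metropolis acceptance: always take improvements, and sometimes
        # take worse moves so the search can escape local minima.
        if enew < e or rng.random() < math.exp((e - enew) / temp):
            s, e = snew, enew
    return sbest, ebest

# Minimize a one-dimensional bowl with integer +/-1 moves.
best, ebest = anneal(
    s0=40,
    energy=lambda x: (x - 3) ** 2,
    neighbour=lambda x, rng: x + rng.choice([-1, 1]),
)
print(best, ebest)
```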
The following is an adaptation of the technique. My supervisor said the idea is fine in theory.
First I pick up some allocation (i.e. an entire dictionary of students and their allocated projects, including the ranks for the projects) from entire set of randomised allocations, copy it and pass it to my function. Let's call this allocation aOld (it is a dictionary). aOld has a weight related to it called wOld. The weighting is described below.
The function does the following:
Let this allocation, aOld be the best_node
From all the students, pick a random number of students and stick in a list
Strip (DEALLOCATE) them of their projects ++ reflect the changes for projects (allocated parameter is now False) and lecturers (free up slots if one or more of their projects are no longer allocated)
Randomise that list
Try assigning (REALLOCATE) everyone in that list projects again
Calculate the weight (add up ranks, rank 1 = 1, rank 2 = 2... and no project rank = 101)
For this new allocation aNew, if the weight wNew is smaller than the allocation weight wOld I picked up at the beginning, then this is the best_node (as defined by the Simulated Annealing algorithm above). Apply the algorithm to aNew and continue.
If wOld < wNew, then apply the algorithm to aOld again and continue.
The allocations/data-points are expressed as "nodes" such that a node = (weight, allocation_dict, projects_dict, lecturers_dict)
Right now, I can only perform this algorithm once, but I'll need to try for a number N (denoted by kmax in the Wikipedia snippet) and make sure I always have with me, the previous node and the best_node.
So that I don't modify my original dictionaries (which I might want to reset to), I've done a shallow copy of the dictionaries. From what I've read in the docs, it seems that it only copies the references and since my dictionaries contain objects, changing the copied dictionary ends up changing the objects anyway. So I tried to use copy.deepcopy(). These dictionaries refer to objects that have been mapped with SQLA.
Questions:
I've been given some solutions to the problems faced but due to my über green-ness with using Python, they all sound rather cryptic to me.
Deepcopy isn't playing nicely with SQLA. I've been told that deepcopy on ORM objects probably has issues that prevent it from working as you'd expect. Apparently I'd be better off "building copy constructors, i.e. def copy(self): return FooBar(....)." Can someone please explain what that means?
I checked and found out that deepcopy has issues because SQLAlchemy places extra information on your objects, i.e. an _sa_instance_state attribute, that I wouldn't want in the copy but is necessary for the object to have. I've been told: "There are ways to manually blow away the old _sa_instance_state and put a new one on the object, but the most straightforward is to make a new object with __init__() and set up the attributes that are significant, instead of doing a full deep copy." What exactly does that mean? Do I create a new, unmapped class similar to the old, mapped one?
An alternate solution is that I'd have to "implement __deepcopy__() on your objects and ensure that a new _sa_instance_state is set up, there are functions in sqlalchemy.orm.attributes which can help with that." Once again this is beyond me so could someone kindly explain what it means?
A more general question: given the above information are there any suggestions on how I can maintain the information/state for the best_node (which must always persist through my while loop) and the previous_node, if my actual objects (referenced by the dictionaries, therefore the nodes) are changing due to the deallocation/reallocation taking place? That is, without using copy?
A:
I have another possible solution: use transactions. This probably still isn't the best solution but implementing it should be faster.
Firstly create your session like this:
# transactional session
Session = sessionmaker(transactional=True)
sess = Session()
That way it will be transactional. The way transactions work is that sess.commit() will make your changes permanent while sess.rollback() will revert them.
In the case of simulated annealing you want to commit when you find a new best solution. At any later point, you can invoke rollback() to revert the status back to that position.
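The commit-on-best / rollback-otherwise pattern can be sketched like this. This uses the standard-library sqlite3 module as a stand-in for the SQLAlchemy session, and the allocation table and weights are made up for illustration:

```python
import sqlite3

# Hypothetical allocation table; sqlite3 stands in for the SQLAlchemy session.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE allocation (student TEXT, project TEXT)")
conn.execute("INSERT INTO allocation VALUES ('alice', 'p1')")
conn.commit()  # this is the best known state so far
best_weight = 3

# Try a neighbouring state...
conn.execute("UPDATE allocation SET project = 'p2' WHERE student = 'alice'")
new_weight = 5

if new_weight < best_weight:
    conn.commit()    # new best: make the changes permanent
else:
    conn.rollback()  # worse: revert to the last committed (best) state

project = conn.execute("SELECT project FROM allocation").fetchone()[0]
print(project)  # 'p1' -- the rollback restored the best state
```

Note that this only keeps one saved state (the last commit), which matches simulated annealing's need to fall back to the best solution found so far.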
A:
You don't want to copy sqlalchemy objects like that. You could implement your own methods which make the copies easily enough, but that is probably not what you want. You don't want copies of students and projects in your database do you? So don't copy that data.
So you have a dictionary which holds your allocations. During the process you should never modify the SQLAlchemy objects. All information that can be modified should be stored in those dictionaries. If you need to modify the objects to take that into account, copy the data back at the end.
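In other words, keep the mutable state in plain dictionaries of plain values; copying those is cheap and never touches the ORM instances. A minimal sketch (the allocation layout here is hypothetical):

```python
import copy

# Hypothetical allocation state: plain dicts and values only, no ORM instances.
allocation = {"alice": {"project": "p1", "rank": 2}}

best_node = copy.deepcopy(allocation)  # safe: nothing SQLAlchemy-mapped inside
allocation["alice"]["project"] = "p3"  # keep mutating the working state

print(best_node["alice"]["project"])  # still 'p1'
```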
| Trouble with copying dictionaries and using deepcopy on an SQLAlchemy ORM object | I'm doing a Simulated Annealing algorithm to optimise a given allocation of students and projects.
This is language-agnostic pseudocode from Wikipedia:
s ← s0; e ← E(s) // Initial state, energy.
sbest ← s; ebest ← e // Initial "best" solution
k ← 0 // Energy evaluation count.
while k < kmax and e > emax // While time left & not good enough:
snew ← neighbour(s) // Pick some neighbour.
enew ← E(snew) // Compute its energy.
if enew < ebest then // Is this a new best?
sbest ← snew; ebest ← enew // Save 'new neighbour' to 'best found'.
if P(e, enew, temp(k/kmax)) > random() then // Should we move to it?
s ← snew; e ← enew // Yes, change state.
k ← k + 1 // One more evaluation done
return sbest // Return the best solution found.
The following is an adaptation of the technique. My supervisor said the idea is fine in theory.
First I pick up some allocation (i.e. an entire dictionary of students and their allocated projects, including the ranks for the projects) from entire set of randomised allocations, copy it and pass it to my function. Let's call this allocation aOld (it is a dictionary). aOld has a weight related to it called wOld. The weighting is described below.
The function does the following:
Let this allocation, aOld be the best_node
From all the students, pick a random number of students and stick in a list
Strip (DEALLOCATE) them of their projects ++ reflect the changes for projects (allocated parameter is now False) and lecturers (free up slots if one or more of their projects are no longer allocated)
Randomise that list
Try assigning (REALLOCATE) everyone in that list projects again
Calculate the weight (add up ranks, rank 1 = 1, rank 2 = 2... and no project rank = 101)
For this new allocation aNew, if the weight wNew is smaller than the allocation weight wOld I picked up at the beginning, then this is the best_node (as defined by the Simulated Annealing algorithm above). Apply the algorithm to aNew and continue.
If wOld < wNew, then apply the algorithm to aOld again and continue.
The allocations/data-points are expressed as "nodes" such that a node = (weight, allocation_dict, projects_dict, lecturers_dict)
Right now, I can only perform this algorithm once, but I'll need to try for a number N (denoted by kmax in the Wikipedia snippet) and make sure I always have with me, the previous node and the best_node.
So that I don't modify my original dictionaries (which I might want to reset to), I've done a shallow copy of the dictionaries. From what I've read in the docs, it seems that it only copies the references and since my dictionaries contain objects, changing the copied dictionary ends up changing the objects anyway. So I tried to use copy.deepcopy(). These dictionaries refer to objects that have been mapped with SQLA.
Questions:
I've been given some solutions to the problems faced but due to my über green-ness with using Python, they all sound rather cryptic to me.
Deepcopy isn't playing nicely with SQLA. I've been told that deepcopy on ORM objects probably has issues that prevent it from working as you'd expect. Apparently I'd be better off "building copy constructors, i.e. def copy(self): return FooBar(....)." Can someone please explain what that means?
I checked and found out that deepcopy has issues because SQLAlchemy places extra information on your objects, i.e. an _sa_instance_state attribute, that I wouldn't want in the copy but is necessary for the object to have. I've been told: "There are ways to manually blow away the old _sa_instance_state and put a new one on the object, but the most straightforward is to make a new object with __init__() and set up the attributes that are significant, instead of doing a full deep copy." What exactly does that mean? Do I create a new, unmapped class similar to the old, mapped one?
An alternate solution is that I'd have to "implement __deepcopy__() on your objects and ensure that a new _sa_instance_state is set up, there are functions in sqlalchemy.orm.attributes which can help with that." Once again this is beyond me so could someone kindly explain what it means?
A more general question: given the above information are there any suggestions on how I can maintain the information/state for the best_node (which must always persist through my while loop) and the previous_node, if my actual objects (referenced by the dictionaries, therefore the nodes) are changing due to the deallocation/reallocation taking place? That is, without using copy?
| [
"I have another possible solution: use transactions. This probably still isn't the best solution but implementing it should be faster.\nFirstly create your session like this:\n# transactional session\nSession = sessionmaker(transactional=True)\nsess = Session()\n\nThat way it will be transactional. The way transactions work is that sess.commit() will make your changes permanent while sess.rollback() will revert them. \nIn the case of simulated annealing you want to commit when you find a new best solution. At any later point, you can invoke rollback() to revert the status back to that position.\n",
"You don't want to copy sqlalchemy objects like that. You could implement your own methods which make the copies easily enough, but that is probably not want you want. You don't want copies of students and projects in your database do you? So don't copy that data. \nSo you have a dictionary which holds your allocations. During the process you should never modify the SQLAlchemy objects. All information that can be modified should be stored in those dictionaries. If you need to modify the objects to take that into account, copy the data back at the end. \n"
] | [
2,
0
] | [] | [] | [
"deep_copy",
"python",
"simulated_annealing",
"sqlalchemy"
] | stackoverflow_0002970456_deep_copy_python_simulated_annealing_sqlalchemy.txt |
Q:
Dealing with wacky encodings in Python
I have a Python script that pulls in data from many sources (databases, files, etc.). Supposedly, all the strings are unicode, but what I end up getting is any variation on the following theme (as returned by repr()):
u'D\\xc3\\xa9cor'
u'D\xc3\xa9cor'
'D\\xc3\\xa9cor'
'D\xc3\xa9cor'
Is there a reliable way to take any of the four strings above and return the proper unicode string?
u'D\xe9cor' # --> Décor
The only way I can think of right now uses eval(), replace(), and a deep, burning shame that will never wash away.
A:
That's just UTF-8 data. Use .decode to convert it into unicode.
>>> 'D\xc3\xa9cor'.decode('utf-8')
u'D\xe9cor'
You can perform an additional string-escape decode for the 'D\\xc3\\xa9cor' case.
>>> 'D\xc3\xa9cor'.decode('string-escape').decode('utf-8')
u'D\xe9cor'
>>> 'D\\xc3\\xa9cor'.decode('string-escape').decode('utf-8')
u'D\xe9cor'
>>> u'D\\xc3\\xa9cor'.decode('string-escape').decode('utf-8')
u'D\xe9cor'
To handle the 2nd case as well, you need to detect if the input is unicode, and convert it into a str first.
>>> def conv(s):
... if isinstance(s, unicode):
... s = s.encode('iso-8859-1')
... return s.decode('string-escape').decode('utf-8')
...
>>> map(conv, [u'D\\xc3\\xa9cor', u'D\xc3\xa9cor', 'D\\xc3\\xa9cor', 'D\xc3\xa9cor'])
[u'D\xe9cor', u'D\xe9cor', u'D\xe9cor', u'D\xe9cor']
A:
Write adapters that know which transformations should be applied to their sources.
>>> 'D\xc3\xa9cor'.decode('utf-8')
u'D\xe9cor'
>>> 'D\\xc3\\xa9cor'.decode('string-escape').decode('utf-8')
u'D\xe9cor'
A:
Here's the solution I came to before I saw KennyTM's proper, more concise solution:
def ensure_unicode(string):
try:
string = string.decode('string-escape').decode('string-escape')
except UnicodeEncodeError:
string = string.encode('raw_unicode_escape')
return unicode(string, 'utf-8')
| Dealing with wacky encodings in Python | I have a Python script that pulls in data from many sources (databases, files, etc.). Supposedly, all the strings are unicode, but what I end up getting is any variation on the following theme (as returned by repr()):
u'D\\xc3\\xa9cor'
u'D\xc3\xa9cor'
'D\\xc3\\xa9cor'
'D\xc3\xa9cor'
Is there a reliable way to take any of the four strings above and return the proper unicode string?
u'D\xe9cor' # --> Décor
The only way I can think of right now uses eval(), replace(), and a deep, burning shame that will never wash away.
| [
"That's just UTF-8 data. Use .decode to convert it into unicode.\n>>> 'D\\xc3\\xa9cor'.decode('utf-8')\nu'D\\xe9cor'\n\nYou can perform an additional string-escape decode for the 'D\\\\xc3\\\\xa9cor' case.\n>>> 'D\\xc3\\xa9cor'.decode('string-escape').decode('utf-8')\nu'D\\xe9cor'\n>>> 'D\\\\xc3\\\\xa9cor'.decode('string-escape').decode('utf-8')\nu'D\\xe9cor'\n>>> u'D\\\\xc3\\\\xa9cor'.decode('string-escape').decode('utf-8')\nu'D\\xe9cor'\n\nTo handle the 2nd case as well, you need to detect if the input is unicode, and convert it into a str first.\n>>> def conv(s):\n... if isinstance(s, unicode):\n... s = s.encode('iso-8859-1')\n... return s.decode('string-escape').decode('utf-8')\n... \n>>> map(conv, [u'D\\\\xc3\\\\xa9cor', u'D\\xc3\\xa9cor', 'D\\\\xc3\\\\xa9cor', 'D\\xc3\\xa9cor'])\n[u'D\\xe9cor', u'D\\xe9cor', u'D\\xe9cor', u'D\\xe9cor']\n\n",
"Write adapters that know which transformations should be applied to their sources.\n>>> 'D\\xc3\\xa9cor'.decode('utf-8')\nu'D\\xe9cor'\n>>> 'D\\\\xc3\\\\xa9cor'.decode('string-escape').decode('utf-8')\nu'D\\xe9cor'\n\n",
"Here's the solution I came to before I saw KennyTM's proper, more concise solution:\ndef ensure_unicode(string):\n try:\n string = string.decode('string-escape').decode('string-escape')\n except UnicodeEncodeError:\n string = string.encode('raw_unicode_escape')\n\n return unicode(string, 'utf-8')\n\n"
] | [
4,
2,
1
] | [] | [] | [
"character_encoding",
"encoding",
"python",
"unicode"
] | stackoverflow_0002987305_character_encoding_encoding_python_unicode.txt |
Q:
Will this SQL cause any problems?
I'm sure everyone knows the joys of concurrency when it comes to threading.
Imagine the following scenario on every page-load on a noobily set up MySQL db:
UPDATE stats SET visits = (visits+1)
If a thousand users load the page at the same time, will the count cause any issues? Is this where the table-locking/row-locking mechanism comes in? Which one does MySQL use?
A:
No this won't screw up. This is perfectly acceptable in any ACID compliant DB. The I stands for Isolation. Each one of these queries will lock all rows in the visit table. The A (in ACID) stands for Atomicity and means the transaction must run in full or not at all.
A:
You have two potential problems:
Will you get the right answer?
Will you get unreasonable locking, will your whole app go very slow or even deadlock.
The right answer depends upon whether two users could compute (visit + 1) on the same value of visit. We can imagine that the database needs to do these actions:
Read visit count
Add one to visit count
Write visit count
So if two users are working at the same time could they both read the same old value? That's where the isolation level of the transaction comes into play. As observed by Artefacto the default isolation level is repeatable read, and hence we get:
Grab a lock
Read, increment, Write
Release lock
Rather than
Read (two users same old value)
Increment
First user Grab Lock, second waits
Write
Release, second user grabs lock
Write (same value!)
Release
However the level of contention could be quite high, and very much depends on the scope of your transaction. Suppose you have:
Begin transaction
Do the visit increment stuff
Do some serious business work
End transaction <==== visit lock is held until here
Then you will get a lot of folks waiting for that visit lock. We don't know the overall structure of your app, whether you are using large transaction scopes like this. Very likely you are getting a default behaviour of a single transaction per SQL statement, in which case your contention is just for the duration of the SQL statement, pretty much as you would be hoping.
Other folks might not be so fortunate: there are environments (eg. Java EE Servlets) where implicit transaction scopes can be created by the infrastructure, and then the longer-lived transactions I show above happen by default. Worse is the possibility that your code is not written consistently (with the visit increment always first, or always last), in which case you can get:
Begin transaction
Do the visit increment stuff
Do some serious business work
End transaction <==== visit lock and business locks held until here
and
Begin transaction
Do some other serious business work
Do the visit increment stuff
End transaction <==== visit lock and maybe the same business locks held until here
And bingo: Deadlock
For high volume sites you could consider writing a "Visit" event to a queue, and having a daemon listening for those events and maintaining the count. More complex, but possibly fewer contention issues.
A:
For MySQL, the manual says:
[Repeatable read] is the default isolation level for InnoDB. [...] For [...] UPDATE, and DELETE statements, locking depends on whether the statement uses a unique index with a unique search condition, or a range-type search condition. For a unique index with a unique search condition, InnoDB locks only the index record found, not the gap before it. For other search conditions, InnoDB locks the index range scanned, using gap locks or next-key (gap plus index-record) locks to block insertions by other sessions into the gaps covered by the range.
So I'd say yes, you're fine, though that particular query may well lock the entire table. It would probably be better:
UPDATE stats SET value = value + 1 WHERE key = 'visits'
with an index on "key".
A:
All the answers so far appear to assume an InnoDB table, which does support transactions; with MyISAM tables, you get "atomic transactions" instead, which should be fine for your specific use case (though they fall well short of full ACID for the general case).
In the MySQL docs on transactions (e.g. here) it gives your UPDATE form as a good practice typical case, specifically, and I quote...:
This gives us something that is
similar to column locking but is
actually even better because we only
update some of the columns, using
values that are relative to their
current values. This means that
typical UPDATE statements look
something like these:
UPDATE tablename SET pay_back=pay_back+125;
...This is very efficient and works even if another client has changed the values in the pay_back [[column]]
A:
Make sure you have SET autocommit so this is treated as a transaction, and the count will be fine. The only concern is performance (e.g. having a table hot spot)
A:
This will work if you:
Are running in a transaction
Row locking is set up correctly
Be especially careful with the second point. It's not a no-brainer because MySQL allows you to relax locking constraints to a point where this will indeed screw up.
On the other hand (when locking is set up correctly) if you hit some (very) heavy traffic this might become your bottleneck (as it can only execute in a single thread). If you keep the transaction open longer than just to update the number this becomes more likely, and it can even cause a deadlock if you're not careful, as djna explains in detail.
A:
As everyone has said this will lock the row if you're using InnoDB. Now if you're just using just one row and that row stores stats about all accesses then locking this row may slow things down as queries wait to acquire the lock. This slow down may be imperceptible under your loads. If it is significant you could get around it by writing to multiple different rows, say 0-255. This will still lock each row, but the chance of lock contention is now 1/256 of what it originally was. When you want the total you can just sum all rows.
UPDATE stats SET value=value+1 WHERE id=X
where X is a random id 0-255
then
SELECT SUM(value) FROM stats
will give you the real total.
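The sharded-counter idea can be illustrated with the standard-library sqlite3 module in place of MySQL (the schema mirrors the snippets above; locking behaviour differs between the engines, this only shows the bookkeeping):

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stats (id INTEGER PRIMARY KEY, value INTEGER)")
conn.executemany("INSERT INTO stats VALUES (?, 0)", [(i,) for i in range(256)])

# Each "visit" increments one random shard instead of a single hot row,
# so concurrent writers rarely contend for the same row lock.
for _ in range(1000):
    shard = random.randrange(256)
    conn.execute("UPDATE stats SET value = value + 1 WHERE id = ?", (shard,))

total = conn.execute("SELECT SUM(value) FROM stats").fetchone()[0]
print(total)  # 1000
```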
| Will this SQL cause any problems? | I'm sure everyone knows the joys of concurrency when it comes to threading.
Imagine the following scenario on every page-load on a noobily set up MySQL db:
UPDATE stats SET visits = (visits+1)
If a thousand users load the page at the same time, will the count cause any issues? Is this where the table-locking/row-locking mechanism comes in? Which one does MySQL use?
| [
"No this won't screw up. This is perfectly acceptable in any ACID compliant DB. The I stands for Isolation. Each one of these queries will lock all rows in the visit table. The A (in ACID) stands for Atomicity and means the transaction must run in full or not at all.\n",
"You have two potential problems:\n\nWill you get the right answer?\nWill you get unreasonable locking, will your whole app go very slow or even deadlock.\n\nThe right answer depends upon whether two users could compute (visit + 1) on the same value of visit. We can imagine that the database needs to do these actions: \n Read visit count\n Add one to visit count\n Write visit count\n\nSo if two users are working at the same time could they both read the same old value? That's where the isolation level of the transaction comes into play. As observed by Artefacto the default isolation level is repeatable read, and hence we get: \n Grab a lock\n Read, increment, Write\n Release lock\n\nRather than\n Read (two users same old value)\n Increment\n First user Grab Lock, second waits\n Write \n Release, second user grabs lock\n Write (same value!)\n Release\n\nHowever the level of contention could be quite high, and very much depends on the scope of your transaction. Suppose you have:\n Begin transaction\n\n Do the visit increment stuff\n\n Do some serious business work\n\n End transaction <==== visit lock is held until here\n\nThen you will get a lot of folks waiting for that visit lock. We don't know the overall structure of your app, whether you are using large transaction scopes like this. Very likely you are getting a default behaviour of a single transaction per SQL statement, and in which case you're contention is just for the duration of the SQL statement, pretty much as you would be hoping.\nOther folks might not be so fortunate: there are environments (eg. Java EE Servlets) where implicit transaction scopes can be created by the infrastructure and then the longer lived transactions I show above happen by default. 
Worse is the possibility that your code is not written consistently (with the visit increment always first, or always last) you can get:\n Begin transaction\n Do the visit increment stuff\n Do some serious business work\n End transaction <==== visit lock and business locks held until here\n\nand\n Begin transaction\n Do some other serious business work\n Do the visit increment stuff \n End transaction <==== visit lock and maybe the same business locks held until here\n\nAnd bingo: Deadlock\nFor high volume sites you could consider writing a \"Visit\" event to a queue, and having a daemon listening for those events and maintaining the count. More complex, but possibly fewer contention issues.\n",
"For MySQL, the manual says:\n\n[Repeatable read] is the default isolation level for InnoDB. [...] For [...] UPDATE, and DELETE statements, locking depends on whether the statement uses a unique index with a unique search condition, or a range-type search condition. For a unique index with a unique search condition, InnoDB locks only the index record found, not the gap before it. For other search conditions, InnoDB locks the index range scanned, using gap locks or next-key (gap plus index-record) locks to block insertions by other sessions into the gaps covered by the range.\n\nSo I'd say yes, you're fine, though that particular query may well lock the entire table. It would probably be better:\nUPDATE stats SET value = value + 1 WHERE key = 'visits'\n\nwith an index on \"key\".\n",
"All the answers so far appear to assume an InnoDB table, which does support transactions; with MyISAM tables, you get \"atomic transactions\" instead, which should be fine for your specific use case (though they fall well short of full ACID for the general case).\nIn the MySQL docs on transactions (e.g. here) it gives your UPDATE form as a good practice typical case, specifically, and I quote...:\n\nThis gives us something that is\n similar to column locking but is\n actually even better because we only\n update some of the columns, using\n values that are relative to their\n current values. This means that\n typical UPDATE statements look\n something like these:\n\nUPDATE tablename SET pay_back=pay_back+125;\n\n\n...This is very efficient and works even if another client has changed the values in the pay_back [[column]] \n\n",
"Make sure you have SET autocommit so this is treated as a transaction, and the count will be fine. The only concern is performance (e.g. having a table hot spot)\n",
"This will work if you:\n\nAre running in a transaction\nRow locking is set up correctly\n\nBe especially careful with the second point. It's not a no-brainer because MySQL allows you to relax locking constraints to a point where this will indeed screw up. \nOn the other hand (when locking is set up correctly) if you hit some (very) heavy traffic this might become your bottle neck (as it can only execute in a single thread). If you keep the transaction open longer than just to update the number this becomes more likely, and it can even cause a deadlock if you're not careful as djna explains in detail.\n",
"As everyone has said this will lock the row if you're using InnoDB. Now if you're just using just one row and that row stores stats about all accesses then locking this row may slow things down as queries wait to acquire the lock. This slow down may be imperceptible under your loads. If it is significant you could get around it by writing to multiple different rows, say 0-255. This will still lock each row, but the chance of lock contention is now 1/256 of what it originally was. When you want the total you can just sum all rows. \nUPDATE stats SET value=value+1 WHERE id=X \n\nwhere X is a random id 0-255\nthen\nSELECT SUM(value) FROM stats\n\nwill give you the real total.\n"
] | [
3,
3,
2,
2,
1,
1,
0
] | [
"It's fine.\nall that \"table locking/row locking\" is the crap databases were invented to take care for. \nThere can be other issues when \"thousand users load the page at same time\", like index updating. But that's another story and noobily MySQL setup isn't a case anyway.\n"
] | [
-1
] | [
"c++",
"java",
"mysql",
"php",
"python"
] | stackoverflow_0002983264_c++_java_mysql_php_python.txt |
Q:
Mimic C preprocessor with Python/Ruby?
I need to mimic the preprocessor feature of C with Python.
If I want to run the debug release, I use as follows with C
#ifdef DEBUG
printf(...)
#endif
I just use -DDEBUG or similar to trigger it on or off.
What method can I use for Python/Ruby?
I mean, what should I do to control the behavior of python/ruby scripts in such a way that I can change a variable that affects all the script files in a project?
A:
You usually use the python logging functionality in this use-case. That is configured in configuration files, and you can set the output levels. Very close in usage to java log4j, if you're familiar with that.
A:
You can almost use the actual C preprocessor. If you rename your file to end in .c, you can then do this: gcc -w -E input/file.py.c -o output/file.py.
The main issue seems to be with comments. The preprocessor will complain about python comment lines being invalid preprocessor directives. You can remedy this by using C++ comments (// comment).
Or, a better idea would be to just write your own simple preprocessor. If you only need #define functionality, you're just talking about a doing a search and replace on your file.
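If #define-style substitution really is all that's needed, the whole preprocessor fits in a few lines (the DEFINES table here is made up; a real tool would parse -D flags or #define lines, and would need care with names containing regex metacharacters):

```python
import re

# Hypothetical symbol table mapping #define names to replacement text.
DEFINES = {"DEBUG": "True", "VERSION": "'1.0'"}

def preprocess(source):
    # Whole-word substitution so DEBUG doesn't also match DEBUGGER.
    for name, value in DEFINES.items():
        source = re.sub(r"\b%s\b" % name, value, source)
    return source

print(preprocess("if DEBUG: print(VERSION)"))  # if True: print('1.0')
```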
Another solution would be something like this:
def nothing(*args):
pass
def print_debug(msg):
print msg
if not DEBUG:
print_debug = nothing
That way your print statements don't do anything if you're not in debug mode.
A:
Use pypreprocessor
The latest release can also be accessed through the PYPI
Here's the basic usage:
from pypreprocessor import pypreprocessor
pypreprocessor.parse()
#define debug
#ifdef debug
print('The source is in debug mode')
#else
print('The source is not in debug mode')
#endif
There you go. C style preprocessor conditional compilation implemented in python.
SideNote: The module is compatible with both python2x and python3k.
Disclaimer: I'm the author of pypreprocessor.
| Mimic C preprocessor with Python/Ruby? | I need to mimic the preprocessor feature of C with Python.
If I want to run the debug release, I use as follows with C
#ifdef DEBUG
printf(...)
#endif
I just use -DDEBUG or similar to trigger it on or off.
What method can I use for Python/Ruby?
I mean, what should I do to control the behavior of python/ruby scripts in such a way that I can change a variable that affects all the script files in a project?
| [
"You usually use the python logging functionality in this use-case. That is configured in configuration files, and you can set the output levels. Very close in usage to java log4j, if you're familiar with that.\n",
"You can almost use the actual C preprocessor. If you rename your file to end in .c, you can then do this: gcc -w -E input/file.py.c -o output/file.py. \nThe main issue seems to be with comments. The preprocessor will complains about python comment lines being invalid preprocessor directives. You can remedy this by using C++ comments (// comment). \nOr, a better idea would be to just write your own simple preprocessor. If you only need #define functionality, you're just talking about a doing a search and replace on your file. \nAnother solution would be something like this: \ndef nothing(*args):\n pass\n\ndef print_debug(msg):\n print msg\n\nif not DEBUG: \n print_debug = nothing\n\nThat way your print statements don't do anything if you're not in debug mode. \n",
"Use pypreprocessor\nThe latest release can also be accessed through the PYPI\nHere's the basic usage:\nfrom pypreprocessor import pypreprocessor\n\npypreprocessor.parse()\n\n#define debug\n\n#ifdef debug\nprint('The source is in debug mode')\n#else\nprint('The source is not in debug mode')\n#endif\n\nThere you go. C style preprocessor conditional compilation implemented in python.\nSideNote: The module is compatible with both python2x and python3k.\nDisclaimer: I'm the author of pypreprocessor.\n"
] | [
5,
2,
2
] | [] | [] | [
"preprocessor",
"python",
"ruby",
"user_controls"
] | stackoverflow_0002684712_preprocessor_python_ruby_user_controls.txt |
Q:
Any high-level languages that can use c libraries?
I know this question could be in vain, but it's just out of curiosity, and I'm still much a newb^^ Anyways I've been loving python for some time while learning it. My problem is obviously speed issues. I'd like to get into indie game creation, and for the short future, 2d and pygame will work.
But I'd eventually like to branch into the 3d area, and python is really too slow to make anything 3d and professional. So I'm wondering if there has ever been work to create a high-level language able to import and use c libraries? I've looked at Genie and it seems to be able to use certain libraries, but I'm not sure to what extent. Will I be able to use it for OpenGL programming, or in a C game engine?
I do know some lisp and enjoy it a lot, but there aren't a great many libraries out there for it. Which leads to the problem: I can't stand C syntax, but C has libraries galore that I could need! And game engines like irrlicht. Is there any language that can be used in place of C around C?
Thanks so much guys
A:
Python can call functions in dynamically loaded C libraries (.so in unix, .dll in Windows) using the ctypes module.
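For example, calling strlen from the C standard library looks roughly like this (library name resolution varies by platform, hence the fallback to a common soname):

```python
import ctypes
import ctypes.util

# find_library may return None on minimal systems; fall back to a common name.
libname = ctypes.util.find_library("c") or "libc.so.6"
libc = ctypes.CDLL(libname)

# Declare the signature so ctypes converts arguments and the result correctly.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"hello"))  # 5
```

The same pattern works for any shared library, including game-engine C APIs, as long as you declare argtypes/restype for each function you call.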
There is also cython - a variation of python that compiles to C and can call C libraries directly. You can mix modules written in pure Python and cython.
You may also want to look at the numerous 3D game engines either written specifically for Python or with a python interface. The ones I have heard the most about (but not used) are Blender and Python-Ogre.
A:
Panda3D is an engine which uses Python as its "game logic" interface. You basically write everything in Python and the Panda3D backend (which I assume is mostly written in C or C++) is responsible for rendering.
Check out the gallery of projects that use Panda3D. It's not going to be AAA the-next-Gears-of-War level graphics, but it's still pretty impressive.
A:
Using swig you can make C imports in various languages: lua, python, php, c# ...
See more info here about supported wrappers.
A:
Python is able to use C libraries via the ctypes module. You'll have to write some Python code to import the C functions, but if the C API is clean and simple you'll have no trouble at all.
A:
You might find these useful:
C functions from Python
Integrating Python, C and C++
A:
I have been using PyOpenGL, it works great. Swig does its job if you want to call C/C++ libraries from Python.
A:
I'm surprised that no-one has yet stated clearly that C++ is what you are looking for. Like you I have a distaste for C syntax, but that's a poor reason for avoiding C++ if you want to get into 3D gaming. Do you want to dive into 3D gaming, or do you want to sit on the edge of the pool crying that the water is too cold ?
I think you'll also find that C++ plays very well with OpenGL, which is probably not true of a lot of the alternatives that have already been suggested
A:
To some extent, Cython might be what you are looking for. It allows you to use Python as a high level language, but then use C for the parts that need to be optimized.
But, at the end of the day, if you want to do 3D, just learning C or C++ may be the way to go. :-)
A:
There are Python wrappers available for major open source game engines (Ogre, Irrlicht, etc.). Particularly Panda3D ought to have nice bindings.
A:
If you'd like to have a look at .Net platform. You have the following solution:
Use C++/CLI to compile your C/C++ code into a .Net assembly; the running time of this part would be the same as your native C/C++ code.
Use any .Net language (C#, F#, IronPython) to develop high-level stuff using the low level library. For pure number crunching, C#/F# is usually 2-4 times slower than native C code, which is still far faster than Python. For non-number crunching tasks, C#/F# could sometimes match the speed of native code.
| Any high-level languages that can use c libraries? | I know this question could be in vain, but it's just out of curiosity, and I'm still much a newb^^ Anyways I've been loving python for some time while learning it. My problem is obviously speed issues. I'd like to get into indie game creation, and for the short future, 2d and pygame will work.
But I'd eventually like to branch into the 3d area, and python is really too slow to make anything 3d and professional. So I'm wondering if there has ever been work to create a high-level language able to import and use c libraries? I've looked at Genie and it seems to be able to use certain libraries, but I'm not sure to what extent. Will I be able to use it for openGL programing, or in a c game engine?
I do know some lisp and enjoy it a lot, but there aren't a great many libraries out there for it. Which leads to the problem: I can't stand C syntax, but C has libraries galore that I could need! And game engines like irrlicht. Is there any language that can be used in place of C around C?
Thanks so much guys
| [
"Python can call functions in dynamically loaded C libraries (.so in unix, .dll in Windows) using the ctypes module.\nThere is also cython - a variation of python that compiles to C and can call C libraries directly. You can mix modules written in pure Python and cython.\nYou may also want to look at the numerous 3D game engines either written specifically for Python or with a python interface. The ones I have heard the most about (but not used) are Blender and Python-Ogre.\n",
"Panda3D is an engine which uses Python as it's \"game logic\" interface. You basically write everything in Python and the Panda3D backend (which I assume is mostly written in C or C++) is responsible for rendering.\nCheck out the gallery of projects that use Panda3D. It's not going to be AAA the-next-Gears-of-War level graphics, but it's still pretty impressive.\n",
"Using swig you can make C imports in various languages: lua, python, php, c# ...\nSee more info here about supported wrappers.\n",
"Python is able to use C libraries via the ctypes module. You'll have to write some Python code to import the C functions, but if the C API is clean and simple you'll have no trouble at all.\n",
"You might find these useful:\n\nC functions from Python\nIntegrating Python, C and C++\n\n",
"I have been using PyOpenGL, it works great. Swig does its job if you want to call C/C++ libraries from Python.\n",
"I'm surprised that no-one has yet stated clearly that C++ is what you are looking for. Like you I have a distaste for C syntax, but that's a poor reason for avoiding C++ if you want to get into 3D gaming. Do you want to dive into 3D gaming, or do you want to sit on the edge of the pool crying that the water is too cold ?\nI think you'll also find that C++ plays very well with OpenGL, which is probably not true of a lot of the alternatives that have already been suggested\n",
"To some extent, Cython might be what you are looking for. It allows you to use Python as a high level language, but then use C for the parts that need to be optimized. \nBut, at the end of the day, if you want to do 3D, just learning C or C++ may be the way to go. :-)\n",
"There are Python wrappers available for major open source game engines (Ogre, Irrlicht, etc.). Particularly Panda3D ought to have nice bindings.\n",
"If you'd like to have a look at .Net platform. You have the following solution:\n\nUse C++/CLI to compile your C/C++ code into .Net assembly, the running time of this part would be as the same as your native C/C++ code.\nUse any .Net language (C#, F#, IronPython) to develop high-level stuff using the low level library. For pure number crunching, C#/F# is usually 2-4 times slower than native C code, which is still far faster than Python. For non-number crunching tasks, C#/F# could sometimes match the speed of native code. \n\n"
] | [
9,
3,
3,
1,
1,
1,
1,
0,
0,
0
] | [] | [] | [
"c",
"c++",
"python"
] | stackoverflow_0002987524_c_c++_python.txt |
Q:
python return class
I'm new to Python and read the example code below from someone else:
class A:
    def current(self):
        data = Data(a=a, b=b, c=c)
        return data

class B(A):
    # something here
    # print data a b c
How do I print out the data a, b, and c?
A:
It's not really clear what exactly you want, but here is a try:
class A:
    def current(self):
        data = Data(a=a, b=b, c=c)
        return data

class B(A):
    def show(self):  # 'print' is a reserved word in Python 2, so use another name
        data = self.current()
        print "Data A:%s B:%s C:%s" % (data.a, data.b, data.c)
| python return class | I new to python and I read from someone else of the example code below:
class A:
def current(self):
data = Data(a=a,b=b,c=c)
return data
class B(A):
#something here
#print data a b c
How do I print out the data a, b, and c?
| [
"It's not really clear what you exactly want, but here is a try:\nclass A:\n def current(self):\n data = Data(a=a,b=b,c=c)\n return data\n\nclass B(A):\n def print(self):\n data = self.current()\n print \"Data A:%s B:%s C:%s\" % (data.a, data.b, data.c) \n\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0002988059_python.txt |
Q:
which file stores os.environ, and where is it stored, disk C: or disk D:?
my code is :
os.environ['ss']='ssss'
print os.environ
and it shows:
{'TMP': 'C:\\DOCUME~1\\ADMINI~1\\LOCALS~1\\Temp', 'COMPUTERNAME': 'PC-200908062210', 'USERDOMAIN': 'PC-200908062210', 'COMMONPROGRAMFILES': 'C:\\Program Files\\Common Files', 'PROCESSOR_IDENTIFIER': 'x86 Family 6 Model 15 Stepping 2, GenuineIntel', 'PROGRAMFILES': 'C:\\Program Files', 'PROCESSOR_REVISION': '0f02', 'SYSTEMROOT': 'C:\\WINDOWS', 'PATH': 'C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\Program Files\\Hewlett-Packard\\IAM\\bin;C:\\Program Files\\Common Files\\Thunder Network\\KanKan\\Codecs;D:\\Program Files\\TortoiseSVN\\bin;d:\\Program Files\\Mercurial\\;D:\\Program Files\\Graphviz2.26.3\\bin;D:\\TDDOWNLOAD\\ok\\gettext\\bin;D:\\Python25;C:\\Program Files\\StormII\\Codec;C:\\Program Files\\StormII;D:\\zjm_code\\;D:\\Python25\\Scripts;D:\\MinGW\\bin;d:\\Program Files\\Google\\google_appengine\\', 'TEMP': 'C:\\DOCUME~1\\ADMINI~1\\LOCALS~1\\Temp', 'BID': '56727834-D5C3-4EBF-BFAA-FA0933E4E721', 'PROCESSOR_ARCHITECTURE': 'x86', 'ALLUSERSPROFILE': 'C:\\Documents and Settings\\All Users', 'SESSIONNAME': 'Console', 'HOMEPATH': '\\Documents and Settings\\Administrator', 'USERNAME': 'Administrator', 'LOGONSERVER': '\\\\PC-200908062210', 'COMSPEC': 'C:\\WINDOWS\\system32\\cmd.exe', 'PATHEXT': '.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH', 'CLIENTNAME': 'Console', 'FP_NO_HOST_CHECK': 'NO', 'WINDIR': 'C:\\WINDOWS', 'APPDATA': 'C:\\Documents and Settings\\Administrator\\Application Data', 'HOMEDRIVE': 'C:', 'SS': 'ssss', 'SYSTEMDRIVE': 'C:', 'NUMBER_OF_PROCESSORS': '2', 'PROCESSOR_LEVEL': '6', 'OS': 'Windows_NT', 'USERPROFILE': 'C:\\Documents and Settings\\Administrator'}
I find that Google App Engine sets user_id in os.environ, not in a session (look here at lines 96-100 and line 257, and aeoid at line 177),
and I want to know: which file stores os.environ, and where is it stored, disk C: or disk D:?
Thanks
A:
I am not sure I understand your question correctly. Do you ask where the file os.environ is located on disk? If yes, the answer is:
There is no such file.
os.environ is a collection of environment variables and information about the host system, provided by the Python interpreter.
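A small sketch that makes this concrete: a change to os.environ lives only in the process's memory, is inherited by child processes it spawns, and vanishes when they exit — nothing is written to disk.

```python
# Sketch: os.environ lives in this process's memory, not in any file.
import os
import subprocess
import sys

os.environ["SS"] = "ssss"
# Spawn a child Python process; it inherits the parent's environment.
child = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.environ['SS'])"]
)
print(child.decode().strip())  # ssss
# When both processes exit, the variable is gone; no file on C: or D: changed.
```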
| which file stored os.environ,and store where , disk c: or disk d: | my code is :
os.environ['ss']='ssss'
print os.environ
and it show :
{'TMP': 'C:\\DOCUME~1\\ADMINI~1\\LOCALS~1\\Temp', 'COMPUTERNAME': 'PC-200908062210', 'USERDOMAIN': 'PC-200908062210', 'COMMONPROGRAMFILES': 'C:\\Program Files\\Common Files', 'PROCESSOR_IDENTIFIER': 'x86 Family 6 Model 15 Stepping 2, GenuineIntel', 'PROGRAMFILES': 'C:\\Program Files', 'PROCESSOR_REVISION': '0f02', 'SYSTEMROOT': 'C:\\WINDOWS', 'PATH': 'C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\Program Files\\Hewlett-Packard\\IAM\\bin;C:\\Program Files\\Common Files\\Thunder Network\\KanKan\\Codecs;D:\\Program Files\\TortoiseSVN\\bin;d:\\Program Files\\Mercurial\\;D:\\Program Files\\Graphviz2.26.3\\bin;D:\\TDDOWNLOAD\\ok\\gettext\\bin;D:\\Python25;C:\\Program Files\\StormII\\Codec;C:\\Program Files\\StormII;D:\\zjm_code\\;D:\\Python25\\Scripts;D:\\MinGW\\bin;d:\\Program Files\\Google\\google_appengine\\', 'TEMP': 'C:\\DOCUME~1\\ADMINI~1\\LOCALS~1\\Temp', 'BID': '56727834-D5C3-4EBF-BFAA-FA0933E4E721', 'PROCESSOR_ARCHITECTURE': 'x86', 'ALLUSERSPROFILE': 'C:\\Documents and Settings\\All Users', 'SESSIONNAME': 'Console', 'HOMEPATH': '\\Documents and Settings\\Administrator', 'USERNAME': 'Administrator', 'LOGONSERVER': '\\\\PC-200908062210', 'COMSPEC': 'C:\\WINDOWS\\system32\\cmd.exe', 'PATHEXT': '.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH', 'CLIENTNAME': 'Console', 'FP_NO_HOST_CHECK': 'NO', 'WINDIR': 'C:\\WINDOWS', 'APPDATA': 'C:\\Documents and Settings\\Administrator\\Application Data', 'HOMEDRIVE': 'C:', 'SS': 'ssss', 'SYSTEMDRIVE': 'C:', 'NUMBER_OF_PROCESSORS': '2', 'PROCESSOR_LEVEL': '6', 'OS': 'Windows_NT', 'USERPROFILE': 'C:\\Documents and Settings\\Administrator'}
i find google-app-engine set user_id in os.version not in session,look here at line 96-100 and line 257 , and aeoid at line 177 ,
and i want to know : which file stored os.environ ,and store where , disk c: ,or disk d: ?
thanks
| [
"I am not sure I understand your question correctly. Do you ask where the file os.environ is located on disk? If yes, the answer is:\nThere is no such file.\nos.environ is a collection of environment variables and informations about the host system, provided by the python interpreter.\n"
] | [
4
] | [] | [] | [
"environment_variables",
"google_app_engine",
"python"
] | stackoverflow_0002988019_environment_variables_google_app_engine_python.txt |
Q:
How to prevent BeautifulSoup from stripping lines
I'm trying to translate an online html page into text.
I have a problem with this structure:
<div align="justify"><b>Available in
<a href="http://www.example.com.be/book.php?number=1">
French</a> and
<a href="http://www.example.com.be/book.php?number=5">
English</a>.
</div>
Here is its representation as a python string:
'<div align="justify"><b>Available in \r\n<a href="http://www.example.com.be/book.php?number=1">\r\nFrench</a>; \r\n<a href="http://www.example.com.be/book.php?number=5">\r\nEnglish</a>.\r\n</div>'
When using:
html_content = get_html_div_from_above()
para = BeautifulSoup(html_content)
txt = para.text
BeautifulSoup translates it (in the 'txt' variable) as:
u'Available inFrenchandEnglish.'
It probably strips each line in the original html string.
Do you have a clean solution to this problem?
Thanks.
A:
I finally got a good solution:
import re

def clean_line(line):
    return re.sub(r'[ ]{2,}', ' ', re.sub(r'[\r\n]', '', line))
html_content = get_html_div_from_above()
para = BeautifulSoup(html_content)
''.join([clean_line(line) for line in para.findAll(text=True)])
Which outputs:
u'Available in French and English. '
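The cleanup step can be demonstrated on its own, without BeautifulSoup, assuming text nodes like the ones `findAll(text=True)` would return for the HTML in the question:

```python
import re

def clean_line(line):
    # drop CR/LF, then collapse any runs of spaces the markup left behind
    return re.sub(r'[ ]{2,}', ' ', re.sub(r'[\r\n]', '', line))

# text nodes similar to what findAll(text=True) yields for the example HTML
parts = ['Available in \r\n', '\r\nFrench', ' and \r\n', '\r\nEnglish', '.\r\n']
print(''.join(clean_line(p) for p in parts))  # Available in French and English.
```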
A:
I got a solution:
html_content = get_html_div_from_above()
para = BeautifulSoup(html_content)
txt = para.getText(separator=' ')
But it's not optimal because it puts spaces between each tag:
u'Available in French and English . '
Notice the space before the dot.
| How to prevent BeautifulSoup from stripping lines | I'm trying to translate an online html page into text.
I have a problem with this structure:
<div align="justify"><b>Available in
<a href="http://www.example.com.be/book.php?number=1">
French</a> and
<a href="http://www.example.com.be/book.php?number=5">
English</a>.
</div>
Here is its representation as a python string:
'<div align="justify"><b>Available in \r\n<a href="http://www.example.com.be/book.php?number=1">\r\nFrench</a>; \r\n<a href="http://www.example.com.be/book.php?number=5">\r\nEnglish</a>.\r\n</div>'
When using:
html_content = get_html_div_from_above()
para = BeautifulSoup(html_content)
txt = para.text
BeautifulSoup translate it (in the 'txt' variable) as:
u'Available inFrenchandEnglish.'
It probably strips each line in the original html string.
Do you have a clean solution about this problem ?
Thanks.
| [
"I finally got a good solution:\ndef clean_line(line):\n return re.sub(r'[ ]{2,}', ' ', re.sub(r'[\\r\\n]', '', line))\n\nhtml_content = get_html_div_from_above()\npara = BeautifulSoup(html_content)\n''.join([clean_line(line) for line in para.findAll(text=True)])\n\nWhich outputs:\nu'Available in French and English. '\n\n",
"I got a solution:\nhtml_content = get_html_div_from_above()\npara = BeautifulSoup(html_content)\ntxt = para.getText(separator=' ')\n\nBut it's not optimal because it puts spaces between each tag:\nu'Available in French and English . '\n\nNotice the space before the dot.\n"
] | [
2,
1
] | [] | [] | [
"beautifulsoup",
"python"
] | stackoverflow_0002988229_beautifulsoup_python.txt |
Q:
Where can I find good ajax support in Java/Python?
I want a framework (or anything) that helps me make rich client guis. I know my server-side, but I don't like programming in ajax, javascript, css etc.
Something that wraps the ajax code in some objects/methods with clean syntax, would do the trick. I want to write code in java instead of defining css and html tags.
Does Java Spring, JSF, Django support this ?
Languages: Java, Python
Thank you
A:
Look into Google Web Toolkit (aka GWT). It's a Java framework that "is a development toolkit for building and optimizing complex browser-based applications. GWT is used by many products at Google, including Google Wave and Google AdWords."
I think GWT aims to do exactly what you're looking for, though I have no experience with it personally.
A:
The Python equivalent of GWT is pyjamas. There are also libraries specifically for Django (dajax for example), though I have no direct experience of those.
Personally, having first tried GWT [[and pyjamas a long-ish time ago]], I then gave a try to Javascript with a good framework -- jQuery, dojo, and Closure all are quite good -- and I now prefer that route... JS plus a good framework is a truly different programming experience than "bare JS" would be with all of the various browser-specific quirks and incompatibilites.
A:
I don't believe you will find a good solution without HTML and CSS. It's actually very good for its original purpose (static HTML pages).
When it comes to generating dynamic contents, my preferred choice in the Java world is to use Apache Wicket. This framework separates design and logic in different files. One static html file for the design with a corresponding Java file with the dynamic data.
Then Wicket generates a new html with dynamic contents from the models defined in the Java file.
An example that adds AJAX support when you click on a link:
Part of the HTML page:
<a href="#" wicket:id="link">click me</a>
The corresponding Java component:
add(new AjaxFallbackLink("link") {
public void onClick(AjaxRequestTarget target) {
if (target != null) {
// target is only available in an ajax request
target.addComponent(label);
}
}
});
The id "link" is the connection between the html and Java component.
To get a better understanding of how it works, you should try the online examples at http://www.wicketstuff.org/wicket14/ajax/. Here you can see many AJAX components in action, the HTML and the Java code for each component.
A:
IceFaces is a JSF framework with Ajax. You could take a look and see if it fits your needs; you should also find a demo there.
| Where can I find good ajax support in Java/Python? | I want a framework (or anything) that helps me make rich client guis. I know my server-side, but I don't like programming in ajax, javascript, css etc.
Something that wraps the ajax code in some objects/methods with clean syntax, would do the trick. I want to write code in java instead of defining css and html tags.
Does Java Spring, JSF, Django support this ?
Languages: Java, Python
Thank you
| [
"Look into Google Web Toolkit (aka GWT). It's a Java framework that \"is a development toolkit for building and optimizing complex browser-based applications. GWT is used by many products at Google, including Google Wave and Google AdWords.\"\nI think GWT aims to do exactly what you're looking for, though I have no experience with it personally.\n",
"The Python equivalent of GWT is pyjamas. There are also libraries specifically for Django (dajax for example), though I have no direct experience of those.\nPersonally, having first tried GWT [[and pyjamas a long-ish time ago]], I then gave a try to Javascript with a good framework -- jQuery, dojo, and Closure all are quite good -- and I now prefer that route... JS plus a good framework is a truly different programming experience than \"bare JS\" would be with all of the various browser-specific quirks and incompatibilites.\n",
"I don't believe you will find a good solution without HTML and CSS. It's actually very good for it original purpose. (static HTML pages)\nWhen it comes to generating dynamic contents, my preferred choice in the Java world is to use Apache Wicket. This framework separates design and logic in different files. One static html file for the design with a corresponding Java file with the dynamic data.\nThen Wicket generates a new html with dynamic contents from the models defined in the Java file.\nAn example that adds AJAX support when you click on a link:\nPart of the HTML page:\n<a href=\"#\" wicket:id=\"link\">click me</a>\n\nThe corresponding Java component:\nadd(new AjaxFallbackLink(\"link\") {\n public void onClick(AjaxRequestTarget target) {\n if (target != null) {\n // target is only available in an ajax request\n target.addComponent(label);\n }\n }\n});\n\nThe id \"link\" is the connection between the html and Java component.\nTo get a better understanding of how it works, you should try the online examples at http://www.wicketstuff.org/wicket14/ajax/. Here you can see many AJAX components in action, the HTML and the Java code for each component.\n",
"IceFaces is a JSF Framework with AJax. You would take a look and see if it fits your needs. Also You should find the demo there.\n"
] | [
4,
3,
0,
0
] | [] | [] | [
"ajax",
"css",
"java",
"python",
"spring"
] | stackoverflow_0002969054_ajax_css_java_python_spring.txt |
Q:
OverflowError: math range error
>>> import math
>>> math.pow(2, 3000)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: math range error
How can I fix it?
A:
Use the built-in exponentiation operator, which works on Python's arbitrary-precision integers:
2**3000
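The reason for the difference: `math.pow` converts its arguments to C doubles, which top out around 1.8e308, while the `**` operator uses Python's arbitrary-precision integers and never overflows.

```python
import math

big = 2 ** 3000          # fine: Python ints grow as needed
print(len(str(big)))     # 904 decimal digits

try:
    math.pow(2, 3000)    # the C double result overflows near 1.8e308
except OverflowError as e:
    print(e)             # math range error
```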
| OverflowError: math range error | >>> import math
>>> math.pow(2, 3000)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: math range error
How can I fix it?
| [
"Use the built-in operator.\n2**3000\n\n"
] | [
18
] | [] | [] | [
"math",
"python"
] | stackoverflow_0002988634_math_python.txt |
Q:
what should I do after an OpenID (or Twitter, Facebook) user logs in to my site, on GAE
How do I integrate local users and OpenID (or Facebook/Twitter) users?
Do you know of any framework that has already done this?
updated
What I mean is: how do I deal with 'local users' and 'OpenID users',
and how do I mix them in one model?
Please point me to a framework that implements both 'local users' and 'OpenID users'.
A:
I understand your question.
You wish to be able to maintain a list of users that have signed up with your service, and also want to record users using OpenID to authenticate.
In order to solve this I would do either of the following:
Create a new user in your users table for each new user logged in under OpenID, and store their OpenID in this table to allow you to join the two.
Move your site to OpenID and change all references to your current users to OpenID users.
I'd probably go with Option 1 if you already have this app in production.
Note: More experienced Open ID users will probably correct me!
| what should i do after openid (or twitter ,facebook) user login my site ,on gae | how to integration local user and openid(or facebook twitter) user ,
did you know some framework have already done this ,
updated
my mean is : how to deal with 'local user' and 'openid user',
and how to mix them in one model .
please give me a framework that realize 'local user' and 'openid user'
| [
"I understand your question.\nYou wish to be able to maintain a list of users that have signed up with your service, and also want to record users using OpenID to authenticate.\nIn order to solve this I would do either of the following:\n\nCreate a new user in your users table for each new user logged in under OpenID, and store their OpenID in this table to allow you to join the two.\nMove your site to OpenID and change all references to your current users to OpenID users.\n\nI'd probably go with Option 1 if you already have this app in production.\nNote: More experienced Open ID users will probably correct me!\n"
] | [
1
] | [] | [] | [
"google_app_engine",
"integration",
"openid",
"python"
] | stackoverflow_0002986766_google_app_engine_integration_openid_python.txt |
Q:
How to show why "try" failed in python
is there anyway to show why a "try" failed, and skipped to "except", without writing out all the possible errors by hand, and without ending the program?
example:
try:
    1/0
except:

some way to show

"Traceback (most recent call last):
  File "<pyshell#0>", line 1, in <module>
    1/0
ZeroDivisionError: integer division or modulo by zero"
I don't want to do if: print error 1, elif: print error 2, elif: etc. I want to see the error that would be shown had the try not been there.
A:
Try:
>>> try:
...     1/0
... except Exception, e:
...     print e
... 
integer division or modulo by zero
There are other syntactical variants, e.g.:
>>> try:
...     1/0
... except Exception as e:
...     print e
... 
integer division or modulo by zero
More information can be found in the errors tutorial.
A:
I often use traceback to log such exception to log or show on stderr:
import traceback
import sys
try:
    print 1/0
except Exception:
    s = traceback.format_exc()
    serr = "there were errors:\n%s\n" % (s)
    sys.stderr.write(serr)
Output will show info about line is source where exception occured:
there were errors:
Traceback (most recent call last):
  File "c:\test\ex.py", line 5, in <module>
    print 1/0
ZeroDivisionError: integer division or modulo by zero
| How to show why "try" failed in python | is there anyway to show why a "try" failed, and skipped to "except", without writing out all the possible errors by hand, and without ending the program?
example:
try:
1/0
except:
someway to show
"Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
1/0
ZeroDivisionError: integer division or modulo by zero"
i dont want to doif:print error 1, elif: print error 2, elif: etc.... i want to see the error that would be shown had try not been there
| [
"Try:\n>>> try:\n... 1/0\n... except Exception, e:\n... print e\n... \ninteger division or modulo by zero\n\nThere are other syntactical variants, e.g.:\n>>> try:\n... 1/0\n... except Exception as e:\n... print e\n... \ninteger division or modulo by zero\n\nMore information can be found in the errors tutorial.\n",
"I often use traceback to log such exception to log or show on stderr:\nimport traceback\nimport sys\n\ntry:\n print 1/0\nexcept Exception:\n s = traceback.format_exc()\n serr = \"there were errors:\\n%s\\n\" % (s)\n sys.stderr.write(serr) \n\nOutput will show info about line is source where exception occured:\nthere were errors:\nTraceback (most recent call last):\n File \"c:\\test\\ex.py\", line 5, in <module>\n print 1/0\nZeroDivisionError: integer division or modulo by zero\n\n"
] | [
10,
8
] | [] | [] | [
"error_handling",
"python"
] | stackoverflow_0002988751_error_handling_python.txt |
Q:
manyToManyField question
Hey guys, I'm writing a simple app which logs recipes.
I'm working out my models and have stumbled across a problem.
My Dish model needs to have many Ingredients. This is no problem, because I would do something like this:
ingredients = models.ManyToManyField(Ingredient)
No problem, my dish can now have many ingredients.
However, the problem is that the ingredient needs to come in different quantities.
I.E 4 eggs, 7 tablespoons sugar
My Ingredient Model is very simple at the moment
class Ingredient(models.Model):
name = models.TextField(blank=False)
slug = models.SlugField(blank=True)
How would I go about solving this problem? What fields would I need to add? Would I need to use a 'through' attribute on my ManyToManyField to solve this?
A:
I think you got the right answer with a "through" table ( http://docs.djangoproject.com/en/dev/topics/db/models/#intermediary-manytomany )
Model
class Recipe(models.Model):
    name = models.TextField(blank=False)
    # string reference, since Ingredient is defined below this class
    ingredients = models.ManyToManyField('Ingredient', through='Components')

class Ingredient(models.Model):
    name = models.TextField(blank=False)
    slug = models.SlugField(blank=True)
class Components(models.Model):
    recipe = models.ForeignKey(Recipe)
    ingredient = models.ForeignKey(Ingredient)
    # DecimalField requires max_digits and decimal_places
    quantity = models.DecimalField(max_digits=10, decimal_places=2)
You can put the unit of quantity (gram, kilo, tablespoon, etc.) at the Ingredient level, but I think it is better at the Components level (for example, you can have one recipe with 10 cL of milk but another with 1 L, so "different" units for the same ingredient).
Data Creation
By Dish you mean Recipe, right? If you have a look at the previous link (http://docs.djangoproject.com/en/dev/topics/db/models/#intermediary-manytomany), they give you a good example (based on the Beatles).
Basically:
1. Create a Recipe:
cake=Recipe.objects.create(name="Simple Cake")
2. Create several Ingredients (if they don't already exist from a previous recipe ;)):
egg = Ingredient.objects.create(name="Egg")
milk = Ingredient.objects.create(name="milk")
3. Create the relationship:
cake_ing1 = Components.objects.create(recipe=cake, ingredient=egg,quantity = 2)
cake_ing2 = Components.objects.create(recipe=cake, ingredient=milk,quantity = 200)
and so on. Plus, I'm now quite sure that the unit should go at the Components level, with a default unit of "piece" (that would be for your eggs...), and something like "mL" for milk.
Data Access
In order to get the ingredients (Components) of a recipe, just do:
cake = Recipe.objects.get(name="Simple Cake")
# use filter(), not get(): a recipe usually has several components
components_cake = Components.objects.filter(recipe=cake)
| manyToManyField question | Hay guys, I'm writing a simple app which logs recipes.
I'm working out my models and have stumbled across a problem
My Dish models needs to have many Ingredients. This is no problem because i would do something like this
ingredients = models.ManyToManyfield(Ingredient)
No problems, my dish now can have many ingrendients.
However, the problem is that the ingredient needs to come in different quantities.
I.E 4 eggs, 7 tablespoons sugar
My Ingredient Model is very simple at the moment
class Ingredient(models.Model):
name = models.TextField(blank=False)
slug = models.SlugField(blank=True)
How would i go about work out this problem? What fields would i need to add, would i need to use a 'through' attribute on my ManyToManyfield to solve this problem?
| [
"I think you got the right answer with a \"through\" table ( http://docs.djangoproject.com/en/dev/topics/db/models/#intermediary-manytomany )\nModel\nclass Recipe(models.Model):\n name = models.TextField(blank=False)\n ingredients = models.ManyToManyField(Ingredient, through='Components')\n\nclass Ingredient(models.Model):\n name = models.TextField(blank=False)\n slug = models.SlugField(blank=True)\n\nclass Components(models.Model):\n recipe = models.ForeignKey(Recipe)\n ingredient = models.ForeignKey(Ingredient)\n quantity = models.DecimalField()\n\nYou can put unit of quantity (gram, kilo, tablespoon, etc) on Ingredient level, but I think it is better on Ingredients level (for example you can have 1 recipe with 10 Cl of milk but one other with 1L ... So \"different\" units for a same ingredient.\nData Creation\nBy Dish you mean Recipe right ? If you have a look to previous link (http://docs.djangoproject.com/en/dev/topics/db/models/#intermediary-manytomany), they give you a good example (based on the beatles). \nBasically :\n1.Create a Recipe:\ncake=Recipe.objects.create(name=\"Simple Cake\")\n\n2.Create several Ingredient (if they doesn't already exist from a previous recipe ;)):\negg = Ingredient.objects.create(name=\"Egg\")\nmilk = Ingredient.objects.create(name=\"milk\")\n\n3.Create the relationship:\ncake_ing1 = Components.objects.create(recipe=cake, ingredient=egg,quantity = 2) \ncake_ing2 = Components.objects.create(recipe=cake, ingredient=milk,quantity = 200)\n\nand so on. Plus, I'm now quite sure that unit should go to Components level, with a default unit as \"piece\" (that would be for yours eggs ...), and would be something like \"mL\" for milk.\nData Access\nIn order to get ingredients (Components) of a recipe just do :\ncake = Recipe.objects.get(name = \"Simple Cake\")\ncomponents_cake = Components.objects.get(recipe = cake)\n\n"
] | [
2
] | [] | [] | [
"django",
"manytomanyfield",
"python"
] | stackoverflow_0002988471_django_manytomanyfield_python.txt |
Q:
Open/Close database connection in django
I am using Django and Postgresql as my DBMS.
I wish to set a setting that enables to enable/disable database connection. When the connection is set to closed (in settings.py) the site will display a message such as "meintanence mode" or something like that. Django will not show any db connection error message (or mail them to admins). It is appreciated if django do not try to connect to the database at all.
A:
Maybe creating a middleware solves your problem. Put your new middleware "maintenancemiddleware" as the FIRST item of your settings.middleware tuple.
# code not tested, only for demonstration.
# maintenancemiddleware.py
from django.conf import settings
from django.http import HttpResponseRedirect

class MaintenanceMiddleware(object):
    def process_request(self, request):
        if getattr(settings, 'MAINTENANCE', False):
            # redirect to a static page (like /media/maintenancemode.html)
            return HttpResponseRedirect('/media/maintenancemode.html')
Further info: http://docs.djangoproject.com/en/1.2/topics/http/middleware/#topics-http-middleware
| Open/Close database connection in django | I am using Django and Postgresql as my DBMS.
I wish to set a setting that enables to enable/disable database connection. When the connection is set to closed (in settings.py) the site will display a message such as "meintanence mode" or something like that. Django will not show any db connection error message (or mail them to admins). It is appreciated if django do not try to connect to the database at all.
| [
"Maybe creating a middleware solves your problem. Put your new middleware \"maintenancemiddleware\" as the FIRST item of your settings.middleware tuple.\n# code not tested, only for demonstration.\n# maintenancemiddleware.py\nfrom django.conf.settings import MAINTENANCE\n\nclass MaintenanceMiddleware(object):\n def process_request(self, request):\n if MAINTENANCE:\n # redirect to a static url (like /media/maintenancemode.html)\n\nFurther info: http://docs.djangoproject.com/en/1.2/topics/http/middleware/#topics-http-middleware\n"
] | [
3
] | [] | [] | [
"database_connection",
"django",
"python"
] | stackoverflow_0002989066_database_connection_django_python.txt |
Q:
Facebook authentication with extended permission request
I'm wondering which authentication method to use if I need extended permissions (e.g. if I want to use the user's photos in my application). The methods are either single sign-on using the JavaScript SDK or the OAuth 2.0 protocol. Both methods are explained here: Authentication - Facebook developers. The JS SDK would be easiest, but I haven't found a solution yet.
The application will utilize the Google Appengine with Django environment.
A:
At the moment the entire Facebook API is undergoing changes.
I would suggest you use the OAuth API - it appears as this is the direction they will adopt.
A word of caution - the "official" documentation in the page you mentioned are not complete at best and some of the options are still buggy (try setting display=wap with scope=... you will get an error).
Check out the Facebook developer forum on the issue I stated above.
I believe you can find answers to other Facebook related issues there.
| Facebook authentication with extended permission request | I'm wondering which authentication method to use if I need extended permissions (e.g. if I want to use the user's photos in my application). The methods are either single sign-on using the JavaScript SDK or using the OAuth 2.0 protocol. Both methods are explained here: Authentication - Facebook developers. The JS SDK would be easiest, but I haven't found a solution yet.
The application will utilize the Google Appengine with Django environment.
| [
"At the moment the entire Facebook API is undergoing changes.\nI would suggest you use the OAuth API - it appears as this is the direction they will adopt. \nA word of caution - the \"official\" documentation in the page you mentioned are not complete at best and some of the options are still buggy (try setting display=wap with scope=... you will get an error). \nCheck out the Facebook developer forum on the issue I stated above.\nI believe you can find answers to other Facebook related issues there.\n"
] | [
1
] | [] | [] | [
"django",
"facebook",
"google_app_engine",
"javascript",
"python"
] | stackoverflow_0002988981_django_facebook_google_app_engine_javascript_python.txt |
Q:
Pure python implementation of greenlet API
The greenlet package is used by gevent and eventlet for asynchronous IO. It is written as a C-extension and therefore doesn't work with Jython or IronPython. If performance is of no concern, what is the easiest approach to implementing the greenlet API in pure Python.
A simple example:
def test1():
print 12
gr2.switch()
print 34
def test2():
print 56
gr1.switch()
print 78
gr1 = greenlet(test1)
gr2 = greenlet(test2)
gr1.switch()
Should print 12, 56, 34 (and not 78).
A:
This kind of thing can be achieved with co-routines which have been built-in to the standard Python distribution since version 2.5. If IronPython and co are fully compliant with all Python 2.5 features (I believe they are) you should be able to use this idiom.
See this post for more information on how they can be used :) Specifically, you'll be interested in the PDF where the author builds a system using nothing but pure Python that provides similar capabilities to either stackless Python or the Greenlet module.
You may also want to look at either Gogen or Kamelia for ideas: these projects both have pure python coroutine implementations which you could either adopt or use as a reference for your own implementation. Take a look at this page for a gentle introduction to the cogen way of doing things.
Note there are some differences between the co-routine implementations here and the greenlet implementation. The pure python implementations all use some kind of external scheduler but the idea is essentially the same: they provide you with a way to run lightweight, co-operative tasks without the need to resort to threads. Additionally both the frameworks linked to above are geared towards asynchronous IO very much like greenlet itself.
Here's the example you posted but rewritten using cogen:
from cogen.core.coroutines import coroutine
from cogen.core.schedulers import Scheduler
from cogen.core import events
@coroutine
def test1():
print 12
yield events.AddCoro(test2)
yield events.WaitForSignal(test1)
print 34
@coroutine
def test2():
print 56
yield events.Signal(test1)
yield events.WaitForSignal(test2)
print 78
sched = Scheduler()
sched.add(test1)
sched.run()
>>> 12
>>> 56
>>> 34
It's a little more explicit than the greenlet version (for example using WaitForSignal to explicitly create a resume point) but you should get the general idea.
edit: I just confirmed that this works using jython
KidA% jython test.py
12
56
34
A:
It's not possible to implement greenlet in pure Python.
UPDATE:
faking greenlet API with threads could be indeed doable, even if completely useless for all practical purposes
generators cannot be used for this as they only save the state of a single frame. Greenlets save the whole stack. This means gevent can use any protocol implemented on top of the standard socket (e.g. httplib and urllib2 modules). Generator-based frameworks require generators in all layers of your software, so httplib and tons of other packages are thrown away.
| Pure python implementation of greenlet API | The greenlet package is used by gevent and eventlet for asynchronous IO. It is written as a C-extension and therefore doesn't work with Jython or IronPython. If performance is of no concern, what is the easiest approach to implementing the greenlet API in pure Python.
A simple example:
def test1():
print 12
gr2.switch()
print 34
def test2():
print 56
gr1.switch()
print 78
gr1 = greenlet(test1)
gr2 = greenlet(test2)
gr1.switch()
Should print 12, 56, 34 (and not 78).
| [
"This kind of thing can be achieved with co-routines which have been built-in to the standard Python distribution since version 2.5. If IronPython and co are fully compliant with all Python 2.5 features (I believe they are) you should be able to use this idiom.\nSee this post for more information on how they can be used :) Specifically, you'll be interested in the PDF where the author builds a system using nothing but pure Python that provides similar capabilities to either stackless Python or the Greenlet module.\nYou may also want to look either Gogen or Kamelia for ideas: these projects both have pure python coroutine implementations which you could either adopt or use as a reference for your own implementation. Take a look at this page for a gentle introduction to the cogen way of doing things. \nNote there are some differences between the co-routine implementations here and the greenletimplementation. The pure python implementations all use some kind of external scheduler but the idea is essentially the same: they provide you with a way to run lightweight, co-operative tasks without the need to resort to threads. Additionally both the frameworks linked to above are geared towards asynchronous IO very much like greenlet itself. \nHere's the example you posted but rewritten using cogen:\nfrom cogen.core.coroutines import coroutine\nfrom cogen.core.schedulers import Scheduler\nfrom cogen.core import events\n\n@coroutine\ndef test1():\n print 12\n yield events.AddCoro(test2)\n yield events.WaitForSignal(test1)\n print 34\n\n@coroutine\ndef test2():\n print 56\n yield events.Signal(test1)\n yield events.WaitForSignal(test2)\n print 78\n\nsched = Scheduler()\nsched.add(test1)\nsched.run()\n\n>>> 12\n>>> 56\n>>> 34\n\nIt's a little more explicit than the greenlet version (for example using WaitForSignal to explicitly create a resume point) but you should get the general idea. 
\nedit: I just confirmed that this works using jython\nKidA% jython test.py \n12\n56\n34\n\n",
"It's not possible to implement greenlet in pure Python.\nUPDATE:\n\nfaking greenlet API with threads could be indeed doable, even if completely useless for all practical purposes\ngenerators cannot be used for this as they only save the state of a single frame. Greenlets save the whole stack. This means gevent can use any protocol implemented on top of the standard socket (e.g. httplib and urllib2 modules). Generator-based frameworks require generators in all layers of your software, so httplib and tons of other packages are thrown away.\n\n"
] | [
12,
10
] | [] | [] | [
"ironpython",
"jython",
"python"
] | stackoverflow_0002939678_ironpython_jython_python.txt |
Q:
cherrypy when to know that the server has started
I am trying to write some unit tests for a small web service written with CherryPy and I am wondering what's the best way to figure out that the server has started, so I don't get a connection refused error if I try to connect too early to the service?
A:
I got it figured out:
cherrypy.engine.start(); cherrypy.server.wait()
it's the way to go.
Otherwise, I think you can get away with some tricks with
cherrypy.server.bus.states
| cherrypy when to know that the server has started | I am trying to write some unit tests for a small web service written with CherryPy and I am wondering what's the best way to figure out that the server has started, so I don't get a connection refused error if I try to connect too early to the service?
| [
"I got it figured out:\ncherrypy.engine.start(); cherrypy.server.wait()\nit's the way to go.\nOtherwise, I think you can get away with some tricks with\ncherrypy.server.bus.states\n"
] | [
4
] | [] | [] | [
"cherrypy",
"python"
] | stackoverflow_0002988636_cherrypy_python.txt |
Q:
File copy completion?
In Linux, how can we know if a file has completed copying before reading it? In Windows, an OSError is raised.
A:
In Linux, you can open a file while another process is writing to it without Python throwing an OSError, so in general, you cannot know for sure whether the other side has finished writing into that file. You can try some hacks, though:
You can check the file size regularly to see whether it increased since the last check. If it hasn't increased in, say, five seconds, you might be safe to assume that the copy has finished. I'm saying might since this is not true in all circumstances. If the other process that is writing the file is blocked for whatever reason, it might temporarily stop writing to the file and resume it later. So this is not 100% fool-proof, but might work for local file copies if the system is never under a heavy load that would stall the writing process.
You can check the output of fuser (this is a shell command), which will list the process IDs for all the files that are holding a file handle to a given file name. If this list includes any process other than yours, you can assume that the copying process hasn't finished yet. However, you will have to make sure that fuser is installed on the target system in order to make it work.
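The first hack can be sketched as a simple stdlib polling loop. This is my own illustration, not an established API: the function name, interval, and check count are arbitrary choices, and as noted above a stalled writer can still fool it.

```python
import os
import time

def wait_until_stable(path, interval=1.0, checks=3):
    """Block until the file at `path` keeps the same size for
    `checks` consecutive polls taken `interval` seconds apart.
    Returns the final observed size. Heuristic only."""
    last = -1
    stable = 0
    while stable < checks:
        size = os.path.getsize(path)
        if size == last:
            stable += 1
        else:
            stable = 0
            last = size
        time.sleep(interval)
    return last

# e.g. wait_until_stable('/tmp/incoming.dat', interval=5, checks=2)
```

Tune `interval` and `checks` to the copy sizes and load you expect; a longer interval trades latency for fewer false positives.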
A:
You can use the inotify mechanisms (via pyinotify) to catch events like CREATE, WRITE, CLOSE and based on them you can assume wether the copy has finished or not.
However, since you provided no details on what you are trying to do, I can't tell if inotify would be suitable for you (btw, inotify is Linux-specific so you can't use it on Windows or other platforms)
| File copy completion? | In Linux, how can we know if a file has completed copying before reading it? In Windows, an OSError is raised.
| [
"In Linux, you can open a file while another process is writing to it without Python throwing an OSError, so in general, you cannot know for sure whether the other side has finished writing into that file. You can try some hacks, though:\n\nYou can check the file size regularly to see whether it increased since the last check. If it hasn't increased in, say, five seconds, you might be safe to assume that the copy has finished. I'm saying might since this is not true in all circumstances. If the other process that is writing the file is blocked for whatever reason, it might temporarily stop writing to the file and resume it later. So this is not 100% fool-proof, but might work for local file copies if the system is never under a heavy load that would stall the writing process.\nYou can check the output of fuser (this is a shell command), which will list the process IDs for all the files that are holding a file handle to a given file name. If this list includes any process other than yours, you can assume that the copying process hasn't finished yet. However, you will have to make sure that fuser is installed on the target system in order to make it work.\n\n",
"You can use the inotify mechanisms (via pyinotify) to catch events like CREATE, WRITE, CLOSE and based on them you can assume wether the copy has finished or not. \nHowever, since you provided no details on what are you trying to do, I can't tell if inotify would be suitable for you (btw, inotify is Linux specific so you can't use it on Windows or other platforms)\n"
] | [
1,
1
] | [] | [] | [
"file_io",
"python"
] | stackoverflow_0002989388_file_io_python.txt |
Q:
stdout and stderr anomalies
from the interactive prompt:
>>> import sys
>>> sys.stdout.write('is the')
is the6
what is '6' doing there?
another example:
>>> for i in range(3):
... sys.stderr.write('new black')
...
9
9
9
new blacknew blacknew black
where are the numbers coming from?
A:
In 3.x the write method of a file object returns the number of bytes written, and the interactive prompt prints out the return value of whatever you are running. So you print out 'is the' (6 bytes), and the interpreter then prints out 6 (the return from write). See the relevant docs for 3.1.
This does not happen before 3.0 as the write method returned None, and therefore nothing was printed.
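You can reproduce the same return value without the interactive shell, using an in-memory text stream; the number is simply the count of characters written:

```python
import io

buf = io.StringIO()            # behaves like sys.stdout's text interface in 3.x
print(buf.write('is the'))     # 6  -- the "mystery" digit from the prompt
print(buf.write('new black'))  # 9  -- the repeated number in the loop example
```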
A:
They are the return values from write, printed by the interactive shell.
Try
>>> 3
what happens?
A:
This happens on Python3, but not Python2.
Mike is correct that the write in Python3 is returning the number of bytes written, which is then being printed by the interactive shell.
the write in Python2 returned None, so nothing was printed
| stdout and stderr anomalies | from the interactive prompt:
>>> import sys
>>> sys.stdout.write('is the')
is the6
what is '6' doing there?
another example:
>>> for i in range(3):
... sys.stderr.write('new black')
...
9
9
9
new blacknew blacknew black
where are the numbers coming from?
| [
"In 3.x the write method of a file object returns the number of bytes written, and the interactive prompt prints out the return value of whatever you are running. So you print out 'is the' (6 bytes), and the interpreter then prints out 6 (the return from write). See the relevant docs for 3.1.\nThis does not happen before 3.0 as the write method returned None, and therefore nothing was printed.\n",
"They are the return values from write, printed by the interactive shell.\nTry \n>>> 3\n\nwhat happens?\n",
"This happens on Python3, but not Python2.\nMike is correct that the write in Python3 is returning the number of bytes written, which is then being printed by the interactive shell.\nthe write in Python2 returned None, so nothing was printed\n"
] | [
2,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0002989591_python.txt |
Q:
How to make string from regex and value of group
I have a regexp for a Twitter profile URL and someone's Twitter profile URL. I can easily extract the username from the URL.
>>> twitter_re = re.compile('twitter.com/(?P<username>\w+)/')
>>> twitter_url = 'twitter.com/dir01/'
>>> username = twitter_re.search(twitter_url).groups()[0]
>>> _
'dir01'
But if I have the regexp and a username, how do I get the URL?
A:
Regexen are no two-way street. You can use them for parsing strings, but not for generating strings back from the result. You should probably look into another way of getting the URLs back, like basic string interpolation, or URI templates (see http://code.google.com/p/uri-templates/)
A:
If you are not looking for a general solution to convert any regex into a formatting string, but something that you can hardcode:
twitter_url = 'twitter.com/%(username)s/' % {'username': 'dir01'}
...should give you what you need.
If you want a more general (but not incredibly robust solution):
import re
def format_to_re(format):
# Replace Python string formatting syntax with named group re syntax.
return re.compile(re.sub(r'%\((\w+)\)s', r'(?P<\1>\w+)', format))
twitter_format = 'twitter.com/%(username)s/'
twitter_re = format_to_re(twitter_format)
m = twitter_re.search('twitter.com/dir01/')
print m.groupdict()
print twitter_format % m.groupdict()
Gives me:
{'username': 'dir01'}
twitter.com/dir01/
And finally, the slightly larger and more complete solution that I have been using myself can be found in the Pattern class here.
| How to make string from regex and value of group | I have a regexp for a Twitter profile URL and someone's Twitter profile URL. I can easily extract the username from the URL.
>>> twitter_re = re.compile('twitter.com/(?P<username>\w+)/')
>>> twitter_url = 'twitter.com/dir01/'
>>> username = twitter_re.search(twitter_url).groups()[0]
>>> _
'dir01'
But if I have the regexp and a username, how do I get the URL?
| [
"Regexen are no two-way street. You can use them for parsing strings, but not for generating strings back from the result. You should probably look into another way of getting the URLs back, like basic string interpolation, or URI templates (see http://code.google.com/p/uri-templates/)\n",
"If you are not looking for a general solution to convert any regex into a formatting string, but something that you can hardcode:\ntwitter_url = 'twitter.com/%(username)s/' % {'username': 'dir01'}\n\n...should give you what you need.\nIf you want a more general (but not incredibly robust solution):\nimport re\n\ndef format_to_re(format):\n # Replace Python string formatting syntax with named group re syntax.\n return re.compile(re.sub(r'%\\((\\w+)\\)s', r'(?P<\\1>\\w+)', format))\n\ntwitter_format = 'twitter.com/%(username)s/'\ntwitter_re = format_to_re(twitter_format)\n\nm = twitter_re.search('twitter.com/dir01/')\nprint m.groupdict()\nprint twitter_format % m.groupdict()\n\nGives me:\n{'username': 'dir01'}\ntwitter.com/dir01/\n\nAnd finally, the slightly larger and more complete solution that I have been using myself can be found in the Pattern class here.\n"
] | [
1,
0
] | [
"Why do you need the regex for that - just append the strings.\nbase_url = \"twitter.com/\"\ntwt_handle = \"dir01\"\ntwit_url = base_url + twt_handle\n\n"
] | [
-1
] | [
"python",
"regex",
"replace"
] | stackoverflow_0002989715_python_regex_replace.txt |
Q:
Django: How to write the reverse function for the following
The URLconf and view are as follows:
url(r'^register/$',
register,
{ 'backend': 'registration.backends.default.DefaultBackend' },
name='registration_register'),
def register(request, backend, success_url=None, form_class=None,
disallowed_url='registration_disallowed',
template_name='registration/registration_form.html',
extra_context=None):
What i want to do is redirect users to the register page and specify a success_url. I tried reverse('registration.views.register', kwargs={'success_url':'/test/' }) but that doesn't seem to work. I've been trying for hours and can't get my mind around getting it right. Thanks
A:
If you want to be able to specify reverse() with parameters, those parameters have to be defined in the URL configuration itself (regexp). Something like:
url(r'^register/(?P<success_url>[\w\/]+)/$',
register,
{ 'backend': 'registration.backends.default.DefaultBackend' },
name='registration_register'),
You can wrap that URL section in ()? to make it optional (So that it matches just simple register/ too)
The difference between args and kwargs is that with args you can specify unnamed/named URL params while with kwargs only named.
So:
r'^register/(?P<success_url>\w+)/$'
reverse('url_name', args=[my_success_url])
reverse('url_name', kwargs={'success_url': my_success_url}) // both work and do the same
r'^register/(\w+)/$'
reverse('url_name', args=[my_success_url]) // only this works
Edit:
For success_url params, if you want to be able to match any full relative URL, including possible GET params in the relative URL, the actual regexp could get pretty complex.
Something like (untested):
r'^register/(?P<success_url>[\w\/]+(((\?)([a-zA-Z]*=\w*)){1}((&)([a-zA-Z]*=\w*))*)?)/$'
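As a sanity check, the simpler named-group pattern from above can be exercised with plain re outside Django (the URLs here are made up for illustration):

```python
import re

# the URL pattern from the answer, tested as an ordinary regex
pat = re.compile(r'^register/(?P<success_url>[\w\/]+)/$')

print(pat.match('register/test/').group('success_url'))  # test
print(pat.match('register/a/b/').group('success_url'))   # a/b
print(pat.match('register/'))                            # None (group needs 1+ chars)
```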
A:
Edit: Sorry, completely misread the question - I didn't look at the function definition. Actually, the issue here is that your URLconf is designed in such a way as to make it impossible to set the success_url dynamically. It has to be passed explicitly to the function via the extra_context dictionary - ie the one where you have currently defined backend. Since there is nothing in the URL itself to accept this parameter, it has to be hard-coded there.
| Django: How to write the reverse function for the following | The URLconf and view are as follows:
url(r'^register/$',
register,
{ 'backend': 'registration.backends.default.DefaultBackend' },
name='registration_register'),
def register(request, backend, success_url=None, form_class=None,
disallowed_url='registration_disallowed',
template_name='registration/registration_form.html',
extra_context=None):
What i want to do is redirect users to the register page and specify a success_url. I tried reverse('registration.views.register', kwargs={'success_url':'/test/' }) but that doesn't seem to work. I've been trying for hours and can't get my mind around getting it right. Thanks
| [
"If you want to be able to specify reverse() with parameters, those parameters have to be defined in the URL configuration itself (regexp). Something like:\nurl(r'^register/(?P<success_url>[\\w\\/]+)/$',\n register,\n { 'backend': 'registration.backends.default.DefaultBackend' },\n name='registration_register'),\n\nYou can wrap that URL section in ()? to make it optional (So that it matches just simple register/ too)\nThe difference between args and kwargs is that with args you can specify unnamed/named URL params while with kwargs only named.\nSo:\nr'^register/(?P<success_url>\\w+)/$'\nreverse('url_name', args=[my_success_url])\nreverse('url_name', kwargs={'success_url': my_success_url}) // both work and do the same\n\nr'^register/(\\w+)/$'\nreverse('url_name', args=[my_success_url]) // only this works\n\nEdit:\nFor success_url params, if you want to be able to match any full relative URL, including possible GET params in the relative URL, the actual regexp could get pretty complex.\nSomething like (untested):\nr'^register/(?P<success_url>[\\w\\/]+(((\\?)([a-zA-Z]*=\\w*)){1}((&)([a-zA-Z]*=\\w*))*)?)/$'\n\n",
"Edit: Sorry, completely misread the question - I didn't look at the function definition. Actually, the issue here is that your URLconf is designed in such a way as to make it impossible to set the success_url dynamically. It has to be passed explicitly to the function via the extra_context dictionary - ie the one where you have currently defined backend. Since there is nothing in the URL itself to accept this parameter, it has to be hard-coded there.\n"
] | [
1,
0
] | [] | [] | [
"django",
"keyword_argument",
"python",
"reverse",
"url"
] | stackoverflow_0002988978_django_keyword_argument_python_reverse_url.txt |
Q:
How can I find all the possible combinations of a list of lists (in Python)?
I have the following structure in Python:
letters = [['a', 'b', 'c'], ['p', 'q', 'r', 's'], ['j', 'k', 'l']]
I would like to find all the possible combinations of letters in the order that they currently exist. For the example above this would be:
apj
apk
apl
aqj
aqk
aql
...
csk
csl
This seems like it should be a very simple thing to do but I cannot figure it out.
A:
In Python 2.6 or newer you can use itertools.product:
>>> import itertools
>>> map(''.join, itertools.product(*letters))
apj
apk
apl
aqj
aqk
aql
...etc...
csk
csl
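In Python 3, map returns a lazy iterator, so a list comprehension may be clearer when you want the whole result at once:

```python
import itertools

letters = [['a', 'b', 'c'], ['p', 'q', 'r', 's'], ['j', 'k', 'l']]
combos = [''.join(p) for p in itertools.product(*letters)]

print(combos[0], combos[-1])  # apj csl
print(len(combos))            # 3 * 4 * 3 = 36
```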
| How can I find all the possible combinations of a list of lists (in Python)? | I have the following structure in Python:
letters = [['a', 'b', 'c'], ['p', 'q', 'r', 's'], ['j', 'k', 'l']]
I would like to find all the possible combinations of letters in the order that they currently exist. For the example above this would be:
apj
apk
apl
aqj
aqk
aql
...
csk
csl
This seems like it should be a very simple thing to do but I cannot figure it out.
| [
"In Python 2.6 or newer you can use itertools.product:\n>>> import itertools\n>>> map(''.join, itertools.product(*letters))\napj\napk\napl\naqj\naqk\naql\n...etc...\ncsk\ncsl\n\n"
] | [
14
] | [] | [] | [
"python"
] | stackoverflow_0002990003_python.txt |
Q:
Problems with sys.stdout.write() with time.sleep() in a function
What I wanted is to print out 5 dots, one dot per second, using time.sleep(), but the result was that all 5 dots were printed at once after a 5 second delay.
I tried both print and sys.stdout.write; same result.
Thanks for any advice.
import time
import sys
def wait_for(n):
"""Wait for {n} seconds. {n} should be an integer greater than 0."""
if not isinstance(n, int):
print 'n in wait_for(n) should be an integer.'
return
elif n < 1:
print 'n in wait_for(n) should be greater than 0.'
return
for i in range(0, n):
sys.stdout.write('.')
time.sleep(1)
sys.stdout.write('\n')
def main():
wait_for(5) # FIXME: doesn't work as expected
if __name__ == '__main__':
try:
main()
except KeyboardInterrupt:
print '\nAborted.'
A:
You need to flush after writing.
sys.stdout.write('foo')
sys.stdout.flush()
wastetime()
sys.stdout.write('bar')
sys.stdout.flush()
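Applied to the question's wait_for, a minimal sketch looks like this. The stream and delay parameters are my additions (not in the original), so the function can be pointed at any file-like object:

```python
import sys
import time

def wait_for(n, stream=sys.stdout, delay=1.0):
    """Write one dot per `delay` seconds; flush() pushes each dot
    out of the buffer immediately instead of at program exit."""
    for _ in range(n):
        stream.write('.')
        stream.flush()
        time.sleep(delay)
    stream.write('\n')

wait_for(5, delay=0.2)  # five dots, one every 0.2 s
```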
A:
You should use sys.stderr.write for progress bars; stderr has the (not at all coincidental) advantage of not being buffered, so no sys.stderr.flush calls are needed.
See also this answer.
| Problems with sys.stdout.write() with time.sleep() in a function | What I wanted is to print out 5 dots, one dot per second, using time.sleep(), but the result was that all 5 dots were printed at once after a 5 second delay.
I tried both print and sys.stdout.write; same result.
Thanks for any advice.
import time
import sys
def wait_for(n):
"""Wait for {n} seconds. {n} should be an integer greater than 0."""
if not isinstance(n, int):
print 'n in wait_for(n) should be an integer.'
return
elif n < 1:
print 'n in wait_for(n) should be greater than 0.'
return
for i in range(0, n):
sys.stdout.write('.')
time.sleep(1)
sys.stdout.write('\n')
def main():
wait_for(5) # FIXME: doesn't work as expected
if __name__ == '__main__':
try:
main()
except KeyboardInterrupt:
print '\nAborted.'
| [
"You need to flush after writing.\nsys.stdout.write('foo')\nsys.stdout.flush()\nwastetime()\nsys.stdout.write('bar')\nsys.stdout.flush()\n\n",
"You should use sys.stderr.write for progress bars; stderr has the (not at all coincidental) advantage of not being buffered, so no sys.stderr.flush calls are needed.\nSee also this answer.\n"
] | [
9,
4
] | [] | [] | [
"buffering",
"python"
] | stackoverflow_0002808832_buffering_python.txt |
Q:
python on the web, does it compile down to bytecode or is it more like php?
Does python compile down to some byte code or is it rendered on the fly each time like php/asp?
From my reading, I understand it has its own byte code format, so I figured it was like Java/.NET where it compiles into an intermediate language/byte code.
So it is more efficient in that respect than PHP, right?
A:
Given a language X, and a way the server can be aware of it (a module or whatever) or a proper "intermediate" CGI program mX, this mX can be programmed so that it indeed interprets directly plain text script in X (like php), or bytecode compiled code (originally written in X). So, provided the existance of the proper mX, it could be both options. But I think the most common one is the same as php and asp.
Coping with bytecodes can be more efficient than interpreting scripts (even though modern interpreters are not implemented in the simple way and use "tricks" to boost performance)
A:
You can easily see the bytecode:
>>> import dis
>>> dis.dis(lambda x: x*2)
1 0 LOAD_FAST 0 (x)
3 LOAD_CONST 0 (2)
6 BINARY_MULTIPLY
7 RETURN_VALUE
>>>
A:
You can compile python to PYC ( http://effbot.org/zone/python-compile.htm ). Just executing a python script, however, does not do this compilation automatically, so a CGI web setup with python will not automatically compile.
A:
When you use a module in Python, it (generally) gets compiled if it hasn't been already. For example, if you have a Django app deployed with just .py files, they'll get compiled (and output as .pyc files) as the modules are imported by the app.
A:
Python modules are 'compiled' to .pyc files when they are imported. This isn't the same as compiling Java or .NET though, what's left has an almost 1-1 correspondence with your source; it means the file doesn't have to be parsed next time, but that's about all.
You can use the compile or compileall modules to pre-compile a bundle of scripts, or to compile a script which wouldn't otherwise be compiled. I don't think that running a script from the command line (or from CGI) would use the .pyc though.
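For single files, the stdlib module that performs this pre-compilation is py_compile (compileall wraps it for whole directory trees). A throwaway sketch, with disposable file names of my own choosing:

```python
import os
import py_compile
import tempfile

# write a disposable module and byte-compile it explicitly
src = os.path.join(tempfile.mkdtemp(), 'hello.py')
with open(src, 'w') as f:
    f.write("print('hello')\n")

pyc = py_compile.compile(src)  # returns the path of the .pyc it wrote
print(os.path.exists(pyc))     # True (under __pycache__ on Python 3)
```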
| python on the web, does it compile down to bytecode or is it more like php? | Does python compile down to some byte code or is it rendered on the fly each time like php/asp?
From my reading, I understand it has its own byte code format, so I figured it was like Java/.NET where it compiles into an intermediate language/byte code.
So it is more efficient in that respect than PHP, right?
| [
"Given a language X, and a way the server can be aware of it (a module or whatever) or a proper \"intermediate\" CGI program mX, this mX can be programmed so that it indeed interprets directly plain text script in X (like php), or bytecode compiled code (originally written in X). So, provided the existance of the proper mX, it could be both options. But I think the most common one is the same as php and asp.\nCoping with bytecodes can be more efficient than interpreting scripts (even though modern interpreters are not implemented in the simple way and use \"tricks\" to boost performance)\n",
"You can easily see the bytecode:\n>>> import dis\n>>> dis.dis(lambda x: x*2)\n 1 0 LOAD_FAST 0 (x)\n 3 LOAD_CONST 0 (2)\n 6 BINARY_MULTIPLY \n 7 RETURN_VALUE \n>>> \n\n",
"You can compile python to PYC ( http://effbot.org/zone/python-compile.htm ). Just executing a python script, however, does not do this compilation automatically, so a CGI web setup with python will not automatically compile.\n",
"When you use a module in Python, it (generally) gets compiled if it hasn't been already. For example, if you have a Django app deployed with just .py files, they'll get compiled (and output as .pyc files) as the modules are imported by the app.\n",
"Python modules are 'compiled' to .pyc files when they are imported. This isn't the same as compiling Java or .NET though, what's left has an almost 1-1 correspondence with your source; it means the file doesn't have to be parsed next time, but that's about all.\nYou can use the compile or compileall modules to pre-compile a bundle of scripts, or to compile a script which wouldn't otherwise be compiled. I don't think that running a script from the command line (or from CGI) would use the .pyc though.\n"
] | [
1,
1,
0,
0,
0
] | [] | [] | [
"fastcgi",
"python"
] | stackoverflow_0002990301_fastcgi_python.txt |
Q:
Is there a recommended command for "hg bisect --command"?
I have an emergent bug that I've got to track down tomorrow. I know a previous hg revision which was good so I'm thinking about using hg bisect.
However, I'm on Windows and don't want to get into DOS scripting.
Ideally, I'd be able to write a Python unit test and have hg bisect use that. This is my first attempt.
bisector.py
#!/usr/bin/env python
import sys
import unittest
class TestCase(unittest.TestCase):
def test(self):
#raise Exception('Exception for testing.')
#self.fail("Failure for testing.")
pass
def main():
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestCase)
result = unittest.TestResult()
suite.run(result)
if result.errors:
# Skip the revision
return 125
if result.wasSuccessful():
return 0
else:
return 1
if '__main__' == __name__:
sys.exit(main())
Perhaps I could then run:
hg bisect --reset
hg bisect --bad
hg bisect --good -r 1
hg bisect --command=bisector.py
Is there a better way of doing it? Thanks for any advice.
A:
Thanks to all, especially to Will McCutchen.
The solution that worked best is below.
bisector.py
#!/usr/bin/env python
import unittest
class TestCase(unittest.TestCase):
def test(self):
# Raise an assertion error to mark the revision as bad
pass
if '__main__' == __name__:
unittest.main()
The hard part was getting the hg bisect commands right:
hg update tip
hg bisect --reset
hg bisect --bad
hg bisect --good 0
hg bisect --command ./bisector.py
or on Windows, the last command is:
hg bisect --command bisector.py
A:
I think you can remove your main() function and use the following block to run the tests:
if __name__ == '__main__':
unittest.main()
The call to unittest.main() will run the tests it finds in this file and exit with an appropriate status code depending on whether all the tests pass or fail.
A:
In case you have some unix tools at your disposal, note that 'grep' sets its exit status in a useful way. So if your unit test prints "PASS" when it passes, you can do:
hg bisect -c './unittest | grep PASS'
and that'll work pretty darn well.
| Is there a recommended command for "hg bisect --command"? | I have an emergent bug that I've got to track down tomorrow. I know a previous hg revision which was good so I'm thinking about using hg bisect.
However, I'm on Windows and don't want to get into DOS scripting.
Ideally, I'd be able to write a Python unit test and have hg bisect use that. This is my first attempt.
bisector.py
#!/usr/bin/env python
import sys
import unittest
class TestCase(unittest.TestCase):
def test(self):
#raise Exception('Exception for testing.')
#self.fail("Failure for testing.")
pass
def main():
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestCase)
result = unittest.TestResult()
suite.run(result)
if result.errors:
# Skip the revision
return 125
if result.wasSuccessful():
return 0
else:
return 1
if '__main__' == __name__:
sys.exit(main())
Perhaps I could then run:
hg bisect --reset
hg bisect --bad
hg bisect --good -r 1
hg bisect --command=bisector.py
Is there a better way of doing it? Thanks for any advice.
| [
"Thanks to all, especially to Will McCutchen.\nThe solution that worked best is below.\nbisector.py\n#!/usr/bin/env python\n\nimport unittest\n\nclass TestCase(unittest.TestCase):\n\n def test(self):\n # Raise an assertion error to mark the revision as bad\n pass\n\n\nif '__main__' == __name__:\n unittest.main()\n\nThe hard part was getting the hg bisect commands right:\nhg update tip\nhg bisect --reset\nhg bisect --bad\nhg bisect --good 0\nhg bisect --command ./bisector.py\n\nor on Windows, the last command is:\nhg bisect --command bisector.py\n\n",
"I think you can remove your main() function and use the following block to run the tests:\nif __name__ == '__main__':\n unittest.main()\n\nThe call to unittest.main() will run the tests it finds in this file and exit with an appropriate status code depending on whether all the tests pass or fail.\n",
"In case you have some unix tools at your disposal, note that 'grep' sets its exit status in a useful way. So if your unit test prints \"PASS\" when it passes, you can do:\nhg bisect -c './unittest | grep PASS'\n\nand that'll work pretty darn well.\n"
] | [
10,
4,
1
] | [] | [] | [
"mercurial",
"python"
] | stackoverflow_0002511704_mercurial_python.txt |
Q:
Python - Polymorphism in wxPython, What's wrong?
I am trying to write a simple custom button in wx.Python. My code is as follows, an error is thrown on line 19 of my "Custom_Button.py" - What is going on? I can find no help online for this error and have a suspicion that it has to do with the Polymorphism. (As a side note: I am relatively new to python having come from C++ and C# any help on syntax and function of the code would be great! - knowing that, it could be a simple error. thanks!)
Error
def __init__(self, parent, id=-1, NORM_BMP, PUSH_BMP, MOUSE_OVER_BMP, **kwargs):
SyntaxError: non-default argument follows default argument
Main.py
class MyFrame(wx.Frame):
def __init__(self, parent, ID, title):
wxFrame.__init__(self, parent, ID, title,
wxDefaultPosition, wxSize(400, 400))
self.CreateStatusBar()
self.SetStatusText("Program testing custom button overlays")
menu = wxMenu()
menu.Append(ID_ABOUT, "&About", "More information about this program")
menu.AppendSeparator()
menu.Append(ID_EXIT, "E&xit", "Terminate the program")
menuBar = wxMenuBar()
menuBar.Append(menu, "&File");
self.SetMenuBar(menuBar)
self.Button1 = Custom_Button(self, parent, -1,
"D:/Documents/Python/Normal.bmp",
"D:/Documents/Python/Clicked.bmp",
"D:/Documents/Python/Over.bmp",
"None", wx.Point(200,200), wx.Size(300,100))
EVT_MENU(self, ID_ABOUT, self.OnAbout)
EVT_MENU(self, ID_EXIT, self.TimeToQuit)
def OnAbout(self, event):
dlg = wxMessageDialog(self, "Testing the functions of custom "
"buttons using pyDev and wxPython",
"About", wxOK | wxICON_INFORMATION)
dlg.ShowModal()
dlg.Destroy()
def TimeToQuit(self, event):
self.Close(true)
class MyApp(wx.App):
def OnInit(self):
frame = MyFrame(NULL, -1, "wxPython | Buttons")
frame.Show(true)
self.SetTopWindow(frame)
return true
app = MyApp(0)
app.MainLoop()
Custom Button
import wx
from wxPython.wx import *
class Custom_Button(wx.PyControl):
############################################
##THE ERROR IS BEING THROWN SOME WHERE IN HERE ##
############################################
# The BMP's
Mouse_over_bmp = wx.Bitmap(0) # When the mouse is over
Norm_bmp = wx.Bitmap(0) # The normal BMP
Push_bmp = wx.Bitmap(0) # The down BMP
Pos_bmp = wx.Point(0,0) # The position of the button
def __init__(self, parent, NORM_BMP, PUSH_BMP, MOUSE_OVER_BMP, text="",
pos, size, id=-1, **kwargs):
wx.PyControl.__init__(self,parent, id, **kwargs)
# Set the BMP's to the ones given in the constructor
self.Mouse_over_bmp = wx.Bitmap(MOUSE_OVER_BMP)
self.Norm_bmp = wx.Bitmap(NORM_BMP)
self.Push_bmp = wx.Bitmap(PUSH_BMP)
self.Pos_bmp = pos
############################################
##THE ERROR IS BEING THROWN SOME WHERE IN HERE ##
############################################
self.Bind(wx.EVT_LEFT_DOWN, self._onMouseDown)
self.Bind(wx.EVT_LEFT_UP, self._onMouseUp)
self.Bind(wx.EVT_LEAVE_WINDOW, self._onMouseLeave)
self.Bind(wx.EVT_ENTER_WINDOW, self._onMouseEnter)
self.Bind(wx.EVT_ERASE_BACKGROUND,self._onEraseBackground)
self.Bind(wx.EVT_PAINT,self._onPaint)
self._mouseIn = self._mouseDown = False
def _onMouseEnter(self, event):
self._mouseIn = True
def _onMouseLeave(self, event):
self._mouseIn = False
def _onMouseDown(self, event):
self._mouseDown = True
def _onMouseUp(self, event):
self._mouseDown = False
self.sendButtonEvent()
def sendButtonEvent(self):
event = wx.CommandEvent(wx.wxEVT_COMMAND_BUTTON_CLICKED, self.GetId())
event.SetInt(0)
event.SetEventObject(self)
self.GetEventHandler().ProcessEvent(event)
def _onEraseBackground(self,event):
# reduce flicker
pass
def _onPaint(self, event):
dc = wx.BufferedPaintDC(self)
dc.SetFont(self.GetFont())
dc.SetBackground(wx.Brush(self.GetBackgroundColour()))
dc.Clear()
dc.DrawBitmap(self.Norm_bmp)
# draw whatever you want to draw
# draw glossy bitmaps e.g. dc.DrawBitmap
if self._mouseIn: # If the Mouse is over the button
dc.DrawBitmap(self, self.Mouse_over_bmp, self.Pos_bmp, useMask=False)
if self._mouseDown: # If the Mouse clicks the button
dc.DrawBitmap(self, self.Push_bmp, self.Pos_bmp, useMask=False)
A:
In function definitions, arguments with default values need to be listed after arguments without defaults, but before *args and **kwargs expansions
Before:
def __init__(self, parent, id=-1, NORM_BMP, PUSH_BMP, MOUSE_OVER_BMP, text="",
pos, size, **kwargs)
Corrected:
def __init__(self, parent, NORM_BMP, PUSH_BMP, MOUSE_OVER_BMP,
pos, size, id=-1, text="", **kwargs)
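The rule is easy to check outside wxPython with a plain function (make_button is a hypothetical stand-in for the control's constructor, not wxPython API): non-default parameters come first, then defaulted ones, then **kwargs.

```python
# Non-default parameters first, then defaulted ones, then **kwargs:
# reordering them the other way reproduces the SyntaxError above.
def make_button(parent, norm_bmp, push_bmp, over_bmp,
                pos, size, id=-1, text="", **kwargs):
    return {"pos": pos, "size": size, "id": id,
            "text": text, "extra": kwargs}

btn = make_button("frame", "Normal.bmp", "Clicked.bmp", "Over.bmp",
                  (200, 200), (300, 100), text="OK", style=1)
print(btn["id"], btn["text"], btn["extra"])  # -1 OK {'style': 1}
```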
| Python - Polymorphism in wxPython, What's wrong? | I am trying to write a simple custom button in wx.Python. My code is as follows, an error is thrown on line 19 of my "Custom_Button.py" - What is going on? I can find no help online for this error and have a suspicion that it has to do with the Polymorphism. (As a side note: I am relatively new to python having come from C++ and C# any help on syntax and function of the code would be great! - knowing that, it could be a simple error. thanks!)
Error
def __init__(self, parent, id=-1, NORM_BMP, PUSH_BMP, MOUSE_OVER_BMP, **kwargs):
SyntaxError: non-default argument follows default argument
Main.py
class MyFrame(wx.Frame):
def __init__(self, parent, ID, title):
wxFrame.__init__(self, parent, ID, title,
wxDefaultPosition, wxSize(400, 400))
self.CreateStatusBar()
self.SetStatusText("Program testing custom button overlays")
menu = wxMenu()
menu.Append(ID_ABOUT, "&About", "More information about this program")
menu.AppendSeparator()
menu.Append(ID_EXIT, "E&xit", "Terminate the program")
menuBar = wxMenuBar()
menuBar.Append(menu, "&File");
self.SetMenuBar(menuBar)
self.Button1 = Custom_Button(self, parent, -1,
"D:/Documents/Python/Normal.bmp",
"D:/Documents/Python/Clicked.bmp",
"D:/Documents/Python/Over.bmp",
"None", wx.Point(200,200), wx.Size(300,100))
EVT_MENU(self, ID_ABOUT, self.OnAbout)
EVT_MENU(self, ID_EXIT, self.TimeToQuit)
def OnAbout(self, event):
dlg = wxMessageDialog(self, "Testing the functions of custom "
"buttons using pyDev and wxPython",
"About", wxOK | wxICON_INFORMATION)
dlg.ShowModal()
dlg.Destroy()
def TimeToQuit(self, event):
self.Close(true)
class MyApp(wx.App):
def OnInit(self):
frame = MyFrame(NULL, -1, "wxPython | Buttons")
frame.Show(true)
self.SetTopWindow(frame)
return true
app = MyApp(0)
app.MainLoop()
Custom Button
import wx
from wxPython.wx import *
class Custom_Button(wx.PyControl):
############################################
##THE ERROR IS BEING THROWN SOME WHERE IN HERE ##
############################################
# The BMP's
Mouse_over_bmp = wx.Bitmap(0) # When the mouse is over
Norm_bmp = wx.Bitmap(0) # The normal BMP
Push_bmp = wx.Bitmap(0) # The down BMP
Pos_bmp = wx.Point(0,0) # The position of the button
def __init__(self, parent, NORM_BMP, PUSH_BMP, MOUSE_OVER_BMP, text="",
pos, size, id=-1, **kwargs):
wx.PyControl.__init__(self,parent, id, **kwargs)
# Set the BMP's to the ones given in the constructor
self.Mouse_over_bmp = wx.Bitmap(MOUSE_OVER_BMP)
self.Norm_bmp = wx.Bitmap(NORM_BMP)
self.Push_bmp = wx.Bitmap(PUSH_BMP)
self.Pos_bmp = pos
############################################
##THE ERROR IS BEING THROWN SOME WHERE IN HERE ##
############################################
self.Bind(wx.EVT_LEFT_DOWN, self._onMouseDown)
self.Bind(wx.EVT_LEFT_UP, self._onMouseUp)
self.Bind(wx.EVT_LEAVE_WINDOW, self._onMouseLeave)
self.Bind(wx.EVT_ENTER_WINDOW, self._onMouseEnter)
self.Bind(wx.EVT_ERASE_BACKGROUND,self._onEraseBackground)
self.Bind(wx.EVT_PAINT,self._onPaint)
self._mouseIn = self._mouseDown = False
def _onMouseEnter(self, event):
self._mouseIn = True
def _onMouseLeave(self, event):
self._mouseIn = False
def _onMouseDown(self, event):
self._mouseDown = True
def _onMouseUp(self, event):
self._mouseDown = False
self.sendButtonEvent()
def sendButtonEvent(self):
event = wx.CommandEvent(wx.wxEVT_COMMAND_BUTTON_CLICKED, self.GetId())
event.SetInt(0)
event.SetEventObject(self)
self.GetEventHandler().ProcessEvent(event)
def _onEraseBackground(self,event):
# reduce flicker
pass
def _onPaint(self, event):
dc = wx.BufferedPaintDC(self)
dc.SetFont(self.GetFont())
dc.SetBackground(wx.Brush(self.GetBackgroundColour()))
dc.Clear()
dc.DrawBitmap(self.Norm_bmp)
# draw whatever you want to draw
# draw glossy bitmaps e.g. dc.DrawBitmap
if self._mouseIn: # If the Mouse is over the button
dc.DrawBitmap(self, self.Mouse_over_bmp, self.Pos_bmp, useMask=False)
if self._mouseDown: # If the Mouse clicks the button
dc.DrawBitmap(self, self.Push_bmp, self.Pos_bmp, useMask=False)
| [
"In function definitions, arguments with default values need to be listed after arguments without defaults, but before *args and **kwargs expansions\nBefore:\ndef __init__(self, parent, id=-1, NORM_BMP, PUSH_BMP, MOUSE_OVER_BMP, text=\"\", \n pos, size, **kwargs)\n\nCorrected:\ndef __init__(self, parent, NORM_BMP, PUSH_BMP, MOUSE_OVER_BMP, \n pos, size, id=-1, text=\"\", **kwargs)\n\n"
] | [
3
] | [] | [] | [
"polymorphism",
"pydev",
"python",
"wxpython"
] | stackoverflow_0002990446_polymorphism_pydev_python_wxpython.txt |
Q:
Many-to-one relationship in SQLAlchemy
This is a beginner-level question.
I have a catalog of mtypes:
mtype_id name
1 'mtype1'
2 'mtype2'
[etc]
and a catalog of Objects, which must have an associated mtype:
obj_id mtype_id name
1 1 'obj1'
2 1 'obj2'
3 2 'obj3'
[etc]
I am trying to do this in SQLAlchemy by creating the following schemas:
mtypes_table = Table('mtypes', metadata,
Column('mtype_id', Integer, primary_key=True),
Column('name', String(50), nullable=False, unique=True),
)
objs_table = Table('objects', metadata,
Column('obj_id', Integer, primary_key=True),
Column('mtype_id', None, ForeignKey('mtypes.mtype_id')),
Column('name', String(50), nullable=False, unique=True),
)
mapper(MType, mtypes_table)
mapper(MyObject, objs_table,
properties={'mtype':Relationship(MType, backref='objs', cascade="all, delete-orphan")}
)
When I try to add a simple element like:
mtype1 = MType('mtype1')
obj1 = MyObject('obj1')
obj1.mtype=mtype1
session.add(obj1)
I get the error:
AttributeError: 'NoneType' object has no attribute 'cascade_iterator'
Any ideas?
A:
Have you tried:
Column('mtype_id', ForeignKey('mtypes.mtype_id')),
instead of:
Column('mtype_id', None, ForeignKey('mtypes.mtype_id')),
See also: https://docs.sqlalchemy.org/en/13/core/constraints.html
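For reference, the schema those Table() definitions describe boils down to the SQL below. This sketch uses the stdlib sqlite3 module rather than SQLAlchemy, but the many-to-one foreign-key relationship is the same one ForeignKey('mtypes.mtype_id') generates:

```python
import sqlite3

# Roughly the DDL that the mtypes/objects Table() definitions emit,
# written by hand against an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("""CREATE TABLE mtypes (
    mtype_id INTEGER PRIMARY KEY,
    name VARCHAR(50) NOT NULL UNIQUE)""")
conn.execute("""CREATE TABLE objects (
    obj_id INTEGER PRIMARY KEY,
    mtype_id INTEGER REFERENCES mtypes(mtype_id),
    name VARCHAR(50) NOT NULL UNIQUE)""")

conn.execute("INSERT INTO mtypes (name) VALUES ('mtype1')")
conn.execute("INSERT INTO objects (mtype_id, name) VALUES (1, 'obj1')")

# The many-to-one join: each object row points at its mtype row.
row = conn.execute("""SELECT o.name, m.name FROM objects o
                      JOIN mtypes m ON o.mtype_id = m.mtype_id""").fetchone()
print(row)  # ('obj1', 'mtype1')
```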
A:
I was able to run the code you have shown above so I guess the problem was removed when you simplified it for the purpose of this question. Is that correct?
You didn't show a traceback so only some general tips can be given.
In SQLAlchemy (at least in 0.5.8 and above) there are only two objects with "cascade_iterator" attribute: sqlalchemy.orm.mapper.Mapper and sqlalchemy.orm.interfaces.MapperProperty.
Since you didn't get sqlalchemy.orm.exc.UnmappedClassError exception (all mappers are right where they should be) my wild guess is that some internal sqlalchemy code gets None somewhere where it should get a MapperProperty instance instead.
Put something like this just before session.add() call that causes the exception:
from sqlalchemy.orm import class_mapper
from sqlalchemy.orm.interfaces import MapperProperty
props = [p for p in class_mapper(MyObject).iterate_properties]
test = [isinstance(p, MapperProperty) for p in props]
invalid_prop = None
if False in test:
invalid_prop = props[test.index(False)]
and then use your favourite method (print, python -m, pdb.set_trace(), ...) to check the value of invalid_prop. It's likely that for some reason it won't be None and there lies your culprit.
If type(invalid_prop) is a sqlalchemy.orm.properties.RelationshipProperty then you have introduced a bug in mapper configuration (for relation named invalid_prop.key). Otherwise it's hard to tell without more information.
| Many-to-one relationship in SQLAlchemy | This is a beginner-level question.
I have a catalog of mtypes:
mtype_id name
1 'mtype1'
2 'mtype2'
[etc]
and a catalog of Objects, which must have an associated mtype:
obj_id mtype_id name
1 1 'obj1'
2 1 'obj2'
3 2 'obj3'
[etc]
I am trying to do this in SQLAlchemy by creating the following schemas:
mtypes_table = Table('mtypes', metadata,
Column('mtype_id', Integer, primary_key=True),
Column('name', String(50), nullable=False, unique=True),
)
objs_table = Table('objects', metadata,
Column('obj_id', Integer, primary_key=True),
Column('mtype_id', None, ForeignKey('mtypes.mtype_id')),
Column('name', String(50), nullable=False, unique=True),
)
mapper(MType, mtypes_table)
mapper(MyObject, objs_table,
properties={'mtype':Relationship(MType, backref='objs', cascade="all, delete-orphan")}
)
When I try to add a simple element like:
mtype1 = MType('mtype1')
obj1 = MyObject('obj1')
obj1.mtype=mtype1
session.add(obj1)
I get the error:
AttributeError: 'NoneType' object has no attribute 'cascade_iterator'
Any ideas?
| [
"Have you tried:\nColumn('mtype_id', ForeignKey('mtypes.mtype_id')),\n\ninstead of:\nColumn('mtype_id', None, ForeignKey('mtypes.mtype_id')),\n\nSee also: https://docs.sqlalchemy.org/en/13/core/constraints.html\n",
"I was able to run the code you have shown above so I guess the problem was removed when you simplified it for the purpose of this question. Is that correct?\nYou didn't show a traceback so only some general tips can be given.\nIn SQLAlchemy (at least in 0.5.8 and above) there are only two objects with \"cascade_iterator\" attribute: sqlalchemy.orm.mapper.Mapper and sqlalchemy.orm.interfaces.MapperProperty.\nSince you didn't get sqlalchemy.orm.exc.UnmappedClassError exception (all mappers are right where they should be) my wild guess is that some internal sqlalchemy code gets None somewhere where it should get a MapperProperty instance instead.\nPut something like this just before session.add() call that causes the exception:\nfrom sqlalchemy.orm import class_mapper\nfrom sqlalchemy.orm.interfaces import MapperProperty\n\nprops = [p for p in class_mapper(MyObject).iterate_properties]\ntest = [isinstance(p, MapperProperty) for p in props]\ninvalid_prop = None\nif False in test:\n invalid_prop = props[test.index(False)]\n\nand then use your favourite method (print, python -m, pdb.set_trace(), ...) to check the value of invalid_prop. It's likely that for some reason it won't be None and there lies your culprit.\nIf type(invalid_prop) is a sqlalchemy.orm.properties.RelationshipProperty then you have introduced a bug in mapper configuration (for relation named invalid_prop.key). Otherwise it's hard to tell without more information.\n"
] | [
2,
0
] | [] | [] | [
"database_design",
"many_to_one",
"python",
"sqlalchemy"
] | stackoverflow_0002952010_database_design_many_to_one_python_sqlalchemy.txt |
Q:
How do I loop through a list by twos?
I want to loop through a Python list and process 2 list items at a time. Something like this in another language:
for(int i = 0; i < list.length(); i+=2)
{
// do something with list[i] and list[i + 1]
}
What's the best way to accomplish this?
A:
You can use a range with a step size of 2:
Python 2
for i in xrange(0,10,2):
print(i)
Python 3
for i in range(0,10,2):
print(i)
Note: Use xrange in Python 2 instead of range because it is more efficient as it generates an iterable object, and not the whole list.
A:
You can also use this syntax (L[start:stop:step]):
mylist = [1,2,3,4,5,6,7,8,9,10]
for i in mylist[::2]:
print i,
# prints 1 3 5 7 9
for i in mylist[1::2]:
print i,
# prints 2 4 6 8 10
Where the first digit is the starting index (defaults to beginning of list or 0), 2nd is ending slice index (defaults to end of list), and the third digit is the offset or step.
A:
The simplest in my opinion is just this:
it = iter([1,2,3,4,5,6])
for x, y in zip(it, it):
print x, y
Out: 1 2
3 4
5 6
No extra imports or anything. And very elegant, in my opinion.
A:
If you're using Python 2.6 or newer you can use the grouper recipe from the itertools module:
from itertools import izip_longest
def grouper(n, iterable, fillvalue=None):
"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
args = [iter(iterable)] * n
return izip_longest(fillvalue=fillvalue, *args)
Call like this:
for item1, item2 in grouper(2, l):
# Do something with item1 and item2
Note that in Python 3.x you should use zip_longest instead of izip_longest.
A:
nums = range(10)
for i in range(0, len(nums)-1, 2):
print nums[i]
Kinda dirty but it works.
A:
This might not be as fast as the izip_longest solution (I didn't actually test it), but it will work with python < 2.6 (izip_longest was added in 2.6):
from itertools import imap
def grouper(n, iterable):
"grouper(3, 'ABCDEFG') --> ('A,'B','C'), ('D','E','F'), ('G',None,None)"
args = [iter(iterable)] * n
return imap(None, *args)
If you need to go earlier than 2.3, you can substitute the built-in map for imap. The disadvantage is that it provides no ability to customize the fill value.
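On Python 3 the same grouper recipe works with zip_longest (izip_longest and imap were dropped when itertools was reorganised for Python 3):

```python
from itertools import zip_longest

def grouper(n, iterable, fillvalue=None):
    "grouper(2, [1,2,3,4,5]) --> (1, 2) (3, 4) (5, None)"
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

pairs = list(grouper(2, [1, 2, 3, 4, 5]))
print(pairs)  # [(1, 2), (3, 4), (5, None)]
```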
| How do I loop through a list by twos? | I want to loop through a Python list and process 2 list items at a time. Something like this in another language:
for(int i = 0; i < list.length(); i+=2)
{
// do something with list[i] and list[i + 1]
}
What's the best way to accomplish this?
| [
"You can use a range with a step size of 2:\nPython 2\nfor i in xrange(0,10,2):\n print(i)\n\nPython 3\nfor i in range(0,10,2):\n print(i)\n\nNote: Use xrange in Python 2 instead of range because it is more efficient as it generates an iterable object, and not the whole list.\n",
"You can also use this syntax (L[start:stop:step]):\nmylist = [1,2,3,4,5,6,7,8,9,10]\nfor i in mylist[::2]:\n print i,\n# prints 1 3 5 7 9\n\nfor i in mylist[1::2]:\n print i,\n# prints 2 4 6 8 10\n\nWhere the first digit is the starting index (defaults to beginning of list or 0), 2nd is ending slice index (defaults to end of list), and the third digit is the offset or step.\n",
"The simplest in my opinion is just this:\nit = iter([1,2,3,4,5,6])\nfor x, y in zip(it, it):\n print x, y\n\nOut: 1 2\n 3 4\n 5 6\n\nNo extra imports or anything. And very elegant, in my opinion.\n",
"If you're using Python 2.6 or newer you can use the grouper recipe from the itertools module:\nfrom itertools import izip_longest\n\ndef grouper(n, iterable, fillvalue=None):\n \"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx\"\n args = [iter(iterable)] * n\n return izip_longest(fillvalue=fillvalue, *args)\n\nCall like this:\nfor item1, item2 in grouper(2, l):\n # Do something with item1 and item2\n\nNote that in Python 3.x you should use zip_longest instead of izip_longest.\n",
"nums = range(10)\nfor i in range(0, len(nums)-1, 2):\n print nums[i]\n\nKinda dirty but it works.\n",
"This might not be as fast as the izip_longest solution (I didn't actually test it), but it will work with python < 2.6 (izip_longest was added in 2.6):\nfrom itertools import imap\n\ndef grouper(n, iterable):\n \"grouper(3, 'ABCDEFG') --> ('A,'B','C'), ('D','E','F'), ('G',None,None)\"\n args = [iter(iterable)] * n\n\n return imap(None, *args)\n\nIf you need to go earlier than 2.3, you can substitute the built-in map for imap. The disadvantage is that it provides no ability to customize the fill value.\n"
] | [
478,
134,
75,
43,
7,
3
] | [
"If you have control over the structure of the list, the most pythonic thing to do would probably be to change it from:\nl=[1,2,3,4]\n\nto:\nl=[(1,2),(3,4)]\n\nThen, your loop would be:\nfor i,j in l:\n print i, j\n\n"
] | [
-2
] | [
"for_loop",
"iteration",
"list",
"loops",
"python"
] | stackoverflow_0002990121_for_loop_iteration_list_loops_python.txt |
Q:
How to save big "database-like" class in python
I'm doing a project with a reasonably big DataBase. It's not a proper DB file, but a class with a format as follows:
DataBase.Nodes.Data=[[] for i in range(1,1000)] e.g. this DataBase is altogether something like a few thousand rows. First question - is the way I'm doing it efficient, or is it better to use SQL, or any other "proper" DB, which I've never actually used?
And the main question - I'd like to save my DataBase class with all records, and then re-open it with Python in another session. Is that possible, and what tool should I use? cPickle - it seems to be only for strings; any other?
In matlab there's a very useful feature named save workspace - it saves all your variables to a file that you can open in another session - this would be very useful in python!
A:
Pickle (cPickle) can handle any (picklable) Python object. So as long, as you're not trying to pickle thread or filehandle or something like that, you're ok.
A:
Pickle should be able to serialise the data for you so that you can save it to file.
Alternatively if you don't need the features of a full featured RDBMS you could use a lightweight solution like SQLLite or a document store like MongoDB
A:
The Pickle.dump function is very much like matlab's save feature. You just give it an object you wish to serialize and a file-like object to write it to. See the documentation for info and examples on how to use it.
The cPickle module is just like Pickle, but it is implemented in C so it can be much faster. You should probably use cPickle.
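Putting the answers together, the matlab-style save/load round trip looks like this sketch — DataBase here is a trivial stand-in for the poster's class:

```python
import os
import pickle  # use cPickle on Python 2 for speed
import tempfile

class DataBase:
    """Trivial stand-in for the poster's class."""
    def __init__(self):
        self.rows = [[i, i * 2] for i in range(1000)]

db = DataBase()
path = os.path.join(tempfile.mkdtemp(), "workspace.pkl")

with open(path, "wb") as f:      # like matlab's "save workspace"
    pickle.dump(db, f)

with open(path, "rb") as f:      # re-open in a later session
    restored = pickle.load(f)

print(restored.rows == db.rows)  # True
```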
| How to save big "database-like" class in python | I'm doing a project with reasonalby big DataBase. It's not a probper DB file, but a class with format as follows:
DataBase.Nodes.Data=[[] for i in range(1,1000)] e.g. this DataBase is altogether something like a few thousand rows. First question - is the way I'm doing it efficient, or is it better to use SQL, or any other "proper" DB, which I've never actually used?
And the main question - I'd like to save my DataBase class with all records, and then re-open it with Python in another session. Is that possible, and what tool should I use? cPickle - it seems to be only for strings; any other?
In matlab there's a very useful feature named save workspace - it saves all your variables to a file that you can open in another session - this would be very useful in python!
| [
"Pickle (cPickle) can handle any (picklable) Python object. So as long, as you're not trying to pickle thread or filehandle or something like that, you're ok.\n",
"Pickle should be able to serialise the data for you so that you can save it to file.\nAlternatively if you don't need the features of a full featured RDBMS you could use a lightweight solution like SQLite or a document store like MongoDB\n",
"The Pickle.dump function is very much like matlab's save feature. You just give it an object you wish to serialize and a file-like object to write it to. See the documentation for info and examples on how to use it.\nThe cPickle module is just like Pickle, but it is implemented in C so it can be much faster. You should probably use cPickle.\n"
] | [
3,
2,
1
] | [] | [] | [
"object_persistence",
"pickle",
"python",
"serialization"
] | stackoverflow_0002990995_object_persistence_pickle_python_serialization.txt |
Q:
Is there a way to use a substring function on variables in .vimrc?
I have gVim and portable python stored in a DropBox folder on several machines. The location of the DropBox folder is different on each computer. However, I'd like to be able to setup the .vimrc so that it automatically references the correct python folder no matter what computer it's on.
For example, I have gVim in C:\DropBox\gVimPortable.
In my .vimrc, I have the following:
let s:pp = 'C:\DropBox\PortablePython_1.1_py2.5.4' " netbook
if (has('win32') || has('win64')) && isdirectory(s:pp)
let $PATH .= ';' . s:pp
endif
Is there a way to do something like the following?
let s:pp = $VIMRUNTIME.substring(0, regex.match(/gVimPortable/))."\PortablePython_1.1_py2.5.4"
if (has('win32') || has('win64')) && isdirectory(s:pp)
let $PATH .= ';' . s:pp
endif
Basically, I don't want to hard code the folder where my python runtime exists. It will always be in the same place relative to the gVim application.
A:
I was able to do what I needed using the strpart and strlen functions.
let s:pp = strpart($VIMRUNTIME, 0, strlen($VIMRUNTIME)-strridx($VIMRUNTIME, "DropBox")+1) . "\\Apps\\PortablePython_1.1_py2.5.4\\App"
if (has('win32') || has('win64')) && isdirectory(s:pp)
let $PATH .= ';' . s:pp
endif
The script above will automatically add python to the PATH environment variable relative to the gVim executable.
| Is there a way to use a substring function on variables in .vimrc? | I have gVim and portable python stored in a DropBox folder on several machines. The location of the DropBox folder is different on each computer. However, I'd like to be able to setup the .vimrc so that it automatically references the correct python folder no matter what computer it's on.
For example, I have gVim in C:\DropBox\gVimPortable.
In my .vimrc, I have the following:
let s:pp = 'C:\DropBox\PortablePython_1.1_py2.5.4' " netbook
if (has('win32') || has('win64')) && isdirectory(s:pp)
let $PATH .= ';' . s:pp
endif
Is there a way to do something like the following?
let s:pp = $VIMRUNTIME.substring(0, regex.match(/gVimPortable/))."\PortablePython_1.1_py2.5.4"
if (has('win32') || has('win64')) && isdirectory(s:pp)
let $PATH .= ';' . s:pp
endif
Basically, I don't want to hard code the folder where my python runtime exists. It will always be in the same place relative to the gVim application.
| [
"I was able to do what I needed using the strpart and strlen functions.\nlet s:pp = strpart($VIMRUNTIME, 0, strlen($VIMRUNTIME)-strridx($VIMRUNTIME, \"DropBox\")+1) . \"\\\\Apps\\\\PortablePython_1.1_py2.5.4\\\\App\"\nif (has('win32') || has('win64')) && isdirectory(s:pp) \n let $PATH .= ';' . s:pp \nendif \n\nThe script above will automatically add python to the PATH environment variable relative to the gVim executable.\n"
] | [
2
] | [] | [] | [
"python",
"substring",
"vim"
] | stackoverflow_0002978911_python_substring_vim.txt |