Q:
Sort a list of Class Instances Python
I have a list of class instances -
x = [<iteminstance1>,...]
Among other attributes, the class has a score attribute. How can I sort the items in ascending order based on this attribute?
EDIT: Python lists have a sort() method. Could I use it here? How do I direct it to use my score attribute?
A:
In addition to the solution you accepted, you could also implement the special __lt__() ("less than") method on the class. The sort() method (and the sorted() function) will then be able to compare the objects, and thereby sort them. This works best when you will only ever sort them on this attribute, however.
class Foo(object):
    def __init__(self, score):
        self.score = score
    def __lt__(self, other):
        return self.score < other.score

l = [Foo(3), Foo(1), Foo(2)]
l.sort()
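(A side sketch, not part of the original answers: if the objects should support the full set of comparison operators rather than just sorting, functools.total_ordering — available since Python 2.7 / 3.2 — can derive the rest from __eq__ plus __lt__.)

```python
# Side sketch (not from the answers above): functools.total_ordering derives
# the remaining rich comparisons from __eq__ plus __lt__, so the objects
# compare consistently everywhere, not just inside sort().
import functools

@functools.total_ordering
class Foo(object):
    def __init__(self, score):
        self.score = score
    def __eq__(self, other):
        return self.score == other.score
    def __lt__(self, other):
        return self.score < other.score

items = sorted([Foo(3), Foo(1), Foo(2)])
print([f.score for f in items])  # [1, 2, 3]
print(Foo(1) <= Foo(2))          # True, derived by total_ordering
```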
A:
import operator
sorted_x = sorted(x, key=operator.attrgetter('score'))
if you want to sort x in-place, you can also:
x.sort(key=operator.attrgetter('score'))
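A plain lambda also works as the key, if you'd rather skip the operator import. A small self-contained sketch (types.SimpleNamespace, Python 3.3+, stands in for the asker's class here):

```python
from types import SimpleNamespace

# Stand-in objects with a .score attribute (illustrative only)
x = [SimpleNamespace(score=s) for s in (3, 1, 2)]

x.sort(key=lambda item: item.score)  # ascending, in place
ascending = [item.score for item in x]
print(ascending)  # [1, 2, 3]

x.sort(key=lambda item: item.score, reverse=True)  # descending
print([item.score for item in x])  # [3, 2, 1]
```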
Q:
Django development version vs stable release
I am about to start ramping up on Django and develop a web app that I want to deploy on Google App Engine. I've learned that Google has Django 0.96 already installed on App Engine, but the latest "official" version of Django I see is 1.2.3, and it's a bit of an effort to install it there.
I am curious which version of Django is most widely used.
So, can you please guide me on which Django version I should ramp up on and deploy, based on the following criteria:
Stability and suitability for production release
Availability for applications (or plugins) and which version is most supported by the community
Thanks a lot!
A:
Most people are currently using Django 1.2. You should not use or learn Django .96 - it's VERY old, and learning to use it won't prepare you for any non-app-engine Django work since things have changed significantly since then.
Django on App Engine is something of a pain, since you lose a lot of the ORM, which is a really nice reason to be working with Django. You also lose the ability to simply drop-in plugins and reusable apps that use any of the Django ORM. Anything with a models.py won't work.
Take a look at google-app-engine-django for help getting a more recent version running.
http://code.google.com/p/google-app-engine-django/
There is work to integrate the GAE storage engine into Django, and several projects have variously working implementations, but I wouldn't expect really good ORM support for a while yet - 1.3 (which is still several months from release) will include hooks that make it easier to write NoSQL backends, but Django probably won't ship with one.
While there are security releases for old versions of Django, you should really be developing using the latest stable version. Major releases of Django have a very strong backwards compatibility promise, so going from 1.2 to 1.3 when it comes out will be pretty seamless.
I strongly encourage you to think long and hard about what precisely App Engine offers your specific application before spending a lot of energy getting things working there. You lose application portability, scaling is still hard, and you don't save money if your application gets popular. App Engine is not a forgiving introductory platform.
For more conversation on this topic, take a look at this question:
Why use Django on Google App Engine?
particularly my answer there and the comments on it.
A:
App Engine permits you to use other versions of Django out of the box, with only a little pain, using google.appengine.dist.use_library. Essentially, your main.py (the module specified by app.yaml to handle URLs) should look like this:
import wsgiref.handlers
from google.appengine.ext import webapp
from google.appengine.ext.webapp import util
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
from google.appengine.dist import use_library # important bit
use_library('django', '1.1')
import django.core.handlers.wsgi
def main():
    application = django.core.handlers.wsgi.WSGIHandler()
    # Run the WSGI CGI handler with that application.
    webapp.util.run_wsgi_app(application)

if __name__ == '__main__':
    main()
A:
Another thing to consider is how you install. I'd be sure to install django from SVN, because it makes updating it MUCH easier.
I have been using the dev version for a while on my site, and haven't encountered a single bug yet, aside from one that affected the admin site in a minor way (which a svn up fixed).
I don't have a feel for whether people are using 1.2 or dev, but in my experience, dev is perfectly suitable. Any major errors that you may have in the code will get fixed very quickly, and svn up will get you to the latest code on the off chance that you get a revision with a major bug.
Q:
Qt C++ tcp client with python twisted server
I'm trying to connect a very basic twisted "hello world" server with a basic Qt tcp client.
The client uses these Signals:
connect(&socket, SIGNAL(connected()), this, SLOT(startTransfer()));
connect(&socket, SIGNAL(readyRead()), this, SLOT(readServer()));
and then readServer() looks like this:
ui->resultLabel->setText("Reading..");
QDataStream in(&socket);
//in.setVersion(QT_4_0);
if (blockSize == 0) {
if (socket.bytesAvailable() < (int)sizeof(quint16))
return;
in >> blockSize;
}
if (socket.bytesAvailable() < blockSize)
return;
QString theResult;
in >> theResult;
qDebug() << in;
qDebug() << theResult;
ui->resultLabel->setText(theResult);
The server I'm using for testing purposes is simply an example grabbed off of twisted's docs
from twisted.internet.protocol import Protocol, Factory
from twisted.internet import reactor
### Protocol Implementation
# This is just about the simplest possible protocol
class Echo(Protocol):
def dataReceived(self, data):
"""
As soon as any data is received, write it back.
"""
self.transport.write(data)
def main():
f = Factory()
f.protocol = Echo
reactor.listenTCP(8000, f)
reactor.run()
if __name__ == '__main__':
main()
readServer() is being called just fine, but it never seems to collect any of the data. I've read somewhere that this might have to do with QDataStream's << operator because python isn't exactly sending it in pieces like Qt expects.
I admit I'm not very savvy with C++ or Qt, but the idea of the project is to write a client to work with an existing twisted server, so while the client can be changed I'm left with no choice but to make it work with this server.
Thanks in advance for any help.
A:
The issue turned out to be QDataStream, which is apparently more than just a little particular about the data it's reading.
Thankfully, I discovered QDataStream::readRawData which liked data being sent by python a lot better (further I discovered this had nothing to do with twisted, but the python socket implementation itself.) The final code looked like this:
//use socket to construct a QDataStream object, like before
QDataStream in(&socket);
//in.setVersion(QDataStream::Qt_4_0);
char buffer[1024] = {0};
//readRawData takes a char to dump to and the length,
//so I'm sure there is a better way to do this. It worked for my example.
in.readRawData(buffer, socket.bytesAvailable());
QString result;
result = buffer;
ui->resultLabel->setText(result);
A:
An important thing to understand about TCP is that it is not a transport for messages (or "chunks") of a particular size. It is a transport for a stream of bytes. When you write something like:
if (socket.bytesAvailable() < (int)sizeof(quint16))
return;
then you'd better have a loop somewhere that's invoking this code again, because there's no guarantee that the first time around you'll get all the bytes you need to get past this check.
You didn't share any of the code that's responsible for sending data to the echo server, so it's impossible to actually know why your Qt client isn't getting the data you expect it to get, but given the above I wouldn't be surprised if it involves incorrect buffering of incomplete data.
Make sure that if you decide not to read from the socket because there isn't enough data, you try reading from it again later after more data might have arrived. A good way to do this is to actually always read the data into an application buffer so that you can use select() or epoll() or what have you to tell you when there is more data available on the socket. Then just operate on the application buffer.
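To illustrate the buffering pattern described above — a sketch with made-up names, independent of Qt or Twisted — accumulate received bytes in an application buffer and only yield a message once its full quint16-length-prefixed payload has arrived:

```python
# Sketch with made-up names (not a Qt or Twisted API): keep an
# application-level buffer and only consume a message once its full
# quint16-length-prefixed payload has arrived.
import struct

class MessageBuffer:
    def __init__(self):
        self.buf = b""

    def feed(self, data):
        """Append newly received bytes; yield each complete message."""
        self.buf += data
        while len(self.buf) >= 2:                      # need the quint16 prefix
            (length,) = struct.unpack(">H", self.buf[:2])
            if len(self.buf) < 2 + length:
                break                                  # wait for more data
            yield self.buf[2:2 + length]
            self.buf = self.buf[2 + length:]

mb = MessageBuffer()
# TCP may deliver the stream in arbitrary fragments:
out = list(mb.feed(b"\x00\x05he"))       # incomplete -> nothing yet
out += list(mb.feed(b"llo\x00\x02hi"))   # completes b"hello" and b"hi"
print(out)  # [b'hello', b'hi']
```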
Q:
Having a problem getting my output to look a certain way
Thanks for viewing my question and thanks in advance for any help you may provide.
I am writing a program that reads lines from a txt file and then prints the output in a certain fashion. Here they both are
Here is the txt file I am reading from
JOE FRITZ           AMERICAN GOVERNMENT           B
JOE FRITZ           CALCULUS I                    A
JOE FRITZ           COMPUTER PROGRAMMING          B
JOE FRITZ           ENGLISH COMPOSITION           A
LANE SMITH          FUND. OF DATA PROCESSING      B
LANE SMITH          INTERMEDIATE SWIMMING         A
LANE SMITH          INTRO. TO BUSINESS            C
JOHN SPITZ          CHOIR                         C
JOHN SPITZ          COLLEGE STATISTICS            B
JOHN SPITZ          ENGLISH LITERATURE            D
JOHN SPITZ          INTRO. TO BUSINESS            B
I am trying to get my output to look like this:
                        GRADE REPORT

NAME                COURSE                        GRADE
------------------------------------------------------------
JOE FRITZ           AMERICAN GOVERNMENT           B
                    CALCULUS I                    A
                    COMPUTER PROGRAMMING          B
                    ENGLISH COMPOSITION           A
                 Total courses taken = 4

LANE SMITH          FUND. OF DATA PROCESSING      B
                    INTERMEDIATE SWIMMING         A
                    INTRO. TO BUSINESS            C
                 Total courses taken = 3

JOHN SPITZ          CHOIR                         C
                    COLLEGE STATISTICS            B
                    ENGLISH LITERATURE            D
                    INTRO. TO BUSINESS            B
                 Total courses taken = 4

Total courses taken by all students = 11

Run complete. Press the Enter key to exit.
EDIT
Thanks to your help, I finished this program.
I know it may be ugly, but ATM I am just happy to have the output right.
Here is the source that will display the correct output:
#-----------------------------------------------------------------------
# VARIABLE DEFINITIONS
name = ""
previousName = ""
course = ""
grade = ""
grandTotal = 0
courseCount = 0
eof = False
#-----------------------------------------------------------------------
# CONSTANT DEFINITIONS
#-----------------------------------------------------------------------
# FUNCTION DEFINITIONS
def startUp():
    global gradeFile, grandTotal, courseCount, previousName, name
    grandTotal = 0
    courseCount = 0
    gradeFile = open("grades.txt","r")
    print
    print ("grade report\n").center(60).upper()
    print "name".upper(),"course".rjust(21).upper(),"grade".rjust(33).upper()
    print "-" * 60
    readRecord()

def readRecord():
    global name, course, grade, eof, courseCount
    studentRecord = gradeFile.readline()
    if studentRecord == "":
        eof = True
    else:
        name = studentRecord[0:20]
        course = studentRecord[20:50]
        grade = studentRecord[50:51]
        eof = False

def processRecords():
    global courseCount, previousName, name, grandTotal
    while not eof:
        if name != previousName:
            if name == "JOE FRITZ           ":  # first student; name field is padded to 20 characters
                courseCount = 0
                print name + course + " " + grade
                previousName = name
                courseCount += 1
            else:
                print "\t\t Total courses taken =", courseCount
                print
                courseCount = 0
                print name + course + " " + grade
                previousName = name
                courseCount += 1
        else:
            print (" " * 20) + course + " " + grade
            courseCount += 1
        grandTotal += 1
        readRecord()
    print "\t\t Total courses taken =", courseCount

def closeUp():
    gradeFile.close()
    print "\nTotal courses taken by all students =", grandTotal
#-----------------------------------------------------------------------
# PROGRAM'S MAIN LOGIC
startUp()
processRecords()
closeUp()
raw_input("\nRun complete. Press the Enter key to exit.")
Thanks for your help everyone. I really do appreciate it. Sorry if I have frustrated anyone during the process. Have a good one. Peace
A:
I just figured out one of the bugs in your code: in the function readRecord() you only ever read a single line of the file per call; you should loop over all the lines, or make readRecord() a generator.
def readRecord():
    global name, course, grade, eof

    studentRecord = gradeFile.readline() # <----- HERE
    if studentRecord == "":
        eof = True
    else:
        name = studentRecord[0:20]
        course = studentRecord[20:50]
        grade = studentRecord[50:51]
        eof = False
But despite this, to be honest I don't like your code much; here is what I would do if I were you:
1) Get the data out of the file by any means (csv, regex, ...); I think we already have the answer in here.
2) Put the data in a dictionary or whatever, so you can manipulate it as you wish.
3) Use itertools.groupby() to group by student and to calculate the totals you want.
4) Use string.Template(), because the format may change; don't hard-code the output format like you did.
And please test your functions one by one as you write them, because if you don't, it will be hard to figure out which part of the code is not working.
EDIT:
And I won't ask why you want to do this, but if you write the records back to a file in this layout, you will have the same problem as before when you want to read them again; and if your objective is just a pretty printout, I'd ask: is it worth it?
One last piece of advice: use a well-known format like csv, xml, ...
And Good luck :)
A:
This may not be the 1960s COBOLlocks way that your benighted instructor wants you to do it, but the general ideas are:
(1) itertools.groupby can save you all that previous/current detecting-name-changed malarky
(2) you should extract your data records into a sensible format up front -- stripping trailing whitespace (always a good idea) gets rid of stray newlines that are not part of the data
(3) global variables used like you are doing are the utter pox (in ANY language).
import itertools
guff = """\
JOE FRITZ           AMERICAN GOVERNMENT           B
JOE FRITZ           CALCULUS I                    A
JOE FRITZ           COMPUTER PROGRAMMING          B
JOE FRITZ           ENGLISH COMPOSITION           A
LANE SMITH          FUND. OF DATA PROCESSING      B
LANE SMITH          INTERMEDIATE SWIMMING         A
LANE SMITH          INTRO. TO BUSINESS            C
JOHN SPITZ          CHOIR                         C
JOHN SPITZ          COLLEGE STATISTICS            B
JOHN SPITZ          ENGLISH LITERATURE            D
JOHN SPITZ          INTRO. TO BUSINESS            B
"""
data_source = guff.splitlines(True) # simulate file
columns = (slice(0, 20), slice(20, 50), slice(50, 51))
def data_reader(raw_record_iterator):
    for line in raw_record_iterator:
        yield [line[sl].rstrip() for sl in columns]

def process_file():
    # f = open('my_file.text', 'r')
    # or use a 'with' statement
    # data_source = f
    for key, grouper in itertools.groupby(
            data_reader(data_source), lambda element: element[0]):
        print "=== start student %s ===" % key
        for cooked_record in grouper:
            print cooked_record
        print "=== end student %s ===" % key
    print "=== Grand totals here ==="
    # f.close()

if __name__ == "__main__":
    process_file()
actual output:
=== start student JOE FRITZ ===
['JOE FRITZ', 'AMERICAN GOVERNMENT', 'B']
['JOE FRITZ', 'CALCULUS I', 'A']
['JOE FRITZ', 'COMPUTER PROGRAMMING', 'B']
['JOE FRITZ', 'ENGLISH COMPOSITION', 'A']
=== end student JOE FRITZ ===
=== start student LANE SMITH ===
['LANE SMITH', 'FUND. OF DATA PROCESSING', 'B']
['LANE SMITH', 'INTERMEDIATE SWIMMING', 'A']
['LANE SMITH', 'INTRO. TO BUSINESS', 'C']
=== end student LANE SMITH ===
=== start student JOHN SPITZ ===
['JOHN SPITZ', 'CHOIR', 'C']
['JOHN SPITZ', 'COLLEGE STATISTICS', 'B']
['JOHN SPITZ', 'ENGLISH LITERATURE', 'D']
['JOHN SPITZ', 'INTRO. TO BUSINESS', 'B']
=== end student JOHN SPITZ ===
=== Grand totals here ===
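Building on that groupby output, a minimal sketch of rendering the questioner's desired layout, with per-student and grand totals (sample data hardcoded, and the 20/30 column widths chosen to mirror the fixed-width input file — not the poster's exact program):

```python
# Sketch only: sample data hardcoded; column widths (20/30) mirror the
# fixed-width input file but are otherwise an arbitrary choice.
import itertools

records = [
    ("JOE FRITZ", "AMERICAN GOVERNMENT", "B"),
    ("JOE FRITZ", "CALCULUS I", "A"),
    ("LANE SMITH", "INTRO. TO BUSINESS", "C"),
]

grand_total = 0
for name, group in itertools.groupby(records, key=lambda rec: rec[0]):
    count = 0
    for count, (_, course, grade) in enumerate(group, 1):
        label = name if count == 1 else ""      # print the name only once
        print("%-20s%-30s%s" % (label, course, grade))
    print("%17sTotal courses taken = %d" % ("", count))
    grand_total += count
print("Total courses taken by all students = %d" % grand_total)
```

Note that groupby only groups adjacent records, so the input must already be sorted by student, as it is in the question's file.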
Q:
unable to help(exec)
Ok, this is funny.
>>> exec("print")
>>> help(exec)
File "<stdin>", line 1
help(exec)
^
SyntaxError: invalid syntax
>>>
looks like exec is a statement, not a function, hence you cannot help() it. Is this expected or a bug? if expected, why? can you reproduce it on python3 ? I have Python 2.6.1 here.
A:
In Python 2.x, exec is a statement (and thus doesn't have a docstring associated with it.)
In Python 3.x, exec is now a function: http://docs.python.org/py3k/library/functions.html?highlight=exec#exec
So it can (and does) have a docstring.
You'd get this same behavior for help(print), which also became a function in 3.x.
A:
Yes, as the others said, but what I usually do is:
>>> help("exec")
>>> help("print")
and it works for both Python 2.x and Python 3.x.
A:
Just put quotes around it (works for assert, etc. too):
>>> help('exec')
A:
http://docs.python.org/release/3.0.1/library/functions.html#exec
In Python 3, exec() is a function. Apparently, in Python 2, exec is a statement but can be used similarly to a function.
http://docs.python.org/release/3.0.1/whatsnew/3.0.html#removed-syntax
Removed keyword: exec() is no longer a keyword; it remains as a function. (Fortunately the function syntax was also accepted in 2.x.)
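Under Python 3 this is easy to verify directly, since exec is an ordinary built-in function:

```python
# Python 3 only: exec is a plain built-in function, not a keyword.
ns = {}
exec("x = 2 + 3", ns)   # run code in an explicit namespace
print(ns["x"])          # 5
print(callable(exec))   # True
print(exec.__doc__ is not None)  # True -- this docstring is what help() shows
```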
Q:
Registry Startup Entry with runas argument?
Is it possible to add a startup entry in the windows registry with a runas argument? So when it launches, it runs with the user specified?
A:
See this recipe:
Reading from and writing to the Windows Registry (Python)
It has python code to read from the registry to show which applications run at start as well as to write a new entry to launch a windows explorer at startup
Q:
iterating through all items in listA that don't appear in listB
How can I fix this statement:
for i in LISTA and i not in LISTB:
print i
A:
for i in LISTA:
    if i not in LISTB:
        print i
A:
new_list = set(LISTA) - set(LISTB)  # note: drops duplicates and ordering
for i in new_list:
    print i
Or:
for i in LISTA:
    if i in LISTB:
        continue
    print i
A:
A more sophisticated solution: a simple set difference (the relative complement of b in a). Note that it drops duplicates and ordering.
a = set([1, 2, 3])
b = set([3, 4, 5])
print(a - b)
A:
for i in (i for i in LISTA if i not in LISTB):
    print i
The part in parentheses is a generator expression. The benefit of this over other methods is that it doesn't create duplicate (temporary) sets or list objects. This is especially important if LISTA and/or LISTB are really large.
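One more hedged variant: if LISTB is large, converting it to a set first makes each membership test O(1) while — unlike the pure set-difference approach — still preserving LISTA's order and duplicates:

```python
LISTA = [1, 2, 3, 1, 4]
LISTB = [2, 4]

blocked = set(LISTB)  # O(1) membership tests instead of scanning LISTB each time
result = [i for i in LISTA if i not in blocked]
print(result)  # [1, 3, 1] -- order and duplicates of LISTA preserved
```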
Q:
how to iterate from a specific point in a sequence (Python)
[Edit]
From the feedback/answers I have received, I gather there is some confusion regarding the original question. Consequently, I have reduced the problem to its most rudimentary form
Here are the relevant facts of the problem:
I have a sorted sequence: S
I have an item (denoted by i) that is GUARANTEED to be contained in S
I want a find() algorithm that returns an iterator (iter) that points to i
After obtaining the iterator, I want to be able to iterate FORWARD (BACKWARD?) over the elements in S, starting FROM (and including) i
For my fellow C++ programmers who can also program in Python, what I am asking for, is the equivalent of:
const_iterator std::find (const key_type& x ) const;
The iterator returned can then be used to iterate the sequence. I am just trying to find (pun unintended), if there is a similar inbuilt algorithm in Python, to save me having to reinvent the wheel.
A:
Yes, you can do it like this:
import itertools
from datetime import datetime
data = {
"2008-11-10 17:53:59":"data",
"2005-11-10 17:53:59":"data",
}
list_ = data.keys()
new_list = [datetime.strptime(x, "%Y-%m-%d %H:%M:%S") for x in list_]
begin_date = datetime.strptime("2007-11-10 17:53:59", "%Y-%m-%d %H:%M:%S")
for i in itertools.ifilter(lambda x: x > begin_date, new_list):
    print i
A:
If you know for a fact that the items in your sequence are sorted, you can just use a generator expression:
(item for item in seq if item >= 5)
This returns a generator; it doesn't actually traverse the list until you iterate over it, i.e.:
for item in (item for item in seq if item > 5):
    print item
will only traverse seq once.
Using a generator expression like this is pretty much identical to using itertools.ifilter, which produces a generator that iterates over the list returning only values that meet the filter criterion:
>>> import itertools
>>> seq = [1, 2, 3, 4, 5, 6, 7]
>>> list(itertools.ifilter(lambda x: x>=3, seq))
[3, 4, 5, 6, 7]
I'm not sure why (except for backwards compatibility) we need itertools.ifilter anymore now that we have generator expressions, but other methods in itertools are invaluable.
If, for instance, you don't know that your sequence is sorted, and you still want to return everything in the sequence from a known item and beyond, you can't use a generator expression. Instead, use itertools.dropwhile. This produces a generator that iterates over the list skipping values until it finds one that meets the filter criterion:
>>> seq = [1, 2, 4, 3, 5, 6, 7]
>>> list(itertools.dropwhile(lambda x: x != 3, seq))
[3, 5, 6, 7]
As far as searching backwards goes, this will only work if the sequence you're using is actually a sequence (like a list, i.e. something that has an end and can be navigated backwards) and not just any iterable (e.g. a generator that returns the next prime number). To do this, use the reversed function, e.g.:
(item for item in reversed(seq) if item >= 5)
A:
Given your relevant facts:
>>> import bisect
>>> def find_fwd_iter(S, i):
... j = bisect.bisect_left(S, i)
... for k in xrange(j, len(S)):
... yield S[k]
...
>>> def find_bkwd_iter(S, i):
... j = bisect.bisect_left(S, i)
... for k in xrange(j, -1, -1):
... yield S[k]
...
>>> L = [100, 150, 200, 300, 400]
>>> list(find_fwd_iter(L, 200))
[200, 300, 400]
>>> list(find_bkwd_iter(L, 200))
[200, 150, 100]
>>>
A:
One simpler way (albeit slower) would be to use filter to select the keys before/after that date. Note that filter has to process each element in the list, whereas slicing doesn't.
A:
You can do
def on_or_after(date):
    from itertools import dropwhile
    sorted_items = sorted(date_dictionary.iteritems())
    def before_date(pair):
        return pair[0] < date
    on_or_after_date = dropwhile(before_date, sorted_items)
    return on_or_after_date
which I think is about as efficient as it's going to get if you're just doing one such lookup on each sorted collection. on_or_after_date will iterate (date, value) pairs.
Another option would be to build a dictionary as a separate index into the sorted list:
sorted_items = sorted(date_dictionary.iteritems())
date_index = dict((key, i) for i, (key, _) in enumerate(sorted_items))
and then get the items on or after a date with
def on_or_after(date):
    return sorted_items[date_index[date]:]
This second approach will be faster if you're going to be doing a lot of lookups on the same series of sorted dates (which it sounds like you are).
If you want really speedy slicing of the sorted dates, you might see some improvement by storing it in a tuple instead of a list. I could be wrong about that though.
note the above code is untested, let me know if it doesn't work and you can't sort out why.
A:
First off, this question isn't related to dicts. You're operating on a sorted list. You're using the results on a dict, but that's not relevant to the question.
You want the bisect module, which implements binary searching. Starting from your code:
import bisect
mydict = {
"2001-01-01":"data1",
"2005-01-02":"data2",
"2002-01-01":"data3",
"2004-01-02":"data4",
}
# ['2001-01-01', '2002-01-01', '2004-01-02', '2005-01-02']:
sorted_dates = sorted(mydict)
# Iterates over 2002-01-01, 2004-01-02 and 2005-01-02:
offset = bisect.bisect_left(sorted_dates, "2002-01-01")
for item in sorted_dates[offset:]:
print item
| how to iterate from a specific point in a sequence (Python) | [Edit]
From the feedback/answers I have received, I gather there is some confusion regarding the original question. Consequently, I have reduced the problem to its most rudimentary form
Here are the relevant facts of the problem:
I have a sorted sequence: S
I have an item (denoted by i) that is GUARANTEED to be contained in S
I want a find() algorithm that returns an iterator (iter) that points to i
After obtaining the iterator, I want to be able to iterate FORWARD (BACKWARD?) over the elements in S, starting FROM (and including) i
For my fellow C++ programmers who can also program in Python, what I am asking for, is the equivalent of:
const_iterator std::find (const key_type& x ) const;
The iterator returned can then be used to iterate the sequence. I am just trying to find (pun unintended), if there is a similar inbuilt algorithm in Python, to save me having to reinvent the wheel.
| [
"yes , you can do like this:\nimport itertools\nfrom datetime import datetime\n\ndata = {\n \"2008-11-10 17:53:59\":\"data\",\n \"2005-11-10 17:53:59\":\"data\",\n}\n\nlist_ = data.keys()\nnew_list = [datetime.strptime(x, \"%Y-%m-%d %H:%M:%S\") for x in list_]\n\nbegin_date = datetime.strptime(\"2007-11-1... | [
1,
1,
1,
0,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0004006850_python.txt |
Q:
Python regular expression to remove "problem=None," from "Abc(problem=None, fds=5, sff=(2, 1, 0))"
I've got a python string
s = "Abc(problem=None, fds=5, sff=(2, 1, 0))"
s2 = "Abc(problem=None)"
What I want to do is remove the "problem=None, "
So it'll look like
s = "Abc(fds=5, sff=(2, 1, 0))"
s2 = "Abc()"
Please mind the ','
How to achieve this? Thanks very much!!
A:
Removing all syntactically valid whitespace:
>>> import re
>>> re.sub(r"\s*problem\s*=\s*None\s*,?\s*", "", "abc( problem = None , )")
'abc()'
>>> re.sub(r"\s*problem\s*=\s*None\s*,?\s*", "", "abc( problem = None )")
'abc()'
>>>
A:
A regex that will work in both cases is:
/problem=None,?\s*/
The ? makes the comma optional and the \s* will strip any trailing whitespace.
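In Python, that same pattern would be applied with re.sub (a sketch using the example strings from the question):

```python
import re

pattern = r"problem=None,?\s*"  # optional comma, then any trailing whitespace
print(re.sub(pattern, "", "Abc(problem=None, fds=5, sff=(2, 1, 0))"))
# -> Abc(fds=5, sff=(2, 1, 0))
print(re.sub(pattern, "", "Abc(problem=None)"))
# -> Abc()
```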
A:
re.sub(r"problem=None,? +", "", s)
| Python regular expression to remove "problem=None," from "Abc(problem=None, fds=5, sff=(2, 1, 0))" | I've got a python string
s = "Abc(problem=None, fds=5, sff=(2, 1, 0))"
s2 = "Abc(problem=None)"
What I want to do is remove the "problem=None, "
So it'll look like
s = "Abc(fds=5, sff=(2, 1, 0))"
s2 = "Abc()"
Please mind the ','
How to achieve this? Thanks very much!!
| [
"Removing all syntactically valid whitespace:\n>>> import re\n>>> re.sub(r\"\\s*problem\\s*=\\s*None\\s*,?\\s*\", \"\", \"abc( problem = None , )\")\n'abc()'\n>>> re.sub(r\"\\s*problem\\s*=\\s*None\\s*,?\\s*\", \"\", \"abc( problem = None )\")\n'abc()'\n>>>\n\n",
"A regex that will work in both cases is:\n/probl... | [
3,
2,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0004011237_python_regex.txt |
Q:
facebook python-sdk post_to_wall attachment
Hi
I am using the python-sdk (http://github.com/facebook/python-sdk.git) on google appengine.
I am running the "newsfeed" example.
In facebook.py i had to import urllib2
and then change
file = urllib.urlopen("https://graph.facebook.com/" + path + "?" +
urllib.urlencode(args), post_data)
to
file = urllib2.urlopen("https://graph.facebook.com/" + path + "?" +
urllib.urlencode(args), post_data)
Now the basic application works. However if I change
in facebookclient.py
try:
self.graph.put_wall_post(message)
except Exception, e:
self.response.out.write(e)
return
to
try:
attachment = {}
message = message
caption = 'test caption'
attachment['caption'] = caption
attachment['name'] = 'test name'
attachment['description'] = 'test description'
self.graph.put_wall_post(message, attachment=attachment)
except Exception, e:
self.response.out.write(e)
return
i will get the error (on http://localhost:8080) :
HTTP Error 400: Bad Request
and the appengine development server complains:
INFO 2010-10-24 17:20:44,483 dev_appserver.py:3275] "POST /post HTTP/1.1" 302 -
WARNING 2010-10-24 17:20:44,570 urlfetch_stub.py:284] Stripped prohibited headers from URLFetch request: ['Host']
INFO 2010-10-24 17:20:48,167 dev_appserver.py:3275] "GET / HTTP/1.1" 200 -
INFO 2010-10-24 17:20:48,292 dev_appserver.py:3275] "GET /static/base.css HTTP/1.1" 200 -
WARNING 2010-10-24 17:21:19,343 urlfetch_stub.py:284] Stripped prohibited headers from URLFetch request: ['Content-Length', 'Host']
INFO 2010-10-24 17:21:20,634 dev_appserver.py:3275] "POST /post HTTP/1.1" 200 -
A:
Solved the problem by using put_object instead of post_to_wall:
see http://developers.facebook.com/docs/reference/api/post for an example on how to post with curl
self.graph.put_object("me", "feed", message=message,
link="http://leona-nachhilfe.appspot.com",
picture="http://leona-nachhilfe.appspot.com/static/images/logo.png",
name = "LeONa-Quiz",
description = "Orges erreichte 45.Punkte",
actions = {'name': 'Zu den Quiz-Aufgaben', 'link': 'http://leona-nachhilfe.appspot.com'},
privacy = {'value': 'ALL_FRIENDS'}
)
| facebook python-sdk post_to_wall attachment | Hi
I am using the python-sdk (http://github.com/facebook/python-sdk.git) on google appengine.
I am running the "newsfeed" example.
In facebook.py i had to import urllib2
and then change
file = urllib.urlopen("https://graph.facebook.com/" + path + "?" +
urllib.urlencode(args), post_data)
to
file = urllib2.urlopen("https://graph.facebook.com/" + path + "?" +
urllib.urlencode(args), post_data)
Now the basic application works. However if I change
in facebookclient.py
try:
self.graph.put_wall_post(message)
except Exception, e:
self.response.out.write(e)
return
to
try:
attachment = {}
message = message
caption = 'test caption'
attachment['caption'] = caption
attachment['name'] = 'test name'
attachment['description'] = 'test description'
self.graph.put_wall_post(message, attachment=attachment)
except Exception, e:
self.response.out.write(e)
return
i will get the error (on http://localhost:8080) :
HTTP Error 400: Bad Request
and the appengine development server complains:
INFO 2010-10-24 17:20:44,483 dev_appserver.py:3275] "POST /post HTTP/1.1" 302 -
WARNING 2010-10-24 17:20:44,570 urlfetch_stub.py:284] Stripped prohibited headers from URLFetch request: ['Host']
INFO 2010-10-24 17:20:48,167 dev_appserver.py:3275] "GET / HTTP/1.1" 200 -
INFO 2010-10-24 17:20:48,292 dev_appserver.py:3275] "GET /static/base.css HTTP/1.1" 200 -
WARNING 2010-10-24 17:21:19,343 urlfetch_stub.py:284] Stripped prohibited headers from URLFetch request: ['Content-Length', 'Host']
INFO 2010-10-24 17:21:20,634 dev_appserver.py:3275] "POST /post HTTP/1.1" 200 -
| [
"Solved the problem by using put_object instead of post_to_wall:\nsee http://developers.facebook.com/docs/reference/api/post for an example on how to post with curl\n self.graph.put_object(\"me\", \"feed\", message=message, \n link=\"http://leona-nachhilfe.appspot.com\", \n ... | [
2
] | [] | [] | [
"attachment",
"facebook",
"python"
] | stackoverflow_0004009441_attachment_facebook_python.txt |
Q:
Get the width that a TreeViewColumn would like to be if it were autosized
Is there a way to get the width, in pixels, that a gtk.TreeViewColumn would want to be if the sizing mode was set as gtk.TREE_VIEW_COLUMN_AUTOSIZE, even if it's currently gtk.TREE_VIEW_COLUMN_FIXED?
A:
I think gtk.TreeViewColumn.cell_get_size() will do the job.
Sample code:
def show_size(treeview):
col = treeview.get_column(0)
cell = col.get_cell_renderers()[0]
size = col.cell_get_position(cell)
print 'current size: position=%s, width=%s' % size
size = col.cell_get_size()
print 'autosize: %s, x=%s, y=%s, w=%s, h=%s' % size
| Get the width of a TreeViewColumn that would like to be if it were autosized | Is there a way to get the width, in pixels, that a gtk.TreeViewColumn would want to be if the sizing mode was set as gtk.TREE_VIEW_COLUMN_AUTOSIZE, even if it's currently gtk.TREE_VIEW_COLUMN_FIXED?
| [
"I think gtk.TreeViewColumn.cell_get_size() will do the job.\nSample code:\ndef show_size(treeview):\n col = treeview.get_column(0)\n cell = col.get_cell_renderers()[0]\n size = col.cell_get_position(cell)\n print 'current size: position=%s, width=%s' % size\n size = col.cell_get_size()\n print 'a... | [
0
] | [] | [] | [
"gtk",
"gtktreeview",
"pygtk",
"python"
] | stackoverflow_0004000071_gtk_gtktreeview_pygtk_python.txt |
Q:
Laplacian smoothing to Biopython
I am trying to add Laplacian smoothing support to Biopython's Naive Bayes code 1 for my Bioinformatics project.
I have read many documents about the Naive Bayes algorithm and Laplacian smoothing, and I think I got the basic idea, but I just can't integrate this with that code (actually, I cannot see in which part I should add the Laplacian +1).
I am not familiar with Python and I am a newbie coder. I appreciate if anyone familiar with Biopython can give me some suggestions.
A:
Try using this definition of the _contents() method instead:
def _contents(items, laplace=False):
# count occurrences of values
counts = {}
for item in items:
counts[item] = counts.get(item,0) + 1.0
# normalize
for k in counts:
if laplace:
counts[k] += 1.0
counts[k] /= (len(items)+len(counts))
else:
counts[k] /= len(items)
return counts
Then change the call on Line 194 into:
# Estimate P(value|class,dim)
nb.p_conditional[i][j] = _contents(values, True)
use True to enable the smoothing, and False to disable it.
Here's a comparison of the output with/without the smoothing:
# without
>>> carmodel.p_conditional
[[{'Red': 0.40000000000000002, 'Yellow': 0.59999999999999998},
{'SUV': 0.59999999999999998, 'Sports': 0.40000000000000002},
{'Domestic': 0.59999999999999998, 'Imported': 0.40000000000000002}],
[{'Red': 0.59999999999999998, 'Yellow': 0.40000000000000002},
{'SUV': 0.20000000000000001, 'Sports': 0.80000000000000004},
{'Domestic': 0.40000000000000002, 'Imported': 0.59999999999999998}]]
# with
>>> carmodel.p_conditional
[[{'Red': 0.42857142857142855, 'Yellow': 0.5714285714285714},
{'SUV': 0.5714285714285714, 'Sports': 0.42857142857142855},
{'Domestic': 0.5714285714285714, 'Imported': 0.42857142857142855}],
[{'Red': 0.5714285714285714, 'Yellow': 0.42857142857142855},
{'SUV': 0.2857142857142857, 'Sports': 0.7142857142857143},
{'Domestic': 0.42857142857142855, 'Imported': 0.5714285714285714}]]
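Those smoothed figures can be checked with a standalone sketch of the same counting logic (independent of Biopython; with 5 items per class and 2 distinct values, each probability becomes (count + 1) / (5 + 2)):

```python
def contents(items, laplace=False):
    # Count occurrences of each value, then normalize to probabilities.
    counts = {}
    for item in items:
        counts[item] = counts.get(item, 0) + 1.0
    for k in counts:
        if laplace:
            counts[k] += 1.0  # add-one (Laplace) smoothing
            counts[k] /= len(items) + len(counts)
        else:
            counts[k] /= len(items)
    return counts

colors = ['Red', 'Red', 'Yellow', 'Yellow', 'Yellow']
print(contents(colors))                # Red -> 2/5, Yellow -> 3/5
print(contents(colors, laplace=True))  # Red -> 3/7, Yellow -> 4/7
```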
Aside from the above, I think there might be a bug with the code:
The code splits the instances according to their class, and then for each class, and giving each dimension, it counts how many times each of this dimension values appear.
The problem is if for a subset of the instances belonging to one class, it happens that not all values of a dimension appear in that subset, then when the _contents() function is called, it will not see all possible values, and thus will return the wrong probabilities...
I think you need to keep track of the all unique values for each dimension (from the entire dataset), and take that into consideration during the counting process.
| Laplacian smoothing to Biopython | I am trying to add Laplacian smoothing support to Biopython's Naive Bayes code 1 for my Bioinformatics project.
I have read many documents about the Naive Bayes algorithm and Laplacian smoothing, and I think I got the basic idea, but I just can't integrate this with that code (actually, I cannot see in which part I should add the Laplacian +1).
I am not familiar with Python and I am a newbie coder. I appreciate if anyone familiar with Biopython can give me some suggestions.
| [
"Try using this definition of the _contents() method instead:\ndef _contents(items, laplace=False):\n # count occurrences of values\n counts = {}\n for item in items:\n counts[item] = counts.get(item,0) + 1.0\n # normalize\n for k in counts:\n if laplace:\n counts[k] += 1.0\n... | [
4
] | [] | [] | [
"bayesian",
"biopython",
"machine_learning",
"python"
] | stackoverflow_0004011115_bayesian_biopython_machine_learning_python.txt |
Q:
Pylons and Memcached
Anyone happen to use this combination in their web application? I'm having a bit of trouble finding some sort of tutorial or guideline for configuring this. Also seeing as how I started using Pylons recently I'm not familiar so please keep the advice very newbie friendly ( I haven't even used modules like Beaker all that much ).
I'm using MySQL, running pastie HTTP server, just installed memcached package on Debian, using SQLAlchemy ORM to interact with my DB in my Pylons app, now I'm not sure what to do.
A:
memcached is nice and framework-agnostic, and you just have to write a bit of code to interact with it. The general idea of memcached is:
object = try_memcached()
if not object:
object = real_query()
put_in_memcached(object)
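Sketched concretely (a plain dict stands in for the memcache.Client so the snippet runs without a memcached server; with python-memcached you would call client.get/client.set instead, and the query function here is hypothetical):

```python
cache = {}  # stand-in for memcache.Client(['127.0.0.1:11211'])

def real_query(key):
    # Placeholder for the expensive SQLAlchemy query.
    return 'row-for-%s' % key

def fetch(key):
    obj = cache.get(key)       # try_memcached()
    if obj is None:
        obj = real_query(key)  # real_query()
        cache[key] = obj       # put_in_memcached(object)
    return obj

print(fetch('user:42'))  # cache miss: runs the query and stores the result
print(fetch('user:42'))  # cache hit: served straight from the cache
```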
That will likely be done in your SQLAlchemy abstraction, in your case. Since I'm unfamiliar with your entire platform (and only memcached), I did a bit of Googling.
This blogger appears to have implemented them together, and has helpfully provided a link to the code he uses. The relevant code appears to be this, which might make sense to you:
#!/usr/bin/python
"""
memcached objects for use with SQLAlchemy
"""
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import memcache
import sqlalchemy
import sqlalchemy.orm
SQLA_SESSION = sqlalchemy.orm.sessionmaker()
MEMCACHED_CLIENT = memcache.Client(['127.0.0.1:11211'])
class DetachedORMObject(object):
"""
Session-detached object for use with ORM Mapping. As the SQLAlchemy
documentation indicates, creating and closing a session is not analogous
to creating and closing a database connection. Connections are pooled by
the database engine by default. Creating new sessions is of a minimal
cost. Also, objects using this wrapper will not likely interact in with
the database through the full power of SQLAlchemy queries.
"""
@classmethod
def fetch_by_field(cls, field, value):
"""Fetch a mapped orm object with the give field and value"""
session = SQLA_SESSION()
try:
class_object = session.query(cls).filter(field == value).one()
except sqlalchemy.orm.exc.NoResultFound:
class_object = None
finally:
session.close()
return class_object
def update(self):
"""Update the database with the values of the object"""
session = SQLA_SESSION()
session.add(self)
session.commit()
session.refresh(self)
session.close()
def refresh(self):
"""Refresh the object with the values of the database"""
session = SQLA_SESSION()
session.add(self)
session.refresh(self)
session.close()
def delete(self):
"""Delete the object from the database"""
session = SQLA_SESSION()
session.add(self)
session.delete(self)
session.commit()
session.close()
class MemcachedObject(object):
"""
Object Wrapper for serializing objects in memcached. Utilizes an abstract
    method, get_instance_key, to understand how to get and set objects that
    implement this class.
"""
@classmethod
def get_cached_instance(cls, instance_key):
"""Retrieve and return the object matching the instance_key"""
key = str(cls.__module__ + '.' + cls.__name__ + ':' \
+ str(instance_key))
print "Memcached Getting:", key
return MEMCACHED_CLIENT.get(key)
def set_cached_instance(self, time=0, min_compress_len=0):
"""Set the cached instance of an object"""
print "Memcached Setting:", self.get_cache_key()
return MEMCACHED_CLIENT.set(self.get_cache_key(), self, time, \
min_compress_len)
def delete_cached_instance(self, time=0):
"""Wrapper for the memcached delete method"""
print "Memcached Deleting:", self.get_cache_key()
return MEMCACHED_CLIENT.delete(self.get_cache_key(), time)
def get_cache_key(self):
"""Prepends the full class path of the object to the instance key"""
return self.__class__.__module__ + '.' + \
self.__class__.__name__ + ':' + self.get_instance_key()
def get_instance_key(self):
"""Get the instance key, must be implemented by child objects"""
raise NotImplementedError \
("'GetInstanceKey' method has not been defined.")
class MemcachedORMObject(DetachedORMObject, MemcachedObject):
"""
Putting it all together now. Implements both of the above classes. Worth
noting is the method for checking to see if the fetch_by_field method is
invoked using a primary key of the class. The same technique is used to
generate an instance key for an instance of the class.
"""
@classmethod
def fetch_by_field(cls, field, value):
"""Fetch the requested object from the cache and database"""
orm_object = None
matched_primary_key = True
for key in cls._sa_class_manager.mapper.primary_key:
if field.key != key.key:
matched_primary_key = False
if matched_primary_key:
orm_object = cls.get_cached_instance('(' + str(value) + ')')
if orm_object is None:
orm_object = super(MemcachedORMObject, cls). \
fetch_by_field(field, value)
if orm_object is not None:
orm_object.set_cached_instance()
return orm_object
def update(self):
"""Update the object in the database and memcached"""
DetachedORMObject.update(self)
self.set_cached_instance()
def refresh(self):
"""Refresh the object from the database and memcached"""
DetachedORMObject.refresh(self)
self.set_cached_instance()
def delete(self):
"""Delete the object from the database and memcached"""
DetachedORMObject.delete(self)
self.delete_cached_instance()
def get_instance_key(self):
"""Get the instance key, implimenting abstract method in base"""
key = []
for column in self._sa_instance_state.manager.mapper.primary_key:
key.append('(' + str(getattr(self, column.key)) + ')')
return ''.join(key)
Not sure if that helps, but there you have it. You can see that memcached idiom in use:
if matched_primary_key:
orm_object = cls.get_cached_instance('(' + str(value) + ')')
if orm_object is None:
orm_object = super(MemcachedORMObject, cls). \
fetch_by_field(field, value)
A:
Pylons recommends Beaker for caching and it has a Memcache backend. See here
| Pylons and Memcached | Anyone happen to use this combination in their web application? I'm having a bit of trouble finding some sort of tutorial or guideline for configuring this. Also seeing as how I started using Pylons recently I'm not familiar so please keep the advice very newbie friendly ( I haven't even used modules like Beaker all that much ).
I'm using MySQL, running pastie HTTP server, just installed memcached package on Debian, using SQLAlchemy ORM to interact with my DB in my Pylons app, now I'm not sure what to do.
| [
"memcached is nice and framework-agnostic, and you just have to write a bit of code to interact with it. The general idea of memcached is:\nobject = try_memcached()\nif not object:\n object = real_query()\n put_in_memcached(object)\n\nThat will likely be done in your SQLAlchemy abstraction, in your case. Sin... | [
7,
1
] | [] | [] | [
"memcached",
"pylons",
"python"
] | stackoverflow_0001738250_memcached_pylons_python.txt |
Q:
Installing MySQLdb on Snow Leopard
I followed this tutorial to install Django with MySQL on my Snow Leopard :
http://programmingzen.com/2007/12/22/how-to-install-django-with-mysql-on-mac-os-x/
When I run this command :
python setup.py build
I get a lot of errors, the last one is :
error: command 'gcc-4.0' failed with exit status 1
These are the first lines that I get after executing the command line
running build
running build_py
copying MySQLdb/release.py -> build/lib.macosx-10.5-fat3-2.7/MySQLdb
running build_ext
building '_mysql' extension
gcc-4.0 -fno-strict-aliasing -fno-common -dynamic -arch i386 -arch ppc -arch x86_64 -g -O2 -DNDEBUG -g -O3 -Dversion_info=(1,2,3,'final',0) -D__version__=1.2.3 -I/Applications/MAMP/Library/include/mysql -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.5-fat3-2.7/_mysql.o -fno-omit-frame-pointer -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDONT_DECLARE_CXA_PURE_VIRTUAL
_mysql.c:36:23: error: my_config.h: No such file or directory
_mysql.c:38:19: error: mysql.h: No such file or directory
_mysql.c:39:26: error: mysqld_error.h: No such file or directory
_mysql.c:40:20: error: errmsg.h: No such file or directory
_mysql.c:76: error: syntax error before ‘MYSQL’
_mysql.c:76: warning: no semicolon at end of struct or union
_mysql.c:79: error: syntax error before ‘}’ token
Can somebody help me to fix that ?
Thank you :-)
A:
Looks like you're missing MySQL development headers. Apple's support site has a (possible) solution: http://support.apple.com/kb/TA25017
| Installing MySQLdb on Snow Leopard | I followed this tutorial to install Django with MySQL on my Snow Leopard :
http://programmingzen.com/2007/12/22/how-to-install-django-with-mysql-on-mac-os-x/
When I run this command :
python setup.py build
I get a lot of errors, the last one is :
error: command 'gcc-4.0' failed with exit status 1
These are the first lines that I get after executing the command line
running build
running build_py
copying MySQLdb/release.py -> build/lib.macosx-10.5-fat3-2.7/MySQLdb
running build_ext
building '_mysql' extension
gcc-4.0 -fno-strict-aliasing -fno-common -dynamic -arch i386 -arch ppc -arch x86_64 -g -O2 -DNDEBUG -g -O3 -Dversion_info=(1,2,3,'final',0) -D__version__=1.2.3 -I/Applications/MAMP/Library/include/mysql -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.5-fat3-2.7/_mysql.o -fno-omit-frame-pointer -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDONT_DECLARE_CXA_PURE_VIRTUAL
_mysql.c:36:23: error: my_config.h: No such file or directory
_mysql.c:38:19: error: mysql.h: No such file or directory
_mysql.c:39:26: error: mysqld_error.h: No such file or directory
_mysql.c:40:20: error: errmsg.h: No such file or directory
_mysql.c:76: error: syntax error before ‘MYSQL’
_mysql.c:76: warning: no semicolon at end of struct or union
_mysql.c:79: error: syntax error before ‘}’ token
Can somebody help me to fix that ?
Thank you :-)
| [
"Looks like you're missing MySQL development headers. Apple's support site has a (possible) solution: http://support.apple.com/kb/TA25017\n"
] | [
0
] | [] | [] | [
"django",
"mysql",
"python"
] | stackoverflow_0004011500_django_mysql_python.txt |
Q:
Medical information extraction using Python
I am a nurse and I know python but I am not an expert, just used it to process DNA sequences
We got hospital records written in human languages and I am supposed to insert these data into a database or csv file but they are more than 5000 lines and this can be so hard. All the data are written in a consistent format let me show you an example
11/11/2010 - 09:00am : He got nausea, vomiting and died 4 hours later
I should get the following data
Sex: Male
Symptoms: Nausea
Vomiting
Death: True
Death Time: 11/11/2010 - 01:00pm
Another example
11/11/2010 - 09:00am : She got heart burn, vomiting of blood and died 1 hours later in the operation room
And I get
Sex: Female
Symptoms: Heart burn
Vomiting of blood
Death: True
Death Time: 11/11/2010 - 10:00am
the order is not consistent, but when I say in ....... then in is a keyword and all the text after it is a place, until I find another keyword
At the beginning, He or She determines the sex; got ........ whatever follows is a group of symptoms that I should split according to the separator, which can be a comma, hyphen or whatever, but it's consistent within the same line
died ..... hours later should also give how many hours; sometimes the patient is still alive and discharged ....etc
That's to say, we have a lot of conventions, and I think if I can tokenize the text with keywords and patterns I can get the job done. So please, if you know a useful function/module/tutorial/tool for doing that, preferably in Python (if not Python, then a GUI tool would be nice)
Some more information:
there are a lot of rules to express various medical data, but here are a few examples
- Start with the same date/time format, followed by a space, followed by a colon, followed by a space, followed by He/She, followed by a space, followed by rules separated by and
- Rules:
* got <symptoms>,<symptoms>,....
* investigations were done <investigation>,<investigation>,<investigation>,......
* received <drug or procedure>,<drug or procedure>,.....
* discharged <digit> (hour|hours) later
* kept under observation
* died <digit> (hour|hours) later
* died <digit> (hour|hours) later in <place>
other rules do exist but they follow the same idea
A:
Here are some possible way you can solve this -
Using Regular Expressions - Define them according to the patterns in your text. Match the expressions, extract pattern and you repeat for all records. This approach needs good understanding of the format in which the data is & of course regular expressions :)
String Manipulation - This approach is relatively simpler. Again one needs a good understanding of the format in which the data is. This is what I have done below.
Machine Learning - You could define all your rules and train a model on them. After this, the model tries to extract data using the rules you provided. This is a much more generic approach than the first two, and also the toughest to implement.
See if this works for you. It might need some adjustments.
new_file = open('parsed_file', 'w')
for rec in open("your_csv_file"):
tmp = rec.split(' : ')
date = tmp[0]
reason = tmp[1]
if reason[:2] == 'He':
sex = 'Male'
symptoms = reason.split(' and ')[0].split('He got ')[1]
else:
sex = 'Female'
symptoms = reason.split(' and ')[0].split('She got ')[1]
symptoms = [i.strip() for i in symptoms.split(',')]
symptoms = '\n'.join(symptoms)
if 'died' in rec:
died = 'True'
else:
died = 'False'
new_file.write("Sex: %s\nSymptoms: %s\nDeath: %s\nDeath Time: %s\n\n" % (sex, symptoms, died, date))
Each record is newline-separated (\n) and, since you did not say otherwise, one patient record is separated from the other by two newlines (\n\n).
LATER: @Nurse what did you end up doing? Just curious.
A:
This uses dateutil to parse the date (e.g. '11/11/2010 - 09:00am'), and parsedatetime to parse the relative time (e.g. '4 hours later'):
import dateutil.parser as dparser
import parsedatetime.parsedatetime as pdt
import parsedatetime.parsedatetime_consts as pdc
import time
import datetime
import re
import pprint
pdt_parser = pdt.Calendar(pdc.Constants())
record_time_pat=re.compile(r'^(.+)\s+:')
sex_pat=re.compile(r'\b(he|she)\b',re.IGNORECASE)
death_time_pat=re.compile(r'died\s+(.+hours later).*$',re.IGNORECASE)
symptom_pat=re.compile(r'[,-]')
def parse_record(astr):
match=record_time_pat.match(astr)
if match:
record_time=dparser.parse(match.group(1))
astr,_=record_time_pat.subn('',astr,1)
else: sys.exit('Can not find record time')
match=sex_pat.search(astr)
if match:
sex=match.group(1)
sex='Female' if sex.lower().startswith('s') else 'Male'
astr,_=sex_pat.subn('',astr,1)
else: sys.exit('Can not find sex')
match=death_time_pat.search(astr)
if match:
death_time,date_type=pdt_parser.parse(match.group(1),record_time)
if date_type==2:
death_time=datetime.datetime.fromtimestamp(
time.mktime(death_time))
astr,_=death_time_pat.subn('',astr,1)
is_dead=True
else:
death_time=None
is_dead=False
astr=astr.replace('and','')
symptoms=[s.strip() for s in symptom_pat.split(astr)]
return {'Record Time': record_time,
'Sex': sex,
'Death Time':death_time,
'Symptoms': symptoms,
'Death':is_dead}
if __name__=='__main__':
tests=[('11/11/2010 - 09:00am : He got nausea, vomiting and died 4 hours later',
{'Sex':'Male',
'Symptoms':['got nausea', 'vomiting'],
'Death':True,
'Death Time':datetime.datetime(2010, 11, 11, 13, 0),
'Record Time':datetime.datetime(2010, 11, 11, 9, 0)}),
('11/11/2010 - 09:00am : She got heart burn, vomiting of blood and died 1 hours later in the operation room',
{'Sex':'Female',
'Symptoms':['got heart burn', 'vomiting of blood'],
'Death':True,
'Death Time':datetime.datetime(2010, 11, 11, 10, 0),
'Record Time':datetime.datetime(2010, 11, 11, 9, 0)})
]
for record,answer in tests:
result=parse_record(record)
pprint.pprint(result)
assert result==answer
print
yields:
{'Death': True,
'Death Time': datetime.datetime(2010, 11, 11, 13, 0),
'Record Time': datetime.datetime(2010, 11, 11, 9, 0),
'Sex': 'Male',
'Symptoms': ['got nausea', 'vomiting']}
{'Death': True,
'Death Time': datetime.datetime(2010, 11, 11, 10, 0),
'Record Time': datetime.datetime(2010, 11, 11, 9, 0),
'Sex': 'Female',
'Symptoms': ['got heart burn', 'vomiting of blood']}
Note: Be careful parsing dates. Does '8/9/2010' mean August 9th, or September 8th? Do all the record keepers use the same convention? If you choose to use dateutil (and I really think that's the best option if the date string is not rigidly structured) be sure to read the section on "Format precedence" in the dateutil documentation so you can (hopefully) resolve '8/9/2010' properly.
If you can't guarantee that all the record keepers use the same convention for specifying dates, then the results of this script would have be checked manually. That might be wise in any case.
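If the record keepers do all follow one known convention, a defensive option (standard library only; separate from the script above) is to pin the format explicitly so '8/9/2010' cannot be misread:

```python
from datetime import datetime

# With an explicit format string, day/month order is unambiguous.
d = datetime.strptime('8/9/2010', '%d/%m/%Y')
print(d.date())  # -> 2010-09-08 (8 September 2010)
```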
A:
Maybe this can help you too; it's not tested
import collections
import datetime
import re
retrieved_data = []
Data = collections.namedtuple('Patient', 'Sex, Symptoms, Death, Death_Time')
dict_data = {'Death':'',
'Death_Time':'',
'Sex' :'',
'Symptoms':''}
with open('data.txt') as f:
for line in iter(f.readline, ""):
date, text = line.split(" : ")
if 'died' in text:
dict_data['Death'] = True
dict_data['Death_Time'] = datetime.datetime.strptime(date,
'%d/%m/%Y - %I:%M%p')
            hours = re.findall(r'\d+', text)
if hours:
dict_data['Death_Time'] += datetime.timedelta(hours=int(hours[0]))
if 'she' in text:
dict_data['Sex'] = 'Female'
else:
dict_data['Sex'] = 'Male'
symptoms = text[text.index('got'):text.index('and')].split(',')
dict_data['Symptoms'] = '\n'.join(symptoms)
retrieved_data.append(Data(**dict_data))
# EDIT : Reset the data dictionary.
dict_data = {'Death':'',
'Death_Time':'',
'Sex' :'',
'Symptoms':''}
A:
It would be relatively easy to do most of the processing with regards to sex, date/time, etc., as those before you have shown, since you can really just define a set of keywords that would indicate these things and use those keywords.
However, the matter of processing symptoms is a bit different, as a definitive list of keywords representing symptoms would be difficult and most likely impossible.
Here's the choice you have to make: does processing this data really represent enough work to spend days writing a program to do it for me? If that's the case, then you should look into natural language processing (or machine learning, as someone before me said). I've heard pretty good things about nltk, a natural language toolkit for Python. If the format is as consistent as you say it is, the natural language processing might not be too difficult.
But, if you're not willing to expend the time and effort to tackle a truly difficult CS problem (and believe me, natural language processing is), then you ought to do most of the processing in Python by parsing dates, gender-specific pronouns, etc. and enter in the tougher parts by hand (e.g. symptoms).
Again, it depends on whether or not you think the programmatic or the manual solution will take less time in the long run.
Q:
Medical information extraction using Python
I am a nurse and I know python but I am not an expert, just used it to process DNA sequences
We got hospital records written in human languages and I am supposed to insert these data into a database or csv file but they are more than 5000 lines and this can be so hard. All the data are written in a consistent format let me show you an example
11/11/2010 - 09:00am : He got nausea, vomiting and died 4 hours later
I should get the following data
Sex: Male
Symptoms: Nausea
Vomiting
Death: True
Death Time: 11/11/2010 - 01:00pm
Another example
11/11/2010 - 09:00am : She got heart burn, vomiting of blood and died 1 hours later in the operation room
And I get
Sex: Female
Symptoms: Heart burn
Vomiting of blood
Death: True
Death Time: 11/11/2010 - 10:00am
the order is not consistent, but when I say in ....... then in is a keyword and all the text after is a place until I find another keyword
At the beginning He or She determines sex; got ........ whatever follows is a group of symptoms that I should split according to the separator, which can be a comma, hyphen or whatever, but it's consistent for the same line
died ..... hours later should also give how many hours; sometimes the patient is still alive and discharged ....etc
That's to say we have a lot of conventions, and I think if I can tokenize the text with keywords and patterns I can get the job done. So please let me know if you know a useful function/module/tutorial/tool for doing that, preferably in Python (if not Python, a GUI tool would be nice)
Some few information:
there are a lot of rules to express various medical data but here are few examples
- Start with the same date/time format followed by a space followd by a colon followed by a space followed by He/She followed space followed by rules separated by and
- Rules:
* got <symptoms>,<symptoms>,....
* investigations were done <investigation>,<investigation>,<investigation>,......
* received <drug or procedure>,<drug or procedure>,.....
* discharged <digit> (hour|hours) later
* kept under observation
* died <digit> (hour|hours) later
* died <digit> (hour|hours) later in <place>
other rules do exist but they follow the same idea
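Since the rules above are close to a regular grammar, a keyword/regex tokenizer is a reasonable first attempt. The sketch below is only an illustration: it covers the "got <symptoms>,..." and "died <digit> hours later [in <place>]" rules with made-up pattern names; the other rules would be added the same way.

```python
import re

# Hypothetical patterns for two of the rules listed above; the remaining
# rules ("received", "investigations were done", ...) follow the same shape.
DIED_RE = re.compile(r'died (\d+) (?:hours|hour) later(?: in (.+))?')
GOT_RE = re.compile(r'got ([^.]+?)(?: and |$)')

def parse_record(text):
    """Extract sex, symptoms and death info from one record body."""
    record = {'Sex': 'Female' if text.startswith('She') else 'Male',
              'Symptoms': [], 'Death': False, 'Death_Hours': None,
              'Place': None}
    died = DIED_RE.search(text)
    if died:
        record['Death'] = True
        record['Death_Hours'] = int(died.group(1))
        record['Place'] = died.group(2)  # None when no "in <place>" part
    got = GOT_RE.search(text)
    if got:
        record['Symptoms'] = [s.strip() for s in got.group(1).split(',')]
    return record

rec = parse_record('She got heart burn, vomiting of blood and '
                   'died 1 hours later in the operation room')
```

The death time itself would then come from adding `Death_Hours` to the parsed record timestamp, as in the code at the top of this answer.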
Q:
svg image - zoom and borders
I have the following small svg file hosted at: http://dl.dropbox.com/u/7393/hmap.svg and I am trying to make it look like this: http://dl.dropbox.com/u/7393/final.png
The docs where I got this from mention that this change can easily be obtained by changing a couple of tags in the svg file. However I have tried changing everything and I still can't 1) Zoom, 2) Crop, 3) Make the borders white.
Does any one here have familiarity with svg (xml) image file format to point me in the right direction here?
Thanks.
A:
To zoom change the viewbox setting. The file uses one on the first and only svg tag at the top of the file. The viewbox you are using is for the full image viewBox="0 -500 2752.766 2537.631" and you are placing it into a box with height="476.7276" width="634.26801" pixels. So a line 1 unit wide would be about 1/5 of a pixel.
To zoom in on say Europe change it to ... viewBox="1000 -500 1000 1000" ... this basically says start at unit 1000 from the left and -500 units from the top and crop at 1000 units to the right and 1000 units down.
The white lines have stroke-width:0.99986893 units. To make them a whole pixel wide you would need to zoom in closer: viewBox="1200 -5 450 450"
You are using both style at the top of the document and inline styles, the inline styles overwrite any other styles.
<g class="land fr" id="fr"
transform="matrix(1.229834,0,0,1.1888568,-278.10861,-149.0924)"
style="fill-opacity:1;stroke:#ffffff;stroke-width:10;
stroke-miterlimit:3.97446823;stroke-dasharray:none;stroke-opacity:1;
fill:#00ff00">
A stroke width of about 10 should work for the full-size image being displayed in a 640 x 480 viewport (the width and height set in the first and only svg tag), but to do this it needs to be set on every element
<path d="M 2218.0062,810.62352 C 2217.5173,811.14292 2217.698,811.38357
2218.5472,811.34547 C 2218.3665,811.10481 2218.1868,810.86417
2218.0062,810.62352"
id="path2404"
style="fill-opacity:1;stroke:#ffffff;stroke-width:0.99986994;
stroke-miterlimit:3.97446823;stroke-dasharray:none;stroke-opacity:1;
fill:#6BAED6">
or the style needs to be removed from the elements to use the parent style.
<path d="M 2218.0062,810.62352 C 2217.5173,811.14292 2217.698,811.38357
2218.5472,811.34547 C 2218.3665,811.10481 2218.1868,810.86417
2218.0062,810.62352" id="path2404">
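If editing the file by hand is tedious, the same two changes can be scripted with the standard-library ElementTree. This is only a sketch: it runs on a tiny stand-in document rather than the real hmap.svg, and the viewBox and style values are illustrative.

```python
import xml.etree.ElementTree as ET

SVG_NS = 'http://www.w3.org/2000/svg'
ET.register_namespace('', SVG_NS)  # keep the default svg namespace on output

# A tiny stand-in for the real document.
doc = ('<svg xmlns="%s" viewBox="0 -500 2752.766 2537.631">'
       '<path d="M 0,0 L 10,10" style="stroke-width:0.99986994"/></svg>' % SVG_NS)
root = ET.fromstring(doc)

# Crop/zoom: show only the region starting at (1000, -500), 1000x1000 units.
root.set('viewBox', '1000 -500 1000 1000')

# Make the borders visible: white stroke with a heavier width on every path,
# overriding the inline styles discussed above.
for path in root.iter('{%s}path' % SVG_NS):
    path.set('style', 'stroke:#ffffff;stroke-width:10')

result = ET.tostring(root, encoding='unicode')
```

For the real file you would use `ET.parse(filename)` and `tree.write(...)` instead of the inline string.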
Q:
How to get the smallest list of duplicate items from list of lists?
I'm a Python newbie, I worked with list for 2 months and I have some questions. I have some list and they have duplicate items. I can get duplicate items between 2 lists, now I want the number of lists and the deepness increased like this example:
http://i219.photobucket.com/albums/cc213/DoSvn/example.png.
I want to get parents of duplicate items from red part, not blue part or list of these duplicate items. How can I do it ?
Thank you :)
Update:
Thanks for your answers :D I have used set and it's great. But suppose I don't know the size of the list of lists and nothing more, since they are dynamic lists. Can I get all of the red parts as in this example: http://i219.photobucket.com/albums/cc213/DoSvn/example02.png ?
A:
If you are searching something like this: http://i219.photobucket.com/albums/cc213/DoSvn/example02.png
Then you can try the Counter (available in Python 2.7+). It should work like this:
from collections import Counter
c = Counter()
for s in (listOfLists):
c.update(s)
for item, nbItems in c.iteritems():
if nbItems == 3:
print '%s belongs to three lists.' % item
Or with older Pythons:
counter = {}
for s in (listOfLists):
for elem in s:
counter[elem] = counter.get(elem, 0) + 1
for item, nbItems in counter.iteritems():
if nbItems == 3:
print '%s belongs to three lists.' % item
A:
Use sets and you can get intersection, union, subtraction or any complex combination
s1 = set([1, 2, 3, 4, 5])
s2 = set([4, 5, 6, 7, 8])
s3 = set([1, 3, 5, 7, 9])
# now to get duplicate between s1, s2 and s2 take intersection
print s1&s2&s3
output:
set([5])
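For the updated question, where the number of lists is not known in advance, both ideas generalize: set.intersection accepts any number of sets, and a Counter over per-list membership finds the items shared by at least two lists (the red parts of the second picture).

```python
from collections import Counter

listOfLists = [[1, 2, 3, 4, 5], [4, 5, 6, 7, 8], [1, 3, 5, 7, 9]]

# Items present in every list, however many lists there are.
common = set.intersection(*(set(l) for l in listOfLists))

# Items present in at least two lists; set(l) guards against counting an
# item twice when it is duplicated inside a single list.
counts = Counter(x for l in listOfLists for x in set(l))
in_two_or_more = {x for x, n in counts.items() if n >= 2}
```

With the three example lists above, `common` contains only 5, while `in_two_or_more` also picks up 1, 3, 4 and 7.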
Q:
In mako template: call python function within html string
How do I do this in mako:
<% import cherrypy %>
...
<link rel="stylesheet" href="${cherrypy.url('/media/layout.css')}" type="text/css" />
AttributeError: 'Undefined' object has no attribute 'url'
A:
Answer: Instead of <% import cherrypy %> one needs <%! import cherrypy %>
Q:
Python2.6 xmpp Jabber Error
I am using xmpp with python and I want create a simple client to communicate with a gmail
id.
#!/usr/bin/python
import xmpp
login = 'Your.Login' # @gmail.com
pwd = 'YourPassword'
cnx = xmpp.Client('gmail.com')
cnx.connect( server=('talk.google.com',5223) )
cnx.auth(login,pwd, 'botty')
cnx.send( xmpp.Message( "YourFriend@gmail.com" ,"Hello World form Python" ) )
When I run the last line I get an exception
IOError: Disconnected from server.
Also when I run the other statements I get debug messages in the console.
What could be the issue and how can I resolve it ?
A:
Here is how I did it on my PyTalk client.
Don't forget the @gmail.com in the userID.
I think you should try to connect talk.google.com on the 5222 port.
Also try to specify a resource for the auth.
import xmpp
import sys
userID = 'Your.Login@gmail.com'
password = 'YourPassword'
ressource = 'Script'
jid = xmpp.protocol.JID(userID)
jabber = xmpp.Client(jid.getDomain(), debug=[])
connection = jabber.connect(('talk.google.com',5222))
if not connection:
sys.stderr.write('Could not connect\n')
else:
sys.stderr.write('Connected with %s\n' % connection)
auth = jabber.auth(jid.getNode(), password, ressource)
if not auth:
sys.stderr.write("Could not authenticate\n")
else:
sys.stderr.write('Authenticate using %s\n' % auth)
jabber.sendInitPresence(requestRoster=1)
jabber.send(xmpp.Message( "YourFriend@gmail.com" ,"Hello World form Python" ))
By the way, it looks very close from Philip Answer
A:
Try this code snippet. I didn't handle the error conditions for simplicity's sake.
import xmpp
login = 'Your.Login' # @gmail.com
pwd = 'YourPassword'
jid = xmpp.protocol.JID(login)
cl = xmpp.Client(jid.getDomain(), debug=[])
if cl.connect(('talk.google.com',5223)):
print "Connected"
else:
    print "Connection failed"
if cl.auth(jid.getNode(), pwd):
cl.sendInitPresence()
cl.send(xmpp.Message( "YourFriend@gmail.com" ,"Hello World form Python" ))
else:
print "Authentication failed"
To switch off the debugging messages, pass debug=[] for the 2nd parameter on the Client class's constructor:
cl = xmpp.Client(jid.getDomain(), debug=[])
A:
I think you should write it like this. I tested it in Python 2.7 with xmpppy 0.5.0rc1 and it works nicely :P :) :
import xmpp
login = 'your mail@gmail.com' # @gmail.com
pwd = 'your pass'
text='Hello worlD!'
tojid='your friend @gmail.com'
jid = xmpp.protocol.JID(login)
cl = xmpp.Client(jid.getDomain(), debug=[])
if cl.connect(('talk.google.com',5223)):
print "Connected"
else:
    print "Connection failed"
if cl.auth(jid.getNode(), pwd):
cl.sendInitPresence()
cl.send(xmpp.protocol.Message(tojid,text))
else:
print "Authentication failed"
A:
I think you need to call sendInitPresence before sending the first message:
...
cnx.auth(login,pwd, 'botty')
cnx.sendInitPresence()
cnx.send( xmpp.Message( "YourFriend@gmail.com" ,"Hello World form Python" ) )
Q:
A question about lists in Python
I'm having some trouble with a couple of homework questions and I can't find the answer.
How would you write an expression that removes the first or last element of a list?
i.e. One of my questions reads "Given a list named 'alist' , write an expression that removes the last element of 'alist'"
A:
Have you looked at this? http://docs.python.org/tutorial/datastructures.html
Particularly at pop([i])?
Your assignment sounds like a standard question in functional programming. Are you supposed to use lambdas?
A:
I'm pretty sure its as simple as "alist.pop()"
A:
Here's how you do it in Python -
x = range(10) #creaete list
no_first = x[1:]
no_last = x[:-1]
no_first_last = x[1:-1]
UPDATE: del in list? Never heard of that. Do you mean pop?
A:
>>> a=[1,2,3,4]
>>> a
[1, 2, 3, 4]
>>> del a[0] # delete the first element
>>> a
[2, 3, 4]
>>> del a[-1] # delete the last element
>>> a
[2, 3]
It's also possible to delete them both at once
>>>
>>> a=[1,2,3,4,5,6]
>>> del a[::len(a)-1]
>>> a
[2, 3, 4, 5]
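To summarize the options shown above: pop returns the removed element and mutates the list, del mutates without returning anything, and slicing leaves the original untouched and builds a new list.

```python
alist = [1, 2, 3, 4]

last = alist.pop()    # removes and returns the last element
first = alist.pop(0)  # removes and returns the first element
# alist is now [2, 3]

blist = [1, 2, 3, 4]
del blist[-1]         # removes the last element in place, returns nothing
trimmed = blist[1:]   # new list without the first element; blist unchanged
```

Note that `alist.pop()` is an expression while `del alist[-1]` is a statement, which may matter for the exact wording of the homework question ("write an expression that removes the last element").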
Q:
Python (creating a negative of this black and white image)
I am trying to create a negative of this black and white image. The opposite of white (255) is black (0) and vice versa. The opposite of a pixel with a value of 100 is 155.
I cannot use convert, invert, point, eval, lambda.
Here is my code but it doesnt work yet. Could you please let me know which part i am wrong.
def bw_negative(filename):
"""
This function creates a black and white negative of a bitmap image
using the following parameters:
filename is the name of the bitmap image
"""
#Create the handle and then create a list of pixels.
image = Image.open(filename)
pixels = list(image.getdata())
pixel[255] = 0
pixel[0] = 255
for i in range(255,0):
for j in range(0,255):
pixel[i] = j
print pixels[i]
image.putdata(pixels)
image.save ('new.bmp')
A:
Python is an interpreted language, which has the advantage that you can use an interactive interpreter-session to try out things. Try to open the image file in an interactive session and look at the list you get from list(image.getdata()). Once you understand what that list contains, you can think about a way to invert the image.
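Once you see that getdata() yields plain integer intensities for a grayscale bitmap, the inversion is just 255 - value for each pixel. A minimal sketch on a bare pixel list (the Image.open/putdata/save calls from the question would stay exactly as they are, so this stays within the "no convert/invert/point/eval/lambda" constraint):

```python
def invert_pixels(pixels):
    """Return the negative of a list of 8-bit grayscale values."""
    return [255 - p for p in pixels]

# Stand-in for list(image.getdata()) on a tiny three-pixel image:
# white stays opposite of black, and 100 maps to 155 as the question requires.
pixels = [0, 100, 255]
negative = invert_pixels(pixels)
```

For an RGB image, getdata() yields tuples instead of ints, and you would invert each channel separately.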
Q:
python running multiple instances
Hi, let's assume I have a simple program in Python. This program is run every five minutes through cron, but I don't know how to write it so that it will allow multiple processes of itself to run simultaneously. I want to speed things up ...
A:
I'd handle the forking and process control inside your main python program. Let the cron spawn only a single process and that process be a master for (possible multiple) worker processes.
As for how you can create multiple workers, there's the threading module for multi threading and multiprocessing module for multi processing. You can also keep your actual worker code as separate files and use the subprocess module.
Now that I think about it, maybe you should use supervisord to do the actual process control and simply write the actual work code.
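As a sketch of the master/worker shape with standard-library tools only (in current Python versions, concurrent.futures wraps the threading and multiprocessing modules mentioned above): cron starts one master, and the master fans the work out to a pool. The work function here is just a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor

def do_work(item):
    # Placeholder for the real per-item job (network call, file, ...).
    return item * 2

items = range(10)
with ThreadPoolExecutor(max_workers=4) as pool:
    # map preserves input order in the results.
    results = list(pool.map(do_work, items))
```

For CPU-bound Python work you would swap in ProcessPoolExecutor (or the multiprocessing module directly) to sidestep the GIL; for I/O-bound work the thread pool shown here is usually enough.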
Q:
How do I forcefully clean a field and redisplay it in Django?
How can I clean the data in a form and have the cleaned data redisplayed instead of the submitted data?
There are several fields in my form, and every time the user submits it, it should be redisplayed with the values the user entered. However, some of the fields I would like to clean and update for the user. More specifically, I have a field FriendlyIntegerField(forms.CharField) in which I override to_python to not only call int(str(value)), but also set any negative number to 0 etc. I do not want to redisplay the form with the invalid data and have the user fix it himself (which is how Django wants me to do it).
I don't have a problem cleaning the data and use it for the rest of my view-function, but how can I update the actual form with this data?
By the way, the form does not reflect a structure in my data model, and so inherits from Form, not ModelForm.
Edit:
My Field (in a stripped down version) looks like this:
class FriendlyIntegerField(forms.CharField):
def to_python(self, value):
try:
return str(int(str(value).replace(' ','')))
except:
raise forms.ValidationError('some error msg')
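The stripped-down field above raises on bad input but does not yet clamp negatives to 0 as the question describes. As a framework-free sketch (the helper name is made up), the cleaning logic can be isolated in a plain function, which keeps it easy to test; to_python would then just delegate to it and wrap ValueError in a ValidationError:

```python
def clean_friendly_integer(value):
    """Normalize input like ' 1 200 ' or '-5' to a non-negative int string.

    Raises ValueError for non-numeric input, mirroring the field's
    ValidationError path.
    """
    number = int(str(value).replace(' ', ''))
    return str(max(number, 0))  # clamp negatives to 0
```
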
My Form (in a stripped down version) looks like this:
class SearchForm(forms.Form):
price_from = FriendlyIntegerField()
price_to = FriendlyIntegerField()
And my view:
def search(request, key):
if request.method == 'POST':
form = SearchForm(request.REQUEST)
if not form.is_valid():
print "Form not valid"
else:
form = SearchForm()
    return render_to_response('path_to_template', {'form': form})
A:
If, after you've cleaned your form with is_valid(), you render that cleaned form with your view, rather than redirect to a new page, you'll see the cleaned data in your page.
(If you wanted the user to see this cleaned data and then properly submit it, you could use a hidden field to track whether the form data has already been cleaned, but this isn't without complications...)
Q:
How to speed up transfer of images from client to server
I am solving a problem of transferring images from a camera in a loop from a client (a robot with camera) to a server (PC).
I am trying to come up with ideas how to maximize the transfer speed so I can get the best possible FPS (that is because I want to create a live video stream out of the transferred images). Disregarding the physical limitations of WIFI stick on the robot, what would you suggest?
So far I have decided:
to use YUV colorspace instead of RGB
to use UDP protocol instead of TCP/IP
Is there anything else I could do to get the maximum fps possible?
A:
This might be quite a bit of work but if your client can handle the computations in real time you could use the same method that video encoders use. Send a key frame every say 5 frames and in between only send the information that changed not the whole frame. I don't know the details of how this is done, but try Googling p-frames or video compression.
A:
Compress the difference between successive images. Add some checksum. Provide some way for the receiver to request full image data for the case where things get out of synch.
There are probably a host of protocols doing that already.
So, search for live video stream protocols.
Cheers & hth.,
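A toy version of the delta idea, on frames represented as byte strings: send only the positions that changed since the previous frame, plus a checksum so the receiver can detect drift and request a full key frame. Real video encoders work on pixel blocks and motion vectors rather than individual bytes, so treat this purely as a sketch of the protocol shape.

```python
import zlib

def frame_delta(prev, cur):
    """Return ([(index, new_byte), ...], checksum_of_cur) for same-size frames."""
    changes = [(i, cur[i]) for i in range(len(cur)) if prev[i] != cur[i]]
    return changes, zlib.crc32(cur) & 0xffffffff

def apply_delta(prev, changes):
    """Rebuild the current frame from the previous one plus the change list."""
    frame = bytearray(prev)
    for i, b in changes:
        frame[i] = b
    return bytes(frame)

prev = bytes([10, 20, 30, 40])
cur = bytes([10, 99, 30, 41])
changes, checksum = frame_delta(prev, cur)
rebuilt = apply_delta(prev, changes)
```

On the receiver, a checksum mismatch after applying the delta is the signal to ask the sender for a full frame, which is what keeps the stream recoverable over lossy UDP.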
Q:
how to make installer or exe of python code
Possible Duplicate:
py2exe - generate single executable file
Hi,
I have a python project and wanted to make a exe for windows.
In my project I am using pyqt,python and MiKTeX , xlrd.
How to bundle the project so that user in windows can run the exe and every package get install.
I am not able to find any reading materials on this.
Any link will be helpful.
A:
py2exe, unless you want to hide your logic. I use it to deploy my programs inside the company I work at and it works perfectly well
Q:
How many bytes does a string have
Is there some function which will tell me how many bytes does a string occupy in memory?
I need to set a size of a socket buffer in order to transfer the whole string at once.
A:
If it's a Python 2.x str, get its len. If it's a Python 3.x str (or a Python 2.x unicode), first encode to bytes (or a str, respectively) using your preferred encoding ('utf-8' is a good choice) and then get the len of the encoded bytes/str object.
For example, ASCII characters use 1 byte each:
>>> len("hello".encode("utf8"))
5
whereas Chinese ones use 3 bytes each:
>>> len("你好".encode("utf8"))
6
A:
import sys
sys.getsizeof(s)
# getsizeof(object, default) -> int
# Return the size of object in bytes.
Note that sys.getsizeof reports the in-memory size of the Python object, including interpreter overhead, not the number of bytes on the wire. For the socket buffer you need the encoded length, so len(s) (or len(s.encode('utf-8')) for unicode text) is what you actually want.
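For the socket use case, rather than sizing the buffer to match the string, a common pattern is a length prefix: encode the text, send its byte count in a fixed-size header, then the payload. A sketch of just the framing, without actual sockets:

```python
import struct

def frame_message(text):
    """Encode text and prepend its byte length as a 4-byte big-endian header."""
    payload = text.encode('utf-8')
    return struct.pack('>I', len(payload)) + payload

def unframe_message(data):
    """Inverse of frame_message: read the header, then decode the payload."""
    (length,) = struct.unpack('>I', data[:4])
    return data[4:4 + length].decode('utf-8')

framed = frame_message('你好')  # 2 characters, 6 bytes of payload
```

The receiver first reads exactly 4 bytes, then loops until it has received `length` payload bytes, which transfers the whole string regardless of the buffer size.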
Q:
Best way for a total noob to create a simple web application
I want to make an interactive, simple web application. Where should I start? Preferably Python/Django because it sounds easiest? You tell me. Thanks in advance.
A:
Easiest is not necessarily the best route for any development work. I would want the most suitable framework for the job with the best learning support and best available tools. Some of the worst coding I have ever seen was done the easiest way.
If you are looking for a powerful framework with good supporting community then you have a range of options. If you are a beginner then maybe start with Ruby on Rails as the convention over configuration methodology would lead you down a path of good practices for a web app. If you simply need dynamic content within html pages then probably look at PHP or ASP.NET, again dependant on your platform and experience.
Which ever route you go, I would recommend a period of intense learning and research before you code anything, otherwise you will look back at the project afterwards with the "I wish I knew that before I started" kind of thoughts in your head. Anyway the best platform is the one you enjoy using, good luck finding it.
A:
Depends on what you mean by web application and the scope of your project. More details would help, but with such a general question your going to get a lot of general answers.
For the client side, there's a plethora of javascript toolkits/frameworks to choose from. Most like jQuery, I like Dojo. In my opinion, it doesn't really matter which one you choose. All the popular ones have similar capabilities. Another alternative is Flash.
Server side, you can do dynamic pages with technologies like jsp or php. Pure server side, for doing rest calls for AJAX back-ends, you can still use scripting languages like php, any type of cgi scripting, etc. I build my server-side code with Java/servlets.
But again, with no details of what you're actually trying to do, it's impossible to say what you should use.
A:
I would recommend PHP since it's one of the quickest and easiest languages. A good way to start would be to look at w3schools PHP tutorial.
A:
Python/Django is a fantastic starting point, with great documentation and tutorials on getting started. If you already know Python, I'd definitely recommend it. If you don't, think about doing a Python tutorial first.
Q:
Python and Web applications
I want to start writing python to support some of my web applications.
Mainly i'm trying to fetch pages, send POST data to urls and some string manipulation.
I understand that there are some disadvantages with the urllib in the new versions.
Can anyone please tell me which release is best for my needs?
Thanks
A:
Step 1. Use urllib2 to fetch pages. Save them in flat files
http://docs.python.org/library/urllib2.html
Step 2. Use a WSGI-based server like werkzeug to serve those pages.
http://docs.python.org/library/wsgiref.html
http://werkzeug.pocoo.org/
When you get that working, plug it into a proper web server (like Apache) with mod_wsgi.
http://code.google.com/p/modwsgi/
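Since the question is about fetching pages and sending POST data, a minimal sketch may help. The URL below is a placeholder; encoding the POST body works the same in Python 2 (urllib.urlencode) and Python 3 (urllib.parse.urlencode), while urllib2 itself became urllib.request in Python 3:

```python
from urllib.parse import urlencode

# Build an application/x-www-form-urlencoded POST body.
data = urlencode({'q': 'hello', 'page': 2})
print(data)  # q=hello&page=2

# The request itself, in Python 2 spelling (hypothetical URL):
#   import urllib2
#   resp = urllib2.urlopen('http://example.com/search', data)
#   html = resp.read()
```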
A:
I don't know your needs ... but did you try twill. It can fetch pages, fill forms, and whatever you need. It contains a "scripting languages" and can be embedded in your python application.
http://twill.idyll.org/
| Python and Web applications | I want to start writing python to support some of my web applications.
Mainly i'm trying to fetch pages, send POST data to urls, and do some string manipulation.
I understand that there are some disadvantages with the urllib in the new versions.
Can anyone please tell me which release is best for my needs?
Thanks
| [
"Step 1. Use urllib2 to fetch pages. Save them in flat files\nhttp://docs.python.org/library/urllib2.html\nStep 2. Use a WSGI-based server like werkzeug to serve those pages. \nhttp://docs.python.org/library/wsgiref.html\nhttp://werkzeug.pocoo.org/\nWhen you get that working, plug it into a proper web server (... | [
2,
1
] | [] | [] | [
"python",
"urllib"
] | stackoverflow_0004013512_python_urllib.txt |
Q:
how do I generate a cartesian product of several variables using python iterators?
Dear all,
Given a variable that takes on, say, three values, I'm trying to generate all possible combinations of, say, triplets of these variables.
While this code does the trick,
site_range=[0,1,2]
states = [(s0,s1,s2) for s0 in site_range for s1 in site_range for s2 in site_range]
it's somewhat, uhm, clumsy, and is only getting worse if I try to do the same for combinations of more than three variables
Hence, my Python 101 questions:
How do I go about rewriting the code above using iterators? I mean, is it possible to have an iterator which would yield the elements of the "states" above?
Is it possible to extend this for generating not only triplets, but also 4-plets, 5-plets and so on?
A:
import itertools
site_range = [0, 1, 2]
states = list(itertools.product(site_range, repeat=3))  # repeat = the number of variables
A:
Use itertools.product:
>>> from itertools import product
>>> site_range = [0, 1]
>>> list(product(site_range, repeat=3))
[(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
Edit As @Glenn Maynard points out in a comment, this is not the cartesian product. For this, you will have to check his answer.
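Putting both answers side by side: the nested comprehension from the question and itertools.product produce exactly the same tuples in the same order, and product generalizes to any tuple length by changing repeat:

```python
import itertools

site_range = [0, 1, 2]

# The question's nested comprehension, for triplets:
states_nested = [(s0, s1, s2)
                 for s0 in site_range
                 for s1 in site_range
                 for s2 in site_range]

# The same cartesian product via itertools:
states_product = list(itertools.product(site_range, repeat=3))
assert states_nested == states_product   # same 27 tuples, same order

# 4-plets, 5-plets, ...: just change `repeat`.
assert len(list(itertools.product(site_range, repeat=4))) == 3 ** 4
```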
| how do I generate a cartesian product of several variables using python iterators? | Dear all,
Given a variable that takes on, say, three values, I'm trying to generate all possible combinations of, say, triplets of these variables.
While this code does the trick,
site_range=[0,1,2]
states = [(s0,s1,s2) for s0 in site_range for s1 in site_range for s2 in site_range]
it's somewhat, uhm, clumsy, and is only getting worse if I try to do the same for combinations of more than three variables
Hence, my Python 101 questions:
How do I go about rewriting the code above using iterators? I mean, is it possible to have an iterator which would yield the elements of the "states" above?
Is it possible to extend this for generating not only triplets, but also 4-plets, 5-plets and so on?
| [
"import itertools\nsite_range=[0,1,2]\n[x for x in itertools.product(site_range, repeat=len(site_range))]\n\n",
"Use itertools.product:\n>>> site_range=[0,1]\n>>> list(product(site_range, repeat=3))\n[000 001 010 011 100 101 110 111]\n\nEdit As @Glenn Maynard points out in a comment, this is not the cartesian pro... | [
4,
3
] | [] | [] | [
"cartesian_product",
"iterator",
"python"
] | stackoverflow_0004013730_cartesian_product_iterator_python.txt |
Q:
smallest value of hash() function?
in python (3), what is the smallest value that hash(x) can return?
i want to use hashes to give a quick 'fingerprint' to database values (basically making it easy to see whether two longish, similar texts are actually equal or not), and want to get rid of negative numbers (for simplicity), so i thought i'd just add the smallest possible value to obtain values of zero and up. the manual is very helpfully stating "Hash values are integers." which is about as much as i knew before.
i was a bit surprised today when i found that my hand-compiled python on a 64bit ubuntu apparently uses 64 bits or so for its hashing function; i have always thought that should be 32bit. does machine architecture have an impact on the hash() function?
also, when i compiled python, i did not set any option to compile for a 64bit architecture (hoping it would "just work"). does python adjust that by itself, or do i now have a 32bit python on a 64bit machine? not a silly question i believe, as many times you are offered separate packages depending on the processor.
edit: i strongly suspect the answer will be closely related to sys.maxint which has been sadly removed from python 3. my suspicion is that i should def xhash( x ): return hash( x ) - ( -maxint - 1 ) if maxint was available. i know this value 'lost its value' due to the unification of ints and longs, but here might be one area where it could still prove useful. anybody have an idea how to implement an analogue?
A:
hash() can return any integer, and as you have seen, the size of the integer can vary with the architecture. This is one of the reasons dictionary ordering is arbitrary: the same set of operations on two different platforms can give different results because the hashes used along the way can differ.
If all you are doing is showing a hash for a quick fingerprint, then simply keep a subset of the bits. It's still valid as a hash. The only requirement of a hash function is that equal values must have equal hashes. After that, differences among hashes simply affect the efficiency of the algorithms using the hash, because the chances of collision go up or down.
So for example, you could decide you want an 8-digit hash, and get it by using:
hash(x) % 100000000
Or you could get an eight-character alphanumeric hash to display with:
md5(str(hash(x))).hexdigest()[:8]
A:
hash functions usually use the full range of the return value. The reason is that they usually are constructed with bit operations (shifting, xoring, etc) -- the bits in the return value are all used during the algorithm.
Why are positive values easier or harder than negative ones?
A:
The answer to your question should be:
assert(hash(100) == 100 and hash(-100) == -100)
smallest_hash_value= -2**min(range(256), key=lambda i: hash(-2**i))
This depends on the fact that Python uses the integer itself as a hash (with the exception of -1) iff the integer is a valid hash() result. The algorithm normally should remain the same whatever the architecture.
A:
so today i was luckier at the google casino, and this is what i found:
(1) system architecture whether a given python is running on a 64 or a 32bit machine can be found by
from platform import architecture
print( architecture() )
from the documentation: "Queries the given executable (defaults to the Python interpreter binary) for various architecture information. Returns a tuple (bits, linkage) which contain information about the bit architecture and the linkage format used for the executable. Both values are returned as strings." on my machine, that's ('64bit', 'ELF'). bingo.
(2) smallest integer there is no sys.maxint in python 3 no more, but there is sys.maxsize. the docs say "An integer giving the maximum value a variable of type Py_ssize_t can take. It’s usually 2**31 - 1 on a 32-bit platform and 2**63 - 1 on a 64-bit platform." therefore,
from sys import maxsize
assert maxsize == 2**63 - 1
works on my machine.
(3) to directly answer the original question: "the smallest value of the hash() function should be minus whatever sys.maxsize reports. for this reason, it can be expected that
def xhash( x ): return hash( x ) + sys.maxsize + 1
will only ever report values ≥ 0."
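Putting the thread's pieces together — the offset from the self-answer next to the fixed-width modulo fingerprint from the accepted answer. Both assume CPython, where hash() returns a signed machine-word integer, so the smallest possible value is -sys.maxsize - 1:

```python
import sys

def xhash(x):
    # Shift hash() into the non-negative range (CPython assumption:
    # the smallest possible hash is -sys.maxsize - 1).
    return hash(x) + sys.maxsize + 1

def fingerprint(x, digits=8):
    # Fixed-width non-negative fingerprint, as the accepted answer suggests.
    return hash(x) % 10 ** digits

for v in (0, -1, 12345, "some longish database text", (1, 2, 3)):
    assert xhash(v) >= 0
    assert 0 <= fingerprint(v) < 10 ** 8
```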
| smallest value of hash() function? | in python (3), what is the smallest value that hash(x) can return?
i want to use hashes to give a quick 'fingerprint' to database values (basically making it easy to see whether two longish, similar texts are actually equal or not), and want to get rid of negative numbers (for simplicity), so i thought i'd just add the smallest possible value to obtain values of zero and up. the manual is very helpfully stating "Hash values are integers." which is about as much as i knew before.
i was a bit surprised today when i found that my hand-compiled python on a 64bit ubuntu apparently uses 64 bits or so for its hashing function; i have always thought that should be 32bit. does machine architecture have an impact on the hash() function?
also, when i compiled python, i did not set any option to compile for a 64bit architecture (hoping it would "just work"). does python adjust that by itself, or do i now have a 32bit python on a 64bit machine? not a silly question i believe, as many times you are offered separate packages depending on the processor.
edit: i strongly suspect the answer will be closely related to sys.maxint which has been sadly removed from python 3. my suspicion is that i should def xhash( x ): return hash( x ) - ( -maxint - 1 ) if maxint was available. i know this value 'lost its value' due to the unification of ints and longs, but here might be one area where it could still prove useful. anybody have an idea how to implement an analogue?
| [
"hash() can return any integer, and as you have seen, the size of the integer can vary with the architecture. This is one of the reasons dictionary ordering is arbitrary: the same set of operations on two different platforms can give different results because the hashes used along the way can differ.\nIf all you a... | [
5,
4,
1,
1
] | [] | [] | [
"32bit_64bit",
"c",
"hash",
"python",
"python_3.x"
] | stackoverflow_0004010859_32bit_64bit_c_hash_python_python_3.x.txt |
Q:
Stuck in a while loop, can you please help?
I am currently writing a program that reads records from a txt file and prints the data on the screen as such:
GRADE REPORT
NAME COURSE GRADE
-----------------------------------------------------------
JOE FRITZ AMERICAN GOVERNMENT B
CALCULUS I A
COMPUTER PROGRAMMING B
ENGLISH COMPOSITION A
Total courses taken = 4
LANE SMITH FUND. OF DATA PROCESSING B
INTERMEDIATE SWIMMING A
INTRO. TO BUSINESS C
Total courses taken = 3
JOHN SPITZ CHOIR C
COLLEGE STATISTICS B
ENGLISH LITERATURE D
INTRO. TO BUSINESS B
Total courses taken = 4
Total courses taken by all students = 11
Run complete. Press the Enter key to exit.
This is the text file it reads from:
JOE FRITZ AMERICAN GOVERNMENT B
JOE FRITZ CALCULUS I A
JOE FRITZ COMPUTER PROGRAMMING B
JOE FRITZ ENGLISH COMPOSITION A
LANE SMITH FUND. OF DATA PROCESSING B
LANE SMITH INTERMEDIATE SWIMMING A
LANE SMITH INTRO. TO BUSINESS C
JOHN SPITZ CHOIR C
JOHN SPITZ COLLEGE STATISTICS B
JOHN SPITZ ENGLISH LITERATURE D
JOHN SPITZ INTRO. TO BUSINESS B
Here is my code:
# VARIABLE DEFINITIONS
name = ""
course = ""
grade = ""
recordCount = 0
eof = False
gradeFile = ""
#-----------------------------------------------------------------------
# CONSTANT DEFINITIONS
#-----------------------------------------------------------------------
# FUNCTION DEFINITIONS
def startUp():
global gradeFile
gradeFile = open("grades.txt","r")
print ("grade report\n").center(60).upper()
print "name".upper(),"course".rjust(22).upper(),"grade".rjust(32).upper()
print "-" * 60
readRecord()
def readRecord():
global name, course, grade
studentRecord = gradeFile.readline()
if studentRecord == "":
eof = True
else:
name = studentRecord[0:20]
course = studentRecord[20:50]
grade = studentRecord[50:51]
eof = False
def processRecords():
numOfRecs = 0
while not eof:
numOfRecs += 1
printLine()
readRecord()
return numOfRecs
def printLine():
print name, course.rjust(3), grade.rjust(3)
def closeUp():
gradeFile.close()
print "\nTotal courses taken by all students = ",recordCount
#-----------------------------------------------------------------------
# PROGRAM'S MAIN LOGIC
startUp()
recordCount = processRecords()
closeUp()
raw_input("\nRun complete. Press the Enter key to exit.")
The program just prints the very last line of the txt file and gets stuck in a loop. Any assistance would be greatly appreciated. Thank you for your time.
A:
Why don't you do it all in a single function -
def processRecords():
print ("grade report\n").center(60).upper()
print "name".upper(),"course".rjust(22).upper(),"grade".rjust(32).upper()
print "-" * 60
rec_count = 0
for line in open("grades.txt","r"):
name = line[0:20]
course = line[20:50]
grade = line[50:51]
print name, course.rjust(3), grade.rjust(3)
rec_count += 1
return rec_count
All those functions compressed into one single function. You seem to be writing Python as if it were C. This is Python!
Also try to avoid using globals unless you must. Just a principle I follow. Clearly in this situation you don't need to.
A:
You have to declare eof as global in readRecord():
def readRecord():
global eof, name, course, grade
Otherwise, the changes you make to eof when studentRecord is empty won't survive outside readRecord()'s scope .
A:
In this design, "eof" needs to be addded to the globals list in readRecord()
Otherwise assigning it creates a new local variable, which processRecords() never sees.
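The scoping rule behind this fix is easy to demonstrate in isolation: assigning to a name inside a function creates a new local unless the name is declared global:

```python
eof = False

def read_without_global():
    eof = True              # binds a *local* eof; the module-level one is untouched

def read_with_global():
    global eof
    eof = True              # rebinds the module-level eof

read_without_global()
assert eof is False          # the while loop would never see the change

read_with_global()
assert eof is True
```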
A:
You need to add eof to the global variables in readRecord():
...
def readRecord():
global name, course, grade, eof
...
But your solution is a bit un-pythonic. How about something shorter and more flexible:
import re
print ("grade report\n").center(60).upper()
print "name".upper(),"course".rjust(22).upper(),"grade".rjust(32).upper()
print "-" * 60
for line in open("grades.txt"):
    name, course, grade = re.split("  +", line.strip())
    print "%-21s%-34s%-21s" % (name, course, grade)
raw_input("\nRun complete. Press the Enter key to exit.")
The regular expression "  +" splits on runs of two or more spaces, so a multi-word name like "JOE FRITZ" stays in a single field (splitting on every space would break it apart). If your delimiter is something else, replace the regular expression with your delimiter.
And here is a version that uses python dicts to track the courses and grades by student (i.e. your target output):
import re
print ("grade report\n").center(60).upper()
print "name".upper(),"course".rjust(22).upper(),"grade".rjust(32).upper()
print "-" * 60
grades = {}
total_courses = 0
for line in open("grades.txt"):
    name, course, grade = re.split("  +", line.strip())
if not grades.get(name): grades[name] = []
grades[name].append([course, grade])
for name, data in grades.items():
for course, grade in data:
print "%-21s%-34s%s" % (name, course, grade)
name = ""
print "%-21sTotal courses taken = %d\n" % (" ", len(data))
total_courses += len(data)
print "Total courses taken by all students = %d" % total_courses
raw_input("\nRun complete. Press the Enter key to exit.")
BTW, it sounds like you need to learn more about python (and the python way of programming). I recommend Dive Into Python. IMO it's the fastest (and most entertaining) way to come up to speed in python if you have some programming experience.
A:
You're missing a global here: while your while loop checks the global variable eof, your readRecord function in fact sets a local variable eof.
A:
You have to add eof to the list of globals in readRecord.
However, you said any help, so here's another version:
import itertools as it, operator as op
import collections
Record= collections.namedtuple("Record", "name course grade")
grouper= op.itemgetter(0) # or op.attrgetter('name')
def file_reader(fobj_in):
for line in fobj_in:
name= line[:20].rstrip()
course= line[20:50].rstrip()
grade= line[50:].rstrip()
yield Record(name, course, grade)
def process(fn_in, fobj_out):
for name, records in it.groupby(file_reader(fobj_in), grouper):
out_name= name
for index, record in enumerate(records, 1):
fobj_out.write(
"%-20.19s%-36.35s%s\n" % (out_name, record.course, record.grade)
)
out_name= ''
fobj_out.write("%20sTotal courses taken = %d\n" % ('', index))
if __name__ == "__main__":
import sys
with open('so4009899.txt', 'r') as fobj_in:
process(fobj_in, sys.stdout)
A:
If you have to use many functions for the purposes of 'structure', consider passing parameters to the functions instead of using globals. Here is a small change that illustrates my meaning.
def startUp():
print ("grade report\n").center(60).upper()
print "name".upper(),"course".rjust(22).upper(),"grade".rjust(32).upper()
print "-" * 60
processRecords()
def processRecords():
numOfRecs = 0
for line in open("grades.txt","r"):
numOfRecs += 1
printLine(line)
return numOfRecs
def printLine(studentRecord):
name = studentRecord[0:20]
course = studentRecord[20:50]
grade = studentRecord[50:51]
print name, course.rjust(3), grade.rjust(3)
def closeUp(recordCount):
print "\nTotal courses taken by all students = ",recordCount
startUp()
| Stuck in a while loop, can you please help? | I am currently writing a program that reads records from a txt file and prints the data on the screen as such:
GRADE REPORT
NAME COURSE GRADE
-----------------------------------------------------------
JOE FRITZ AMERICAN GOVERNMENT B
CALCULUS I A
COMPUTER PROGRAMMING B
ENGLISH COMPOSITION A
Total courses taken = 4
LANE SMITH FUND. OF DATA PROCESSING B
INTERMEDIATE SWIMMING A
INTRO. TO BUSINESS C
Total courses taken = 3
JOHN SPITZ CHOIR C
COLLEGE STATISTICS B
ENGLISH LITERATURE D
INTRO. TO BUSINESS B
Total courses taken = 4
Total courses taken by all students = 11
Run complete. Press the Enter key to exit.
This is the text file it reads from:
JOE FRITZ AMERICAN GOVERNMENT B
JOE FRITZ CALCULUS I A
JOE FRITZ COMPUTER PROGRAMMING B
JOE FRITZ ENGLISH COMPOSITION A
LANE SMITH FUND. OF DATA PROCESSING B
LANE SMITH INTERMEDIATE SWIMMING A
LANE SMITH INTRO. TO BUSINESS C
JOHN SPITZ CHOIR C
JOHN SPITZ COLLEGE STATISTICS B
JOHN SPITZ ENGLISH LITERATURE D
JOHN SPITZ INTRO. TO BUSINESS B
Here is my code:
# VARIABLE DEFINITIONS
name = ""
course = ""
grade = ""
recordCount = 0
eof = False
gradeFile = ""
#-----------------------------------------------------------------------
# CONSTANT DEFINITIONS
#-----------------------------------------------------------------------
# FUNCTION DEFINITIONS
def startUp():
global gradeFile
gradeFile = open("grades.txt","r")
print ("grade report\n").center(60).upper()
print "name".upper(),"course".rjust(22).upper(),"grade".rjust(32).upper()
print "-" * 60
readRecord()
def readRecord():
global name, course, grade
studentRecord = gradeFile.readline()
if studentRecord == "":
eof = True
else:
name = studentRecord[0:20]
course = studentRecord[20:50]
grade = studentRecord[50:51]
eof = False
def processRecords():
numOfRecs = 0
while not eof:
numOfRecs += 1
printLine()
readRecord()
return numOfRecs
def printLine():
print name, course.rjust(3), grade.rjust(3)
def closeUp():
gradeFile.close()
print "\nTotal courses taken by all students = ",recordCount
#-----------------------------------------------------------------------
# PROGRAM'S MAIN LOGIC
startUp()
recordCount = processRecords()
closeUp()
raw_input("\nRun complete. Press the Enter key to exit.")
The program just prints the very last line of the txt file and gets stuck in a loop. Any assistance would be greatly appreciated. Thank you for your time.
| [
"Why don't you do it all in a single function -\ndef processRecords():\n print (\"grade report\\n\").center(60).upper()\n print \"name\".upper(),\"course\".rjust(22).upper(),\"grade\".rjust(32).upper()\n print \"-\" * 60\n\n rec_count = 0\n for line in open(\"grades.txt\",\"r\"):\n name = li... | [
5,
4,
3,
3,
2,
2,
1
] | [] | [] | [
"python",
"variable_assignment"
] | stackoverflow_0004009899_python_variable_assignment.txt |
Q:
PyGTK - Adding Rows to gtk.TreeStore
After following the official tutorial here: tutorial
I'm still having issues adding rows/creating a TreeIter object. Here's what my code looks like:
builder = gtk.Builder()
self.treeview = builder.get_object("treeview")
self.treestore = gtk.TreeStore(str)
self.treeview.set_model(self.treestore)
self.id = gtk.TreeViewColumn('ID')
self.type = gtk.TreeViewColumn("Type")
self.readName = gtk.TreeViewColumn("Filename")
self.set = gtk.TreeViewColumn("Set")
self.treeview.append_column(self.id)
self.treeview.append_column(self.readName)
self.treeview.append_column(self.type)
self.treeview.append_column(self.set)
self.cell = gtk.CellRendererText()
self.cell1 = gtk.CellRendererText()
self.cell2 = gtk.CellRendererText()
self.cell3 = gtk.CellRendererText()
self.id.pack_start(self.cell, True)
self.readName.pack_start(self.cell1, True)
self.type.pack_start(self.cell2, True)
self.set.pack_start(self.cell3, True)
self.id.add_attribute(self.cell, 'text', 0)
self.readName.add_attribute(self.cell1, 'text', 1)
self.type.add_attribute(self.cell2, 'text', 2)
self.set.add_attribute(self.cell3, 'text', 3)
self.treeview.set_reorderable(True)
self.readListVP.add(self.treeview)
iter = self.treestore.get_iter(self.treestore.get_path(iter)) #here's where my problem lies
self.treestore.set_value(None, 0, self.fileCountStr)
self.treestore.set_value(None, 1, "paired-end")
self.treestore.set_value(None, 2, self.file)
self.treestore.set_value(None, 3, self.readSetStr)
A:
I spot a number of general problems with the code as well:
You're creating too many CellRenderer's! Use just one for the whole table.
Don't use the Builder()! It's just stupidly overcomplicating things.
You're not adding columns the most efficient way.
Look into the question I've already asked.
| PyGTK - Adding Rows to gtk.TreeStore | After following the official tutorial here: tutorial
I'm still having issues adding rows/creating a TreeIter object. Here's what my code looks like:
builder = gtk.Builder()
self.treeview = builder.get_object("treeview")
self.treestore = gtk.TreeStore(str)
self.treeview.set_model(self.treestore)
self.id = gtk.TreeViewColumn('ID')
self.type = gtk.TreeViewColumn("Type")
self.readName = gtk.TreeViewColumn("Filename")
self.set = gtk.TreeViewColumn("Set")
self.treeview.append_column(self.id)
self.treeview.append_column(self.readName)
self.treeview.append_column(self.type)
self.treeview.append_column(self.set)
self.cell = gtk.CellRendererText()
self.cell1 = gtk.CellRendererText()
self.cell2 = gtk.CellRendererText()
self.cell3 = gtk.CellRendererText()
self.id.pack_start(self.cell, True)
self.readName.pack_start(self.cell1, True)
self.type.pack_start(self.cell2, True)
self.set.pack_start(self.cell3, True)
self.id.add_attribute(self.cell, 'text', 0)
self.readName.add_attribute(self.cell1, 'text', 1)
self.type.add_attribute(self.cell2, 'text', 2)
self.set.add_attribute(self.cell3, 'text', 3)
self.treeview.set_reorderable(True)
self.readListVP.add(self.treeview)
iter = self.treestore.get_iter(self.treestore.get_path(iter)) #here's where my problem lies
self.treestore.set_value(None, 0, self.fileCountStr)
self.treestore.set_value(None, 1, "paired-end")
self.treestore.set_value(None, 2, self.file)
self.treestore.set_value(None, 3, self.readSetStr)
| [
"I spot a number of general problems with the code as well:\n\nYou're creating too many CellRenderer's! Use just one for the whole table.\nDon't use the Builder()! It's just stupidly overcomplicating things.\nYou're not adding columns the most efficent way.\n\nLook into the question I've already asked.\n"
] | [
1
] | [] | [] | [
"gtk",
"gtktreeview",
"pygtk",
"python"
] | stackoverflow_0003917228_gtk_gtktreeview_pygtk_python.txt |
Q:
How to get the EXIF or meta-data from images?
I need a really good library or command-line tool that can be used to extract Exif data from images. No specific programming language required, as long as the library or tool works really well.
I found various libraries for PHP and Python, but most of them haven't been updated or maintained and don't work for various manufacturers. I wasted a lot of time just trying to find something that works.
Does anyone know what Flickr uses to get Exif data? Maybe that might answer the question.
Thanks for any help
A:
Right this is what you want.
http://www.sno.phy.queensu.ca/~phil/exiftool/
Gets what flickr does!
A:
The Gimp's Exif Viewer plugin uses libexif, which is written in C and seems to have been updated recently. Also, here is the EXIF standard itself, if you feel like rolling your own.
A:
jhead is a great command-line EXIF utility; it can both read and write EXIF data. It was even updated recently (November 2009)!
You can download binaries for many different platforms (including Linux, Mac OS X and Windows).
A:
Take a look into exif_read_data() (PHP).
A:
Did you not find that if PHP is compiled with --enable-exif it will have an API for handling EXIF data? If you did find it, was it one that doesn't do what you require?
A:
Have you checked this article and the modules pointed to from there? Given the standard nature of Exif, in theory, once you fully meet the standard there should be no need for further development, nor for manufacturer-specific tweaks, right?-)
A:
For perl, there is Image::ExifTool and Image::EXIF.
The first is used by the "exiftool" command, which is capable of reading lots of meta information (in formats other than Exif as well) from images.
A:
I wrote pexif, which works pretty well for me. It is pure python, so should be easy to extend. It also lets you edit exif fields, (if you want, I wrote it to add GPS tags to my photos).
The major down side is that it has limited support for manufacturer tags, but I'm happy to add that support if I'm given example images that I can use for testing.
I've also written a JavaScript version, however this is purely read-only.
A:
You could take a look at exiv2. I used the command line version to bulk rename all my photos to include a timestamp in the filename.
Should be fairly straightforward to use from any scripting environment.
| How to get the EXIF or meta-data from images? | I need a really good library or command-line tool that can be used to extract Exif data from images. No specific programming language required, as long as the library or tool works really well.
I found various libraries for PHP and Python, but most of them haven't been updated or maintained and don't work for various manufacturers. I wasted a lot of time just trying to find something that works.
Does anyone know what Flickr uses to get Exif data? Maybe that might answer the question.
Thanks for any help
| [
"Right this is what you want.\nhttp://www.sno.phy.queensu.ca/~phil/exiftool/\nGets what flickr does!\n",
"The Gimp's Exif Viewer plugin uses libexif, which is written in C and seems to have been updated recently. Also, here is the EXIF standard itself, if you feel like rolling your own. \n",
"jhead is a great c... | [
5,
2,
2,
1,
0,
0,
0,
0,
0
] | [] | [] | [
"exif",
"image",
"java",
"php",
"python"
] | stackoverflow_0002079013_exif_image_java_php_python.txt |
Q:
how to wrap file object read and write operation (which are readonly)?
i am trying to wrap the read and write operation of an instance of a file object (specifically the readline() and write() methods).
normally, i would simply replace those functions by a wrapper, a bit like this:
def log(stream):
def logwrite(write):
def inner(data):
print 'LOG: > '+data.replace('\r','<cr>').replace('\n','<lf>')
return write(data)
return inner
stream.write = logwrite(stream.write)
but the attributes of a file object are read-only ! how could i wrap them properly ?
(note: i am too lazy to wrap the whole fileobject... really, i don't want to miss a feature that i did not wrap properly, or a feature which may be added in a future version of python)
more context :
i am trying to automate the communication with a modem, whose AT command set is made available on the network through a telnet session. once logged in, i shall "grab" the module i want to communicate with. after some time without activity, a timeout occurs which releases the module (so that it is available to other users on the network... not that i care, i am the sole user of this equipment). the automatic release writes a specific line on the session.
i want to wrap the readline() on a file built from a socket (cf. socket.makefile()) so that when the timeout occurs, a specific exception is thrown, so that i can detect the timeout anywhere in the script and react appropriately without complicating the AT command parser...
(of course, i want to do that because the timeout is quite spurious, otherwise i would simply feed the modem with commands without any side effect only to keep the module alive)
(feel free to propose any other method or strategy to achieve this effect)
A:
use __getattr__ to wrap your file object. provide modified methods for the ones that you are concerned with.
class Wrapped(object):
def __init__(self, file_):
self._file = file_
def write(self, data):
print 'LOG: > '+data.replace('\r','<cr>').replace('\n','<lf>')
return self._file.write(data)
def __getattr__(self, attr):
return getattr(self._file, attr)
This way, requests for attributes which you don't explicitly provide will be routed to the same attribute on the wrapped object, and you can just implement the ones that you want.
logged = Wrapped(open(filename))
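To see the delegation in action without touching the filesystem, the same class (in Python 3 spelling) can be exercised against an in-memory io.StringIO:

```python
import io

class Wrapped(object):
    def __init__(self, file_):
        self._file = file_

    def write(self, data):
        print('LOG: > ' + data.replace('\r', '<cr>').replace('\n', '<lf>'))
        return self._file.write(data)

    def __getattr__(self, attr):
        # read, readline, close, ... all fall through to the wrapped object.
        return getattr(self._file, attr)

buf = io.StringIO()
logged = Wrapped(buf)
logged.write('AT+CSQ\r\n')                 # goes through the logging write()
assert buf.getvalue() == 'AT+CSQ\r\n'
assert logged.getvalue() == 'AT+CSQ\r\n'   # getvalue() delegated via __getattr__
```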
| how to wrap file object read and write operation (which are readonly)? | i am trying to wrap the read and write operation of an instance of a file object (specifically the readline() and write() methods).
normally, i would simply replace those functions by a wrapper, a bit like this:
def log(stream):
def logwrite(write):
def inner(data):
print 'LOG: > '+data.replace('\r','<cr>').replace('\n','<lf>')
return write(data)
return inner
stream.write = logwrite(stream.write)
but the attributes of a file object are read-only ! how could i wrap them properly ?
(note: i am too lazy to wrap the whole fileobject... really, i don't want to miss a feature that i did not wrap properly, or a feature which may be added in a future version of python)
more context :
i am trying to automate the communication with a modem, whose AT command set is made available on the network through a telnet session. once logged in, i shall "grab" the module i want to communicate with. after some time without activity, a timeout occurs which releases the module (so that it is available to other users on the network... not that i care, i am the sole user of this equipment). the automatic release writes a specific line on the session.
i want to wrap the readline() on a file built from a socket (cf. socket.makefile()) so that when the timeout occurs, a specific exception is thrown, so that i can detect the timeout anywhere in the script and react appropriately without complicating the AT command parser...
(of course, i want to do that because the timeout is quite spurious, otherwise i would simply feed the modem with commands without any side effect only to keep the module alive)
(feel free to propose any other method or strategy to achieve this effect)
| [
"use __getattr__ to wrap your file object. provide modified methods for the ones that you are concerned with.\nclass Wrapped(object):\n def __init__(self, file_):\n self._file = file_\n\n def write(self, data):\n print 'LOG: > '+data.replace('\\r','<cr>').replace('\\n','<lf>')\n return se... | [
3
] | [] | [] | [
"python"
] | stackoverflow_0004013843_python.txt |
Q:
Auto truncation of a Tkinter label
I have a label sitting on a frame which is updated periodically to show the status of the application. Occasionally, the name of the item being processed will not fit into the window, and with the way I currently have the label configured the window expands to accommodate the label.
Ideally, I'd like a way to smartly truncate the text on the label (and then expand it if someone expands the window). Is there an easy way to accomplish this?
Practically speaking, how can I just stop the window from expanding based on changes to the text in the label?
Edit:
This is an approximation of the code I'm working on that is not exhibiting the desired behavior (there is a link at the bottom to the actual code file):
r = tk.Tk()
statusFrame = tk.Frame(r, relief=tk.SUNKEN, borderwidth=2)
statusFrame.pack(anchor=tk.SW, fill=tk.X, side=tk.BOTTOM)
statusVar = tk.StringVar()
statusVar.set("String")
tk.Label(statusFrame, textvariable=statusVar).pack(side=tk.LEFT)
statusVar.set("this is a long text, window size should remain the same")
Actual code available here.
A:
The answer depends very much on the way you currently have configured the widget.
For example, I can get the desired functionality as such:
>>> import Tkinter as tk
>>> r=tk.Tk()
>>> r.title('hello')
''
>>> l= tk.Label(r, name='lbl', width=20, text='reduce the window width')
>>> l.pack(fill=tk.BOTH) # or tk.X, depends; check interactive resizing now
>>> l['text']= "This is a long text, window size should remain the same"
Tell us what you do in your code for a more precise (appropriate for your code) answer.
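For the "smartly truncate" half of the question, the string handling can live outside Tkinter entirely; a minimal helper (the width and the three-dot marker are arbitrary choices) that you could combine with a fixed-width Label as in the answer above:

```python
def truncate(text, width):
    # Fit `text` into `width` characters, marking any cut with "...".
    if len(text) <= width:
        return text
    return text[:max(width - 3, 0)] + "..."

print(truncate("String", 20))                 # -> String
long_status = "this is a long text, window size should remain the same"
print(truncate(long_status, 20))              # -> this is a long te...
```

In the application you would call statusVar.set(truncate(message, 20)) and give the Label a fixed width so the window geometry never changes; binding a <Configure> event could let you recompute the width when the user resizes the window.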
| Auto truncation of a Tkinter label | I have a label sitting on a frame which is updated periodically to show the status of the application. Periodically, the name of the item being processed will not fit into the window, and with the way I currently have the label configured, the window expands to accommodate the label.
Ideally, I'd like a way to smartly truncate the text on the label (and then expand it if someone expands the window). Is there an easy way to accomplish this?
Practically speaking, how can I just stop the window from expanding based on changes to the text in the label?
Edit:
This is an approximation of the code I'm working on that is not exhibiting the desired behavior (there is a link at the bottom to the actual code file):
r = tk.Tk()
statusFrame = tk.Frame(r, relief=tk.SUNKEN, borderwidth=2)
statusFrame.pack(anchor=tk.SW, fill=tk.X, side=tk.BOTTOM)
statusVar = tk.StringVar()
statusVar.set("String")
tk.Label(statusFrame, textvariable=statusVar).pack(side=tk.LEFT)
statusVar.set("this is a long text, window size should remain the same")
Actual code available here.
| [
"The answer depends very much on the way you currently have configured the widget.\nFor example, I can get the desired functionality as such:\n>>> import Tkinter as tk\n>>> r=tk.Tk()\n>>> r.title('hello')\n''\n>>> l= tk.Label(r, name='lbl', width=20, text='reduce the window width')\n>>> l.pack(fill=tk.BOTH) # or tk... | [
0
] | [] | [] | [
"python",
"tkinter"
] | stackoverflow_0004001285_python_tkinter.txt |
Q:
How to enable SSL with an IIS 6 + FastCGI + Django setup?
I have successfully set up FastCGI/Django on an IIS 6 server.
What I don't know is how to enable SSL connections.
Any tips or ideas to get me started? I'm not an IIS expert, so this is quite confusing for me. :)
A:
I'm not an IIS expert either, although I do have my own web server.
Have you installed your SSL certificate? If so try reading,
https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/5100-10878_11-5055536.html
which should assist you in completing the installation.
| How to enable SSL with an IIS 6 + FastCGI + Django setup? | I have successfully set up FastCGI/Django on an IIS 6 server.
What I don't know is how to enable SSL connections.
Any tips or ideas to get me started? I'm not an IIS expert, so this is quite confusing for me. :)
| [
"I'm not an IIS expert either, although I do have my own web server.\nHave you installed your SSL certificate? If so try reading,\nhttps://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/5100-10878_11-5055536.html\nwhich should assist you in completing the installation.\n"
] | [
2
] | [] | [] | [
"django",
"fastcgi",
"iis",
"python",
"ssl"
] | stackoverflow_0004013536_django_fastcgi_iis_python_ssl.txt |
Q:
Extending forms.SelectMultiple Without Losing Values
I have a model form that I am writing a custom widget for in order to replace the many-to-many forms.SelectMultiple fields with jQuery FCBKcomplete widgets. While the replacement of the multiselect element works fine, it is no longer pulling the options for the multiselect.
Here is my widget:
class FCBKcompleteWidget(forms.SelectMultiple):
def _media(self):
return forms.Media(js=(reverse('appstatic',
args=['js/jquery.fcbkcomplete.min.js']),
reverse('appstatic',
args=['js/init-fcbkcomplete.js'])),
css={'all': (reverse('appstatic',
args=['css/jquery.fcbkcomplete'
'.css']),)})
media = property(_media)
Here is my form:
class BlogForm(forms.ModelForm):
class Meta(object):
model = models.Blog
exclude = ('slug',)
def __init__(self, *args, **kwargs):
super(BlogForm, self).__init__(*args, **kwargs)
self.fields['description'].widget = TinyMCEWidget()
fcbkcomplete_fields = ['categories', 'admins', 'editors']
for field in fcbkcomplete_fields:
self.fields[field].widget = FCBKcompleteWidget()
Here are my models:
class Category(models.Model):
"""A blog category"""
title = models.CharField(max_length=128)
slug = models.SlugField()
class Meta(object):
verbose_name_plural = u'Categories'
def __unicode__(self):
return self.title
@models.permalink
def get_absolute_url(self):
return ('category', (), {'slug': self.slug})
class Blog(models.Model):
"""A blog"""
title = models.CharField(max_length=128)
slug = models.SlugField(unique=True)
description = models.TextField()
categories = models.ManyToManyField(Category, related_name='blogs')
shared = models.BooleanField()
admins = models.ManyToManyField(User, related_name='blog_admins')
editors = models.ManyToManyField(User, related_name='blog_editors')
def __unicode__(self):
return self.title
@models.permalink
def get_absolute_url(self):
return ('blog', (), {'slug': self.slug})
Here is the resulting HTML:
<div class="field">
<label for="name">Categories</label>
<select multiple="multiple" name="categories" id="id_categories">
</select>
<div class="help-text">trimmed for readability</div>
</div>
<div class="field">
<label for="name">Admins</label>
<select multiple="multiple" name="admins" id="id_admins">
</select>
<div class="help-text">trimmed for readability</div>
</div>
<div class="field">
<label for="name">Editors</label>
<select multiple="multiple" name="editors" id="id_editors">
</select>
<div class="help-text">trimmed for readability</div>
</div>
As you can see, none of the options are making it into the multiselect element. Here is the resulting HTML when I don't replace the widget with my custom one:
<div class="field">
<label for="name">Categories</label>
<select multiple="multiple" name="categories" id="id_categories">
<option value="1" selected="selected">Good Stuff</option>
</select>
<div class="help-text">trimmed</div>
</div>
<div class="field">
<label for="name">Admins</label>
<select multiple="multiple" name="admins" id="id_admins">
<option value="2" selected="selected">username</option>
<option value="3">some username</option>
<option value="4">another username</option>
</select>
<div class="help-text">trimmed</div>
</div>
<div class="field">
<label for="name">Editors</label>
<select multiple="multiple" name="editors" id="id_editors">
<option value="2" selected="selected">username</option>
<option value="3">some username</option>
<option value="4">another username</option>
</select>
<div class="help-text">trimmed</div>
</div>
Does anyone have any suggestions as to why the options are not making it through the widget replacement process? Any help would be greatly appreciated.
A:
A year has passed, but this answer may still be valuable with the current Django version.
The reason for this behavior seems to be the missing choices property on the FCBK fields;
just push the choices into the form:
class EmailSubscriptionFilterForm(forms.ModelForm):
class Meta:
model = EmailSubscription
exclude = ('dsts',)
def loc_name(self, id):
return Location.objects.get(id = id).name
def __init__(self, *args, **kwargs):
super(EmailSubscriptionFilterForm, self).__init__(*args, **kwargs)
fcbkcomplete_fields = ['orgs']
for field in fcbkcomplete_fields:
self.fields[field].widget = MultiOriginSelect()
if args:
self.fields['orgs'].choices = ([(int(o), self.loc_name(int(o))) for o in args[0].getlist('orgs')] )
In __init__, this adds all options that came with the POST request to the select body.
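A framework-free sketch of the underlying failure mode: Django copies a field's choices onto its widget when the field is set up, so a widget swapped in afterwards starts with an empty choice list. The Widget and Field classes below are stand-ins, not Django's; in real Django code the equivalent fix is to hand the old choices to the replacement widget (e.g. passing choices=self.fields[field].choices when constructing it) or to re-assign field.choices after the swap.

```python
class Widget(object):
    def __init__(self, choices=()):
        self.choices = list(choices)

class Field(object):
    def __init__(self, choices):
        self.choices = list(choices)
        self.widget = Widget(choices)   # choices copied once, at setup time

field = Field([("1", "Good Stuff")])

field.widget = Widget()                 # naive swap: the copied choices are gone
print(field.widget.choices)             # -> []

field.widget = Widget(field.choices)    # fix: carry the choices across
print(field.widget.choices)             # -> [('1', 'Good Stuff')]
```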
| Extending forms.SelectMultiple Without Losing Values | I have a model form that I am writing a custom widget for in order to replace the many-to-many forms.SelectMultiple fields with jQuery FCBKcomplete widgets. While the replacement of the multiselect element works fine, it is no longer pulling the options for the multiselect.
Here is my widget:
class FCBKcompleteWidget(forms.SelectMultiple):
def _media(self):
return forms.Media(js=(reverse('appstatic',
args=['js/jquery.fcbkcomplete.min.js']),
reverse('appstatic',
args=['js/init-fcbkcomplete.js'])),
css={'all': (reverse('appstatic',
args=['css/jquery.fcbkcomplete'
'.css']),)})
media = property(_media)
Here is my form:
class BlogForm(forms.ModelForm):
class Meta(object):
model = models.Blog
exclude = ('slug',)
def __init__(self, *args, **kwargs):
super(BlogForm, self).__init__(*args, **kwargs)
self.fields['description'].widget = TinyMCEWidget()
fcbkcomplete_fields = ['categories', 'admins', 'editors']
for field in fcbkcomplete_fields:
self.fields[field].widget = FCBKcompleteWidget()
Here are my models:
class Category(models.Model):
"""A blog category"""
title = models.CharField(max_length=128)
slug = models.SlugField()
class Meta(object):
verbose_name_plural = u'Categories'
def __unicode__(self):
return self.title
@models.permalink
def get_absolute_url(self):
return ('category', (), {'slug': self.slug})
class Blog(models.Model):
"""A blog"""
title = models.CharField(max_length=128)
slug = models.SlugField(unique=True)
description = models.TextField()
categories = models.ManyToManyField(Category, related_name='blogs')
shared = models.BooleanField()
admins = models.ManyToManyField(User, related_name='blog_admins')
editors = models.ManyToManyField(User, related_name='blog_editors')
def __unicode__(self):
return self.title
@models.permalink
def get_absolute_url(self):
return ('blog', (), {'slug': self.slug})
Here is the resulting HTML:
<div class="field">
<label for="name">Categories</label>
<select multiple="multiple" name="categories" id="id_categories">
</select>
<div class="help-text">trimmed for readability</div>
</div>
<div class="field">
<label for="name">Admins</label>
<select multiple="multiple" name="admins" id="id_admins">
</select>
<div class="help-text">trimmed for readability</div>
</div>
<div class="field">
<label for="name">Editors</label>
<select multiple="multiple" name="editors" id="id_editors">
</select>
<div class="help-text">trimmed for readability</div>
</div>
As you can see, none of the options are making it into the multiselect element. Here is the resulting HTML when I don't replace the widget with my custom one:
<div class="field">
<label for="name">Categories</label>
<select multiple="multiple" name="categories" id="id_categories">
<option value="1" selected="selected">Good Stuff</option>
</select>
<div class="help-text">trimmed</div>
</div>
<div class="field">
<label for="name">Admins</label>
<select multiple="multiple" name="admins" id="id_admins">
<option value="2" selected="selected">username</option>
<option value="3">some username</option>
<option value="4">another username</option>
</select>
<div class="help-text">trimmed</div>
</div>
<div class="field">
<label for="name">Editors</label>
<select multiple="multiple" name="editors" id="id_editors">
<option value="2" selected="selected">username</option>
<option value="3">some username</option>
<option value="4">another username</option>
</select>
<div class="help-text">trimmed</div>
</div>
Does anyone have any suggestions as to why the options are not making it through the widget replacement process? Any help would be greatly appreciated.
| [
"1 year passed, but answer may be valuable even with current django version. \nReason for such behavior seems to missing CHOICES property for fcbk fields \njust push choices to the Form \nclass EmailSubscriptionFilterForm(forms.ModelForm):\n\n class Meta:\n model = EmailSubscription\n exclude = (... | [
0
] | [] | [] | [
"django",
"django_forms",
"python"
] | stackoverflow_0001668881_django_django_forms_python.txt |
Q:
Splitting results from chardet output to collect encoding type
I am testing chardet in one of my scripts. I wanted to identify the encoding type of a result variable and chardet seems to do fine here.
So this is what I am doing:
myvar1 <-- gets its value from other functions
myvar2 = chardet.detect(myvar1) <-- to detect
the encoding type of myvar1
Now when I do a print myvar2, I receive the output:
{'confidence': 1.0, 'encoding':
'ascii'}
Question 1: Can someone give a pointer on how to collect only the encoding value from this, i.e. 'ascii'?
Edit:
The scenario is as follows:
I am using unicode(myvar1) to write all input as unicode. But as soon as myvar1 gets a value like 0xab, unicode(myvar1) fails with the error:
UnicodeDecodeError: 'ascii' codec
can't decode byte 0xab in position xxx: ordinal not in range(128)
Therefore, I am trying to:
first identify the encoding type of the input which comes in myvar1,
take the encoding type in myvar2,
decode the input (myvar1) with this encoding (myvar2) using decode() [?]
pass it on to unicode.
The input coming in is variable and not in my control.
I am sure there are other ways to do this, but I am new to this. And I am open to trying.
Any pointer please.
Many Thanks.
A:
print myvar2['encoding']
Now for the added related info: chardet is an attempt of detecting the encoding. It isn't 100% reliable and fails sometimes. However it's the best you've got since reliable encoding detection is impossible. Just provide a way for your users to specify encoding if chardet fails for them.
You can't read a text whose specific encoding you don't know. It is impossible, because the same byte sequence can mean different chars in different encodings. In other words, encodings are ambiguous. chardet is just a guess. It can and will fail in the wild. The best and only reliable way is to ask whoever generated the string which encoding was used in the first place.
EDIT:
for your scenario, the only way to stay sane is to ask whoever generated the string what's the encoding used. You said that
"The input coming in is variable and
not in my control."
If that's true, then you can't correctly read the input. You can't read a text input from a bunch of bytes without knowing beforehand which encoding it used. It's impossible. By definition.
Please ask whoever is generating the bytestrings to provide you the encoding used to generate the bytestrings, together with the bytestrings themselves, so you can make sense of them. Without the encoding, a bytestring is just a chunk of bytes and you can't know which chars are there. It's like having a bunch of data but not knowing how to interpret them.
Where do those bytes comes from? Why don't you have control over which encoding was used to generate the data? Does the data provider know that the data they're providing is useless since you can't correctly interpret it?
I will repeat once more to make it really clear: You can't correctly, reliably read a bunch of bytes as text without knowing the encoding used to generate the bytes. There's no way it will work reliably. You need some kind of agreement with the producer so you'll know the encoding.
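Putting the steps together defensively: pull the encoding out of the dict (myvar2['encoding'], as shown above) and keep a fallback for when detection fails. A chardet-free sketch of the decode step (the latin-1 fallback is an arbitrary choice that never raises, not necessarily the correct interpretation of the bytes):

```python
def to_text(raw_bytes, guessed=None, fallback="latin-1"):
    # Try the detected encoding first, then UTF-8, then a single-byte
    # fallback that accepts every byte (it may still mis-render text).
    for encoding in (guessed, "utf-8", fallback):
        if encoding:
            try:
                return raw_bytes.decode(encoding)
            except (UnicodeDecodeError, LookupError):
                pass
    return raw_bytes.decode(fallback, errors="replace")

print(to_text(b"plain ascii"))       # -> plain ascii
print(to_text(b"\xabquoted\xbb"))    # -> «quoted» (0xab/0xbb read as latin-1)
```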
A:
second problem: as the traceback says, aBuf is an int but it's expecting a string. You need to find out why.
uhhhh ... just worked it out; you are feeding it a single byte, expressed as an integer (0xab) instead of a string ('\xab'). In any case, chardet requires much more than 1 byte to be able to guess an encoding. Feeding any charset detector one byte is utterly pointless.
| Splitting results from chardet output to collect encoding type | I am testing chardet in one of my scripts. I wanted to identify the encoding type of a result variable and chardet seems to do fine here.
So this is what I am doing:
myvar1 <-- gets its value from other functions
myvar2 = chardet.detect(myvar1) <-- to detect
the encoding type of myvar1
Now when I do a print myvar2, I receive the output:
{'confidence': 1.0, 'encoding':
'ascii'}
Question 1: Can someone give a pointer on how to collect only the encoding value from this, i.e. 'ascii'?
Edit:
The scenario is as follows:
I am using unicode(myvar1) to write all input as unicode. But as soon as myvar1 gets a value like 0xab, unicode(myvar1) fails with the error:
UnicodeDecodeError: 'ascii' codec
can't decode byte 0xab in position xxx: ordinal not in range(128)
Therefore, I am trying to:
first identify the encoding type of the input which comes in myvar1,
take the encoding type in myvar2,
decode the input (myvar1) with this encoding (myvar2) using decode() [?]
pass it on to unicode.
The input coming in is variable and not in my control.
I am sure there are other ways to do this, but I am new to this. And I am open to trying.
Any pointer please.
Many Thanks.
| [
"print myvar2['encoding']\n\n\nNow for the added related info: chardet is an attempt of detecting the encoding. It isn't 100% reliable and fails sometimes. However it's the best you've got since reliable encoding detection is impossible. Just provide a way for your users to specify encoding if chardet fails for the... | [
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0004014706_python.txt |
Q:
Viewing the code of a Python function
Let's say I'm working in the Python shell and I'm given a function f. How can I access the string containing its source code? (From the shell, not by manually opening the code file.)
I want this to work even for lambda functions defined inside other functions.
A:
inspect.getsource
It looks like getsource can't get a lambda's source code.
A:
Not necessarily what you're looking for, but in ipython you can do:
>>> function_name??
and you will get the code source of the function (only if it's in a file). So this won't work for lambda. But it's definitely useful!
A:
maybe this can help (can get also lambda but it's very simple),
import linecache
def get_source(f):
source = []
first_line_num = f.func_code.co_firstlineno
source_file = f.func_code.co_filename
source.append(linecache.getline(source_file, first_line_num))
source.append(linecache.getline(source_file, first_line_num + 1))
i = 2
# Here i just look until i don't find any indentation (simple processing).
while source[-1].startswith(' '):
source.append(linecache.getline(source_file, first_line_num + i))
i += 1
    return "".join(source[:-1])  # lines from linecache already end with "\n"
A:
A function object contains only compiled bytecode; the source text is not kept. The only way to retrieve the source code is to read the script file it came from.
There's nothing special about lambdas though: they still have f.func_code.co_firstlineno and co_filename attributes which you can use to locate the source file, as long as the lambda was defined in a file and not typed as interactive input.
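Both answers can be seen in action directly; a small sketch (using Python 3's __code__ spelling, which was func_code on the Python 2 of this question):

```python
import inspect

def f(x):
    return x * 2

code = f.__code__                       # f.func_code on Python 2
print(code.co_filename, code.co_firstlineno)

# inspect.getsource simply re-reads that file, which is why it fails
# for functions typed into the bare interactive shell:
try:
    print(inspect.getsource(f))
except (OSError, IOError, TypeError):
    print("source file not available (interactive input?)")
```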
| Viewing the code of a Python function | Let's say I'm working in the Python shell and I'm given a function f. How can I access the string containing its source code? (From the shell, not by manually opening the code file.)
I want this to work even for lambda functions defined inside other functions.
| [
"inspect.getsource\nIt looks getsource can't get lambda's source code.\n",
"Not necessarily what you're looking for, but in ipython you can do:\n>>> function_name??\n\nand you will get the code source of the function (only if it's in a file). So this won't work for lambda. But it's definitely useful!\n",
"maybe... | [
9,
8,
3,
0
] | [] | [] | [
"introspection",
"python"
] | stackoverflow_0004014722_introspection_python.txt |
Q:
Send commands between two computers over the internet
I wish to control my computer (and USB devices attached to the computer) at home with any computer that is connected to the internet. The computer at home must have a program installed that receives commands from any other computer connected to the internet. I thought it would be best to do this with a web interface, as it would not be necessary to install software on that computer. For obvious reasons it would require login details.
Extra details: The main part of the project is actually a device that I will develop that connects to the computer's usb port. Sorry if it was a bit vague in my original question. This device will perform simple functions such as turning lights on etc. At first I will just attempt to switch the lights remotely using the internet. Later on I will add commands that can control certain aspects of the computer such as the music player. I think doing a full remote desktop connection to control my device is therefore not quite necessary. Does anybody know of any open source projects that can perform these functions?
So basically the problem is sending encrypted commands from a web interface to my computer at home. What would be the best method to achieve this and what programming languages should I use? I know Java, Python and C quite well, but have very little experience with web applications, such as Javascript and PHP.
I have looked at web chat examples as it is sort of similar concept to what I wish to achieve, except the text can be replaced with commands. Is this a viable solution or are there better alternatives?
Thank you
A:
VNC
SSH
Remote Desktop (Windows)
A:
You can write a WEB APPLICATION. The encryption part is solved by simple HTTPS usage. On the server side (your home computer with USB devices attached to it) you should use Python (since you're quite experienced with it) and a Python Web Framework you want (I.E. Django).
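A bare-bones, stdlib-only sketch of the receiving side (Python 3 module names; no authentication or HTTPS here, and "lights_on" is a made-up command, so treat it strictly as a starting point, not a deployable server):

```python
import http.server
import threading
import urllib.request

class CommandHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        command = self.rfile.read(length).decode("utf-8")
        # A real server would authenticate first, then dispatch `command`
        # to the code driving the USB device; here we just echo it back.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(("ran: " + command).encode("utf-8"))

    def log_message(self, *args):     # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), CommandHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
reply = urllib.request.urlopen(url, data=b"lights_on").read().decode("utf-8")
print(reply)                          # -> ran: lights_on
server.shutdown()
```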
A:
While it is an interesting programming question, perhaps you should ask it on ServerFault instead? There you can probably get a lot of nice input on web-based administration / control tools.
A:
Unless this is a "for fun" project for you, there are about a jillion things out there that already do this. And if you want to control a computer from a web browser, be prepared to require installing some kind of custom plugin - since browsers can't touch arbitrary files on the local filesystem, execute local applications, or do other things that are flagrant security risks.
I've been using VNC for over a decade - free and easy.
http://en.wikipedia.org/wiki/Virtual_Network_Computing
A:
Well, I think that Java can work well; in fact you have to deal with system calls to manage USB devices and things like that (and as far as I know, PHP is not the best language for this). It also shouldn't be too hard to create a basic server/client program; just use a good encryption mechanism so commands aren't exposed on the web.
A:
If you are looking for a solution you could use from any computer anywhere in the world without the need to install any software on the client PC, try logmein.com (http://secure.logmein.com).
It is free, reliable, works in any modern browser, and you don't have to remember IPs and hope they won't change, ...
Or, if this is a "for fun" project, why not write a PHP script and open port 80 on your router so you can access the script from outside, possibly dynamically linking some domain to your IP (http://www.dyndns.com/). In the script you would just log in and then, for example, type the orders into a text field in a form. Let's just say you want to do some command-prompt stuff, so you would basically construct a *.bat file remotely. The script then stores this as fromtheinternets.bat in a folder on your desktop that is constantly monitored for changes, and when such a change is found, you just run the bat file.
Insecure? Yes (It could be made secureER)
Fun to write? Definitely
PS: I am new here, hope it's not "illegal" to post link to actual services, instead of wiki lists. This is by no means and advertisement, I am just a happy user. :)
| Send commands between two computers over the internet | I wish to control my computer (and USB devices attached to the computer) at home with any computer that is connected to the internet. The computer at home must have a program installed that receives commands from any other computer connected to the internet. I thought it would be best to do this with a web interface, as it would not be necessary to install software on that computer. For obvious reasons it would require login details.
Extra details: The main part of the project is actually a device that I will develop that connects to the computer's usb port. Sorry if it was a bit vague in my original question. This device will perform simple functions such as turning lights on etc. At first I will just attempt to switch the lights remotely using the internet. Later on I will add commands that can control certain aspects of the computer such as the music player. I think doing a full remote desktop connection to control my device is therefore not quite necessary. Does anybody know of any open source projects that can perform these functions?
So basically the problem is sending encrypted commands from a web interface to my computer at home. What would be the best method to achieve this and what programming languages should I use? I know Java, Python and C quite well, but have very little experience with web applications, such as Javascript and PHP.
I have looked at web chat examples as it is sort of similar concept to what I wish to achieve, except the text can be replaced with commands. Is this a viable solution or are there better alternatives?
Thank you
| [
"VNC\nSSH\nRemote Desktop (Windows)\n",
"You can write a WEB APPLICATION. The encryption part is solved by simple HTTPS usage. On the server side (your home computer with USB devices attached to it) you should use Python (since you're quite experienced with it) and a Python Web Framework you want (I.E. Django).\n... | [
12,
2,
2,
0,
0,
0
] | [] | [] | [
"java",
"javascript",
"php",
"python"
] | stackoverflow_0004014670_java_javascript_php_python.txt |
Q:
Python - Neaten this append/extend conditional
I have a method which I will accept either a single object or a list of objects. I want to add whatever is passed to another list. Currently, my method looks like this:
def appendOrExtend(self, item):
if type(item).__name__ == "list":
self.list.extend(item)
else:
self.list.append(item)
It seems to me that there should be a more Pythonic way of achieving this, could you suggest one?
A:
def append(self, item):
self.list.append(item)
def extend(self, item):
self.list.extend(item)
Bottom line: Don't have a method to do both things. It confuses and makes your method less useful, instead of more useful. It's also harder to test and to maintain. Also the user of your function already knows if she wants to use append or extend, so by providing a single method you're discarding the information your caller/user already knows.
Another way to write is using packing/unpacking argument syntax:
def append(self, *items):
self.list.extend(items)
that way you can call the method as
x.append('single item')
or
x.append(*list_of_items)
A:
if isinstance(item, list):
A:
Zach provides a solution for checking the type more elegant. However, I would introduce two separate methods addOneElement and addMoreElements (or something similar). This makes it - in my eyes - much more readable.
A:
If you want to be able to handle sets and tuples, you might add those. To be really flexible, maybe you should take anything iterable, but then risk confusion by strings or other objects that you want taken individually but happen to be iterable. Soon someone will tell you that this is a Bad Idea, and they will probably be right - this much flexibility in your interface makes it ambiguous.
A:
You can also do this while keeping the if test:
if not isinstance(item, list):
item = [item]
self.list.extend(item)
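If you do keep a single method, that last variant condenses into a standalone function; note that it type-checks against list only, so tuples, sets, and strings all get appended whole (an assumption you may or may not want):

```python
def append_or_extend(target, item):
    # Wrap non-list items so a single extend() handles both cases.
    if not isinstance(item, list):
        item = [item]
    target.extend(item)

items = []
append_or_extend(items, 1)
append_or_extend(items, [2, 3])
append_or_extend(items, "ab")    # not a list, so appended whole
print(items)                     # -> [1, 2, 3, 'ab']
```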
| Python - Neaten this append/extend conditional | I have a method which I will accept either a single object or a list of objects. I want to add whatever is passed to another list. Currently, my method looks like this:
def appendOrExtend(self, item):
if type(item).__name__ == "list":
self.list.extend(item)
else:
self.list.append(item)
It seems to me that there should be a more Pythonic way of achieving this, could you suggest one?
| [
"def append(self, item):\n self.list.append(item)\ndef extend(self, item):\n self.list.extend(item)\n\nBottom line: Don't have a method to do both things. It confuses and makes your method less useful, instead of more useful. It's also harder to test and to maintain. Also the user of your function already kno... | [
10,
3,
1,
1,
0
] | [] | [] | [
"append",
"extend",
"list",
"python"
] | stackoverflow_0004014982_append_extend_list_python.txt |
Q:
Regex matching items following a header in HTML
What should be a fairly simple regex extraction is confounding me. Couldn't find a similar question on SO, so happy to be pointed to one if it exists. Given the following HTML:
<h1 class="title">Title One</h1><p><a href="#">40.5</a><a href="#">31.3</a></p>
<h1 class="title alternate">Title Two</h1><p><a href="#">12.1</a><a href="#">82.0</a></p>
(amongst a larger document - the extracts will most probably run across multiple lines)
How can I construct a regular expression that finds the text within the A tags, within the first P following an H1? The regex will go in a loop, such that I can pass in the header, in order to retrieve the items that follow.
<a[^>]*>([0-9.]+?)</a> obviously matches all items in a tag (and should be fine, as a tags cannot be nested), but I can't tie them to an H1.
.+Title One.+<a[^>]*>([0-9.]+?)</a></p> fails.
I had tried to use a lookbehind, like so:
(?<=Title One.+)<a[^>]*>([0-9.]+?)</a></p> and some variations but it is only allowed for fixed width matches (which won't be the case here).
For context, this will be using Python's regex engine. I know regex isn't necessarily the best solution for this, so alternative suggestions using DOM or something else also gratefully received :)
Update
To clarify from the above, I'd like to get back the following:
{"Title One": ["40.5", "31.3"], "Title Two": ["12.1", "82.0"]}
(not that I need help composing the dictionary, but it does demonstrate how I need the values to be related to the title).
So far BeautifulSoup looks like the best shot. LXML will also probably work as the source HTML isn't really tag-soup - it's pretty well-structured, at least in the places I'm interested in.
A:
You're right, regex is absolutely the wrong tool for HTML matching.
Your question, however, sounds exactly like the problem for Beautiful Soup - a HTML parser that can deal with less-than-perfect HTML.
A:
The other obvious answer to solve this problem is BeautifulSoup -- I like that it handles the kind of crappy html that you often run into out in the wild as sensibly and gracefully as you can hope.
A:
Is this the kind of thing you're after?
>>> from lxml import etree
>>>
>>> data = """
... <h1 class="title">Title One</h1><p><a href="#">40.5</a><a href="#">31.3</a></p>
... <h1 class="title alternate">Title Two</h1><p><a href="#">12.1</a><a href="#">82.0</a></p>
... """
>>>
>>> d = etree.HTML(data)
>>> d.xpath('//h1/following-sibling::p[1]/a/text()')
['40.5', '31.3', '12.1', '82.0']
This solution uses lxml.etree and an xpath expression.
Update
>>> from lxml import etree
>>> from pprint import pprint
>>>
>>> data = """
... <h1 class="title">Title One</h1><p><a href="#">40.5</a><a href="#">31.3</a></p>
... <h1 class="title alternate">Title Two</h1><p><a href="#">12.1</a><a href="#">82.0</a></p>
... """
>>>
>>> d = etree.HTML(data)
>>> #d.xpath('//h1[following-sibling::*[1][local-name()="p"]]')
...
>>> results = {}
>>> for h in d.xpath('//h1[following-sibling::*[1][local-name()="p"]]'):
... r = results.setdefault(str(h.text),[])
... r += [ str(x) for x in h.xpath('./following-sibling::*[1][local-name()="p"]/a/text()') ]
...
>>> pprint(results)
{'Title One': ['40.5', '31.3'], 'Title Two': ['12.1', '82.0']}
Now using predicates to look ahead, this should iterate through <h1> tags which are immediately followed by <p> tags. ( Casting tag.text to strings explicitly as I have a recollection that they aren't normal strings, you'd have trouble pickling them, etc.)
A:
Don't use regex to parse html. That can't be done, by definition. Use a html parser instead. I suggest lxml.html.
lxml.html deals with badly formed html better than BeautifulSoup, is actively maintained (BeautifulSoup isn't) and is a lot faster since it uses libxml2 internally.
A:
Here's a way using just normal string manipulation
html='''
<h1 class="title">Title One</h1><p><a href="#">40.5</a>
<a href="#">31.3</a></p>
<h1 class="title alternate">Title Two</h1><p><a href="#">12.1</a><a href="#">82.0</a></p>
'''
for i in html.split("</a>"):
if "<a href" in i:
print i.split("<a href")[-1].split(">")[-1]
output
$ python test.py
40.5
31.3
12.1
82.0
I don't actually understand what you want to get, but if your requirement is SIMPLE, yes, a regex or a bit of string mangling can do it. You don't necessarily need a parser for that.
| Regex matching items following a header in HTML | What should be a fairly simple regex extraction is confounding me. Couldn't find a similar question on SO, so happy to be pointed to one if it exists. Given the following HTML:
<h1 class="title">Title One</h1><p><a href="#">40.5</a><a href="#">31.3</a></p>
<h1 class="title alternate">Title Two</h1><p><a href="#">12.1</a><a href="#">82.0</a></p>
(amongst a larger document - the extracts will most probably run across multiple lines)
How can I construct a regular expression that finds the text within the A tags, within the first P following an H1? The regex will go in a loop, such that I can pass in the header, in order to retrieve the items that follow.
<a[^>]*>([0-9.]+?)</a> obviously matches all items in a tag (and should be fine as a tags cannot be nested), but I can't tie them to an H1.
.+Title One.+<a[^>]*>([0-9.]+?)</a></p> fails.
I had tried to use look behind as so:
(?<=Title One.+)<a[^>]*>([0-9.]+?)</a></p> and some variations but it is only allowed for fixed width matches (which won't be the case here).
For context, this will be using Python's regex engine. I know regex isn't necessarily the best solution for this, so alternative suggestions using DOM or something else also gratefully received :)
Update
To clarify from the above, I'd like to get back the following:
{"Title One": ["40.5", "31.3"], "Title Two": ["12.1", "82.0"]}
(not that I need help composing the dictionary, but it does demonstrate how I need the values to be related to the title).
So far BeautifulSoup looks like the best shot. LXML will also probably work as the source HTML isn't really tag-soup - it's pretty well-structured, at least in the places I'm interested in.
| [
"You're right, regex is absolutely the wrong tool for HTML matching.\nYour question, however, sounds exactly like the problem for Beautiful Soup - a HTML parser that can deal with less-than-perfect HTML.\n",
"The other obvious answer to solve this problem is BeautifulSoup -- I like that it handles the kind of cra... | [
1,
1,
1,
0,
0
] | [] | [] | [
"html",
"parsing",
"python",
"regex"
] | stackoverflow_0004014546_html_parsing_python_regex.txt |
Q:
Using an image URL for link_callback in the pisa html to pdf library
Related to: django - pisa : adding images to PDF output
I've got a site that uses the Google Chart API to display a bunch of reports to the user, and I'm trying to implement a PDF version. I'm using the link_callback parameter in pisa.pisaDocument which works great for local media (css/images), but I'm wondering if it would work with remote images (using a google charts URL).
From the documentation on the pisa website, they imply this is possible, but they don't show how:
Normally pisa expects these files to be found on the local drive. They may also be referenced relative to the original document. But the programmer might want to load from different kinds of sources like the Internet via HTTP requests or from a database or anything else.
This is in a Django project, but that's pretty irrelevant. Here's what I'm using for rendering:
html = render_to_string('reporting/pdf.html', keys,
context_instance=RequestContext(request))
result = StringIO.StringIO()
pdf = pisa.pisaDocument(
StringIO.StringIO(html.encode('ascii', 'xmlcharrefreplace')),
result, link_callback=link_callback)
return HttpResponse(result.getvalue(), mimetype='application/pdf')
I tried having the link_callback return a urllib request object, but it does not seem to work:
def link_callback(uri, rel):
if uri.find('chxt') != -1:
url = "%s?%s" % (settings.GOOGLE_CHART_URL, uri)
return urllib2.urlopen(url)
return os.path.join(settings.MEDIA_ROOT, uri.replace(settings.MEDIA_URL, ""))
The PDF it generates comes out perfectly except that the google charts images are not there.
A:
Well this was a whole lot easier than I expected. In your link_callback method, if the uri is a remote image, simply return that value.
def link_callback(uri, rel):
if uri.find('chart.apis.google.com') != -1:
return uri
return os.path.join(settings.MEDIA_ROOT, uri.replace(settings.MEDIA_URL, ""))
The browser is a lot less picky about the image URL, so make sure the uri is properly quoted for pisa. I had space characters in mine which is why it was failing at first (replacing w/ '+' fixed it).
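On the quoting point: rather than hand-replacing spaces with '+', it's safer to build the query string with the standard library. A hedged sketch with made-up chart parameters (on Python 3 the function is urllib.parse.urlencode; on Python 2 it is urllib.urlencode):

```python
from urllib.parse import urlencode

# Hypothetical chart parameters -- the point is only that urlencode()
# replaces the spaces that tripped pisa up (with '+') and escapes the
# other reserved characters.
params = {
    "cht": "p3",
    "chs": "250x100",
    "chd": "t:60,40",
    "chl": "Hello World|Second Slice",
}
url = "http://chart.apis.google.com/chart?" + urlencode(params)
print(url)
```

The resulting URI contains no raw spaces, so both the browser and pisa will accept it.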
| Using an image URL for link_callback in the pisa html to pdf library | Related to: django - pisa : adding images to PDF output
I've got a site that uses the Google Chart API to display a bunch of reports to the user, and I'm trying to implement a PDF version. I'm using the link_callback parameter in pisa.pisaDocument which works great for local media (css/images), but I'm wondering if it would work with remote images (using a google charts URL).
From the documentation on the pisa website, they imply this is possible, but they don't show how:
Normally pisa expects these files to be found on the local drive. They may also be referenced relative to the original document. But the programmer might want to load from different kinds of sources like the Internet via HTTP requests or from a database or anything else.
This is in a Django project, but that's pretty irrelevant. Here's what I'm using for rendering:
html = render_to_string('reporting/pdf.html', keys,
context_instance=RequestContext(request))
result = StringIO.StringIO()
pdf = pisa.pisaDocument(
StringIO.StringIO(html.encode('ascii', 'xmlcharrefreplace')),
result, link_callback=link_callback)
return HttpResponse(result.getvalue(), mimetype='application/pdf')
I tried having the link_callback return a urllib request object, but it does not seem to work:
def link_callback(uri, rel):
if uri.find('chxt') != -1:
url = "%s?%s" % (settings.GOOGLE_CHART_URL, uri)
return urllib2.urlopen(url)
return os.path.join(settings.MEDIA_ROOT, uri.replace(settings.MEDIA_URL, ""))
The PDF it generates comes out perfectly except that the google charts images are not there.
| [
"Well this was a whole lot easier than I expected. In your link_callback method, if the uri is a remote image, simply return that value.\ndef link_callback(uri, rel):\n if uri.find('chart.apis.google.com') != -1:\n return uri\n return os.path.join(settings.MEDIA_ROOT, uri.replace(settings.MEDIA_URL, \"... | [
6
] | [] | [] | [
"pdf_generation",
"pisa",
"python"
] | stackoverflow_0004000951_pdf_generation_pisa_python.txt |
Q:
Create a new variable as named from input in Python?
This is something that I've been questioning for some time. How would I create a variable at runtime, named by the value of another variable? So, for example, the code would ask the user to input a string. A variable would then be created named after that string with a default value of "default". Is this even possible?
A:
It is possible, but it's certainly not advised. You can access the global namespace as a dict (it's a dict internally) and add entries to it.
If you were writing an interactive interpreter, say for doing maths or something, you would actually pass a dict to each eval() or exec that you could then re-use as its local namespace.
As a quick, bad, example, don't do this at home:
g = globals() # get a reference to the globals dict
g[raw_input("Name Please")] = raw_input("Value Please")
print foo
Run that, it'll traceback unless you provide 'foo' to the first prompt.
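The usual, safer alternative is to keep the user-named values in a plain dictionary instead, so nothing ends up in your real namespace at all. A minimal sketch of that approach:

```python
# Instead of injecting user-chosen names into globals(), keep them in
# an ordinary dict -- the name is just a key, and nothing can collide
# with your program's actual variables.
variables = {}

def set_variable(name, value="default"):
    variables[name] = value

# Simulating two rounds of user input:
set_variable("foo")        # user typed "foo", gets the default value
set_variable("bar", 42)

print(variables["foo"])    # default
print(variables)           # {'foo': 'default', 'bar': 42}
```

Looking a variable up is then just variables[name], with an ordinary KeyError (or dict.get default) when the name was never entered.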
| Create a new variable as named from input in Python? | This is something that I've been questioning for some time. How would I create a variable at runtime, named by the value of another variable? So, for example, the code would ask the user to input a string. A variable would then be created named after that string with a default value of "default". Is this even possible?
| [
"It is possible, but it's certainly not advised. You can access the global namespace as a dict (it's a dict internally) and add entries to it.\nIf you were doing an interactive interpreter, say, for doing maths, or something. You would actually pass a dict to each eval() or exec that you could then re-use as it's l... | [
4
] | [] | [] | [
"initialization",
"python",
"runtime",
"variables"
] | stackoverflow_0004015550_initialization_python_runtime_variables.txt |
Q:
Compression with best ratio in Python?
Which compression method in Python has the best compression ratio?
Is the commonly used zlib.compress() the best or are there some better options? I need to get the best compression ratio possible.
I am compressing strings and sending them over UDP. A typical string I compress has about 1,700,000 bytes.
A:
I'm sure that there might be some more obscure formats with better compression, but lzma is the best, of those that are well supported. There are some python bindings here.
EDIT
Don't pick a format without testing, some algorithms do better depending on the data set.
A:
If you are willing to trade performance for better compression then the bz2 library usually gives better results than the gz (zlib) library.
There are other compression libraries like xz (LZMA2) that might give even better results but they do not appear to be in the core distribution of python.
Python Doc for BZ2 class
EDIT: Depending on the type of image you might not get much additional compression. Many image formats are already compressed unless the image is raw, BMP, or uncompressed TIFF. Testing between various compression types would be highly recommended.
EDIT2: If you do decide to do image compression, ImageMagick supports Python bindings and many image conversion types.
Image Magick
Image Formats Supported
A:
The best compression algorithm definitely depends on the kind of data you are dealing with. Unless you are working with a list of random numbers stored as a string (in which case no compression algorithm will work), knowing the kind of data usually lets you apply much better algorithms than general-purpose ones (see the other answers for good, ready-to-use general compression algorithms).
If you are dealing with images you should definitely choose a lossy compression format (i.e. pixel-aware) in preference to any lossless one. That will give you much better results. Recompressing with a lossless format on top of a lossy one is a waste of time.
I would search through PIL to see what I can use. Something like converting the image to JPEG with a compression level matching the desired quality before sending should be very efficient.
You should also be very cautious if using UDP: it can lose packets, and most compression formats are very sensitive to missing parts of a file. Granted, that can be managed at the application level.
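A quick way to act on the "test, don't guess" advice from the answers above: on modern Pythons all three algorithms are in the standard library (lzma since 3.3, so this sketch postdates these answers), and comparing them on your own data takes a few lines:

```python
import bz2
import lzma   # standard library since Python 3.3
import zlib

# Compare ratios on one compressible sample.  Which algorithm wins
# depends heavily on the data, so run this against your own ~1.7 MB
# strings before committing to a format.
data = b"the quick brown fox jumps over the lazy dog. " * 2000

for name, packed in [("zlib", zlib.compress(data, 9)),
                     ("bz2 ", bz2.compress(data, 9)),
                     ("lzma", lzma.compress(data))]:
    print("%s: %7d -> %6d bytes" % (name, len(data), len(packed)))
```

For highly repetitive text like this sample, all three shrink the input dramatically; on already-compressed data (JPEGs, archives) none of them will help.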
| Compression with best ratio in Python? | Which compression method in Python has the best compression ratio?
Is the commonly used zlib.compress() the best or are there some better options? I need to get the best compression ratio possible.
I am compressing strings and sending them over UDP. A typical string I compress has about 1,700,000 bytes.
| [
"I'm sure that there might be some more obscure formats with better compression, but lzma is the best, of those that are well supported. There are some python bindings here.\nEDIT\nDon't pick a format without testing, some algorithms do better depending on the data set.\n",
"If you are willing to trade performan... | [
9,
5,
3
] | [] | [] | [
"compression",
"python"
] | stackoverflow_0004015425_compression_python.txt |
Q:
Figure out child type with Django MTI or specify type as field?
I'm setting up a data model in django using multiple-table inheritance (MTI) like this:
class Metric(models.Model):
account = models.ForeignKey(Account)
date = models.DateField()
value = models.FloatField()
calculation_in_progress = models.BooleanField()
type = models.CharField( max_length=20, choices= METRIC_TYPES ) # Appropriate?
def calculate(self):
# default calculation...
class WebMetric(Metric):
url = models.URLField()
def calculate(self):
# web-specific calculation...
class TextMetric(Metric):
text = models.TextField()
def calculate(self):
# text-specific calculation...
My instinct is to put a 'type' field in the base class as shown here, so I can tell which sub-class any Metric object belongs to. It would be a bit of a hassle to keep this up to date all the time, but possible. But do I need to do this? Is there some way that django handles this automatically?
When I call Metric.objects.all(), every object returned is an instance of Metric, never the subclasses. So if I call .calculate() I never get the sub-class's behavior.
I could write a function on the base class that tests to see if I can cast it to any of the sub-types like:
def determine_subtype(self):
try:
self.webmetric
return WebMetric
except WebMetric.DoesNotExist:
pass
# Repeat for every sub-class
but this seems like a bunch of repetitious code. And it's also not something that can be included in a SELECT filter -- only works in python-space.
What's the best way to handle this?
A:
While it might offend some people's sensibilities, the only practical way to solve this problem is to put either a field or a method in the base class which says what kind of object each record really is. The problem with the method you describe is that it requires a separate database query for every type of subclass, for each object you're dealing with. This could get extremely slow when working with large querysets. A better way is to use a ForeignKey to the django Content Type class.
@Carl Meyer wrote a good solution here: How do I access the child classes of an object in django without knowing the name of the child class?
Single Table Inheritance could help alleviate this issue, depending on how it gets implemented. But for now Django does not support it (see Single Table Inheritance in Django), so it's not a helpful suggestion.
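To make the dispatch idea concrete without depending on any particular Django version, here is a hedged, framework-free sketch of the pattern. This is NOT Django API -- in Django you would store a ForeignKey to ContentType as the linked answer does -- it only shows how a stored type name plus a registry recovers subclass behaviour from base-class rows:

```python
# Each row carries a type field naming its concrete class; a registry
# maps that name back to the class so calculate() dispatches correctly,
# with no per-row try/except probing of child tables.
REGISTRY = {}

def register(cls):
    REGISTRY[cls.__name__] = cls
    return cls

@register
class Metric(object):
    def __init__(self, value, type_name=None):
        self.value = value
        self.type_name = type_name or type(self).__name__

    def calculate(self):
        return self.value           # default calculation

@register
class WebMetric(Metric):
    def calculate(self):
        return self.value * 2       # web-specific calculation

# The ORM hands back base-class instances; the stored type field
# recovers the subclass behaviour without extra queries per row.
row = Metric(10, type_name="WebMetric")
print(REGISTRY[row.type_name].calculate(row))   # 20
```

The same shape works whether the type field holds a class name, a choices key, or a ContentType foreign key; the point is a single stored discriminator instead of N speculative lookups.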
| Figure out child type with Django MTI or specify type as field? | I'm setting up a data model in django using multiple-table inheritance (MTI) like this:
class Metric(models.Model):
account = models.ForeignKey(Account)
date = models.DateField()
value = models.FloatField()
calculation_in_progress = models.BooleanField()
type = models.CharField( max_length=20, choices= METRIC_TYPES ) # Appropriate?
def calculate(self):
# default calculation...
class WebMetric(Metric):
url = models.URLField()
def calculate(self):
# web-specific calculation...
class TextMetric(Metric):
text = models.TextField()
def calculate(self):
# text-specific calculation...
My instinct is to put a 'type' field in the base class as shown here, so I can tell which sub-class any Metric object belongs to. It would be a bit of a hassle to keep this up to date all the time, but possible. But do I need to do this? Is there some way that django handles this automatically?
When I call Metric.objects.all(), every object returned is an instance of Metric, never the subclasses. So if I call .calculate() I never get the sub-class's behavior.
I could write a function on the base class that tests to see if I can cast it to any of the sub-types like:
def determine_subtype(self):
try:
self.webmetric
return WebMetric
except WebMetric.DoesNotExist:
pass
# Repeat for every sub-class
but this seems like a bunch of repetitious code. And it's also not something that can be included in a SELECT filter -- only works in python-space.
What's the best way to handle this?
| [
"While it might offend some people's sensibilities, the only practical way to solve this problem is to put either a field or a method in the base class which says what kind of object each record really is. The problem with the method you describe is that it requires a separate database query for every type of subc... | [
1
] | [
"\nBut do I need to do this? \n\nNever. Never. Never.\n\nIs there some way that django handles this automatically?\n\nYes. It's called \"polymorphism\".\nYou never need to know the subclass. Never.\n\"What about my WebMetric.url and my TextMetric.text attributes?\"\nWhat will you do with these attributes? Defi... | [
-1
] | [
"django",
"django_models",
"python"
] | stackoverflow_0003990470_django_django_models_python.txt |
Q:
Does master/worker scale?
I have a master/worker model implemented with separate python processes. The master process holds job/result lists which are protected by mutexes. Many workers run on many machines (about 200 worker processes).
I have noticed that on each machine the workers tend to do 0-20% more or less work than other worker processes and that the machines do 0-20% more or less work than others. The fastest/slowest workers and machines are different every day.
Is this a conceptual problem of the master/worker model, does it hint to a problematic implementation or is everything fine?
A:
The simplest explanation for the +/- 20% thing is that you're seeing a load balancing problem; some of the workers are just getting 20% more work than some of their peers. This could represent an implementation problem, or it could just be discreteness; if you have 200 worker processes but 1040 roughly-equal jobs to do, then 1/5 of the worker processes are going to have an extra 20% of work to do, and there's nothing to be done about that unless you can subdivide the work more finely.
Master/worker scales (and handles these load balancing issues about as well and easily as anything else) up to the point where contention for the shared resources in the master process starts to become non-trivial. You can push scaling forward a little bit by reducing the critical sections (those protected by mutexes) to an absolute minimum; by aggregating work units so that there are fewer requests (but notice that this works in the opposite direction of improving load balancing); or by having multiple masters (potentially a hierarchy of masters). If that doesn't work, you have to start considering more peer-to-peer work scheduling algorithms, where there is no longer a single bottleneck. A peer-to-peer analogue of master/worker is called work stealing, which is one of those things that (IMHO) doesn't seem like it should work until someone shows you that it does; it has been recently popularized by Cilk. The idea is that everyone gets a list of tasks, and if the peers need more work they steal it from each other randomly and continue chugging away until they're done. It's more complicated to implement than master/worker, but avoids the single-master bottleneck.
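The discreteness argument at the start of the answer can be made concrete with a few lines (the 1040-job figure is the answer's own illustrative number, not from the question):

```python
# Dealing 1040 roughly-equal jobs to 200 workers leaves 40 workers
# with 6 jobs and 160 with 5 -- a built-in 20% spread between the
# busiest and idlest workers before any real imbalance enters the
# picture.
jobs, workers = 1040, 200
loads = [jobs // workers + (1 if i < jobs % workers else 0)
         for i in range(workers)]

print(max(loads), min(loads))                    # 6 5
print(loads.count(max(loads)))                   # 40 workers (1/5 of them)
print(round(max(loads) / min(loads) - 1, 2))     # 0.2, i.e. +20%
```

This is why finer-grained work units shrink the observed spread: the +1 leftover job becomes a smaller fraction of each worker's total.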
| Does master/worker scale? | I have a master/worker model implemented with separate python processes. The master process holds job/result lists which are protected by mutexes. Many workers run on many machines (about 200 worker processes).
I have noticed that on each machine the workers tend to do 0-20% more or less work than other worker processes and that the machines do 0-20% more or less work than others. The fastest/slowest workers and machines are different every day.
Is this a conceptual problem of the master/worker model, does it hint to a problematic implementation or is everything fine?
| [
"The simplest explanation for the +/- 20% thing is that you're seeing a load balancing problem; some of the workers are just getting 20% more work than some of their peers. This could represent an implementation problem, or it could just be discreteness; if you have 200 worker processes but 1040 roughly-equal jobs... | [
3
] | [] | [] | [
"concurrency",
"python"
] | stackoverflow_0004015348_concurrency_python.txt |
Q:
Can I include sub-config files in my mercurial .hgrc?
I want to keep my main .hgrc in revision control, because I have a fair amount of customization it in, but I want to have different author names depending on which machine I'm using (work, home, &c.).
The way I'd do this in a bash script is to source a host-local bash script that is ignored by Mercurial, but I'm not sure how to do this in the config file format Mercurial uses.
A:
You can do this using the not-often-used but been-there-awhile include syntax.
Put your machine specific stuff in your ~/.hgrc and then include a constant-across-all-systems boilerplate config file. Example:
[ui]
username=You <you@somewhere>
%include .hgrc-boilerplate
Track the .hgrc-boilerplate file in revision control.
See the hgrc man page for more details.
| Can I include sub-config files in my mercurial .hgrc? | I want to keep my main .hgrc in revision control, because I have a fair amount of customization it in, but I want to have different author names depending on which machine I'm using (work, home, &c.).
The way I'd do this in a bash script is to source a host-local bash script that is ignored by Mercurial, but I'm not sure how to do this in the config file format Mercurial uses.
| [
"You can do this using the not-often-used but been-there-awhile include syntax.\nPut your machine specific stuff in your ~/.hgrc and then include a constant-across-all-systems boilerplate config file. Example:\n[ui]\nusername=You <you@somewhere>\n\n%include .hgrc-boilerplate\n\nTrack the .hgrc-boilerplate file in ... | [
20
] | [] | [] | [
"dotfiles",
"hgrc",
"mercurial",
"python"
] | stackoverflow_0004015901_dotfiles_hgrc_mercurial_python.txt |
Q:
Running a PIL based Python program as a web application
I have created a Python program that converts strings into images at run time, using PIL's ImageDraw module.
I want to run this program interfaced with a simple web form that should display a text field to input the string, and on pressing a button it should display the string converted to an image.
It would be a great deal of help if one could guide me precisely as to how I can go about achieving this.
PS: I have downloaded the Python Web Module as of now and I'm currently exploring it, just in case that helps.
A:
I would suggest using web.py from http://webpy.org/. It is an excellent micro-framework for doing simple or one-off web apps.
Does your image creator write files or return streams? Anyway, web.py can return both files and stream large files to the browser. See http://webpy.org/images for an example of how to set headers based on content type.
If you are planning on writing a larger web app, maybe you should look into Turbogears, Pylons or Django.
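Whatever framework you pick, the handler has the same shape. Here is a hedged, framework-free WSGI sketch of the pattern this answer describes (web.py, Django, etc. all sit on top of this interface); render_text() is a made-up placeholder standing in for the asker's PIL ImageDraw code, not a real renderer:

```python
def render_text(text):
    # Placeholder: a real implementation would draw `text` onto a PIL
    # Image and return the encoded PNG bytes.
    return b"PNG-bytes-for:" + text.encode("utf-8")

def app(environ, start_response):
    # Read the string from the request, render it, and stream the
    # bytes back with an image content type.
    text = environ.get("QUERY_STRING", "") or "hello"
    body = render_text(text)
    start_response("200 OK", [("Content-Type", "image/png"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercising the WSGI callable directly, without running a server:
def fake_start_response(status, headers):
    print(status, dict(headers)["Content-Type"])

print(b"".join(app({"QUERY_STRING": "hi"}, fake_start_response)))
```

The frameworks mainly add routing, form parsing, and templating around this core; the content-type header is what makes the browser display the response as an image rather than text.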
A:
http://webpython.codepoint.net/ or http://www.linux.com/archive/articles/136602 ?
| Running a PIL based Python program as a web application | I have created a Python program that converts strings into images at run time, using PIL's ImageDraw module.
I want to run this program interfaced with a simple web form that should display a text field to input the string, and on pressing a button it should display the string converted to an image.
It would be a great deal of help if one could guide me precisely as to how I can go about achieving this.
PS: I have downloaded the Python Web Module as of now and I'm currently exploring it, just in case that helps.
| [
"I would suggest using web.py from http://webpy.org/. It is an excellent micro-framework for doing simple or one-off web apps. \nDo your image creator write files or return streams? Anyway, web.py can return both files and stream large files to the browser. See http://webpy.org/images for an example on how to set h... | [
1,
0
] | [] | [] | [
"image_manipulation",
"python",
"web_applications",
"webforms"
] | stackoverflow_0004016101_image_manipulation_python_web_applications_webforms.txt |
Q:
Recursive problem
I have a problem when I import classes from one module into another. I have
those classes in different modules:
crm.py
from CRMContactInformation import CRMContactInformation
class CRM(rdb.Model):
"""Set up crm table in the database"""
rdb.metadata(metadata)
rdb.tablename("crms")
id = Column("id", Integer, ForeignKey("screens.id"),
primary_key=True)
screen_id = Column("screen_id", Integer, )
contactInformation = relationship(CRMContactInformation,
userlist=False, backref="crms")
....
CRMContactInformation.py
from CRM import CRM
class CRMContactInformation(rdb.Model):
"""Set up crm contact information table in the database"""
rdb.metadata(metadata)
rdb.tablename("crm_contact_informations")
id = Column("id", Integer, ForeignKey(CRM.id), primary_key=True)
owner = Column("owner", String(50))
.....
As you can see, I have a recursive problem because I import
CRMContactInformation in CRM and CRM in CRMContactInformation. I got
this error or similar:
“AttributeError: ‘module’ object has no attribute ”
I tried to change the imports importing the whole path. It didn't work
out either.
Is there any way I can use the metadata object to access the
attributes of the tables? or another way to solve this?
Thanks in advance!
A:
Delay the imports:
class CRM(rdb.Model):
"""Set up crm table in the database"""
rdb.metadata(metadata)
rdb.tablename("crms")
id = Column("id", Integer, ForeignKey("screens.id"), primary_key=True)
screen_id = Column("screen_id", Integer, )
....
from CRMContactInformation import CRMContactInformation
CRM.contactInformation = relationship(CRMContactInformation, userlist=False, backref="crms")
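The delayed-import trick generalizes beyond this ORM case. Here is a hedged, self-contained demonstration with two throwaway modules (mod_a/mod_b and the A/B classes are invented for the demo) that have the same circular dependency as CRM and CRMContactInformation:

```python
import os
import sys
import tempfile
import textwrap

# mod_a delays its import of mod_b until after its own class exists --
# the same trick as in the answer above.
tmp = tempfile.mkdtemp()

with open(os.path.join(tmp, "mod_a.py"), "w") as f:
    f.write(textwrap.dedent("""\
        class A(object):
            pass

        import mod_b                  # delayed: A already exists
        A.partner = mod_b.B
    """))

with open(os.path.join(tmp, "mod_b.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import mod_a                  # safe: mod_a.A is defined by now

        class B(object):
            partner = mod_a.A
    """))

sys.path.insert(0, tmp)
import mod_a

print(mod_a.A.partner.__name__)              # B
print(mod_a.A.partner.partner is mod_a.A)    # True
```

If rdb is a declarative SQLAlchemy wrapper, there is also a cleaner way out: relationship() accepts the target class name as a string, e.g. relationship("CRMContactInformation", ...), which is resolved later against the declarative registry and avoids the import at class-definition time altogether.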
| Recursive problem | I have a problem when I import classes from one module into another. I have
those classes in different modules:
crm.py
from CRMContactInformation import CRMContactInformation
class CRM(rdb.Model):
"""Set up crm table in the database"""
rdb.metadata(metadata)
rdb.tablename("crms")
id = Column("id", Integer, ForeignKey("screens.id"),
primary_key=True)
screen_id = Column("screen_id", Integer, )
contactInformation = relationship(CRMContactInformation,
userlist=False, backref="crms")
....
CRMContactInformation.py
from CRM import CRM
class CRMContactInformation(rdb.Model):
"""Set up crm contact information table in the database"""
rdb.metadata(metadata)
rdb.tablename("crm_contact_informations")
id = Column("id", Integer, ForeignKey(CRM.id), primary_key=True)
owner = Column("owner", String(50))
.....
As you can see, I have a recursive problem because I import
CRMContactInformation in CRM and CRM in CRMContactInformation. I got
this error or similar:
“AttributeError: ‘module’ object has no attribute ”
I tried to change the imports importing the whole path. It didn't work
out either.
Is there any way I can use the metadata object to access the
attributes of the tables? or another way to solve this?
Thanks in advance!
| [
"Delay the imports:\nclass CRM(rdb.Model):\n \"\"\"Set up crm table in the database\"\"\"\n rdb.metadata(metadata)\n rdb.tablename(\"crms\")\n\n id = Column(\"id\", Integer, ForeignKey(\"screens.id\"), primary_key=True)\n screen_id = Column(\"screen_id\", Integer, )\n\n ...... | [
0
] | [] | [] | [
"python",
"sqlalchemy"
] | stackoverflow_0004016047_python_sqlalchemy.txt |
Q:
Prevent OpenSSL from using system certificates?
How can I prevent OpenSSL (specifically, Python's ssl module) from using system certificate authorities?
In other words, I would like it to trust only the certificate authorities which I specify, and nothing else:
ssl_socket = ssl.wrap_socket(newsocket, server_side=True, certfile="my_cert.pem",
ca_certs=MY_TRUSTED_CAs, # <<< Only CAs specified here
cert_reqs=ssl.CERT_REQUIRED, ssl_version=ssl.PROTOCOL_TLSv1)
A:
I've just run a few tests, and listing your selection of CAs in the ca_certs parameters is exactly what you need.
The system I've tried it on is Linux with Python 2.6. If you don't use ca_certs, it doesn't let you use cert_reqs=ssl.CERT_REQUIRED:
Traceback (most recent call last):
File "sockettest.py", line 18, in <module>
cert_reqs=ssl.CERT_REQUIRED, ssl_version=ssl.PROTOCOL_TLSv1)
File "/usr/lib/python2.6/ssl.py", line 350, in wrap_socket
suppress_ragged_eofs=suppress_ragged_eofs)
File "/usr/lib/python2.6/ssl.py", line 113, in __init__
cert_reqs, ssl_version, ca_certs)
ssl.SSLError: _ssl.c:317: No root certificates specified for verification of other-side certificates.
I've also tried to use a client to send a certificate that's not from a CA in the ca_certs parameter, and I get ssl_error_unknown_ca_alert (as expected).
Note that either way, there's no client-certificate CA list sent (in the certificate_authorities list in the CertificateRequest TLS message), but that wouldn't be required. It's only useful to help the client choose the certificate.
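For readers on newer Pythons: ssl.wrap_socket() has since been superseded, and the same trust policy is expressed through SSLContext. A hedged sketch (3.6+ API; the cafile path is hypothetical):

```python
import ssl

# A freshly constructed SSLContext trusts *no* CAs; the system store
# is only consulted if you explicitly call load_default_certs(), so
# simply never calling it gives the behaviour the question asks for.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED
# ctx.load_verify_locations(cafile="my_trusted_cas.pem")  # your CAs only

print(ctx.get_ca_certs())   # [] -- nothing trusted until you load it
```

Wrapping the socket is then ctx.wrap_socket(sock, server_side=True), with the server certificate loaded via ctx.load_cert_chain().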
| Prevent OpenSSL from using system certificates? | How can I prevent OpenSSL (specifically, Python's ssl module) from using system certificate authorities?
In other words, I would like it to trust only the certificate authorities which I specify, and nothing else:
ssl_socket = ssl.wrap_socket(newsocket, server_side=True, certfile="my_cert.pem",
ca_certs=MY_TRUSTED_CAs, # <<< Only CAs specified here
cert_reqs=ssl.CERT_REQUIRED, ssl_version=ssl.PROTOCOL_TLSv1)
| [
"I've just run a few tests, and listing your selection of CAs in the ca_certs parameters is exactly what you need.\nThe system I've tried it on is Linux with Python 2.6. If you don't use ca_certs, it doesn't let you use cert_reqs=ssl.CERT_REQUIRED:\nTraceback (most recent call last):\n File \"sockettest.py\", line... | [
2
] | [] | [] | [
"openssl",
"python",
"ssl"
] | stackoverflow_0004006489_openssl_python_ssl.txt |
Q:
Django Celery causes an import error on runserver command
When I issue a runserver command, an ImportError is raised from djcelery (Django Celery).
% python manage.py runserver
~/Workspace/django-projects/no-labels/src
Validating models...
Unhandled exception in thread started by <function inner_run at 0x1ef7320>
Traceback (most recent call last):
File "/home/damon/Workspace/django-projects/no-labels/env/lib/python2.6/site-packages/django/core/management/commands/runserver.py", line 48, in inner_run
self.validate(display_num_errors=True)
File "/home/damon/Workspace/django-projects/no-labels/env/lib/python2.6/site-packages/django/core/management/base.py", line 249, in validate
num_errors = get_validation_errors(s, app)
File "/home/damon/Workspace/django-projects/no-labels/env/lib/python2.6/site-packages/django/core/management/validation.py", line 28, in get_validation_errors
for (app_name, error) in get_app_errors().items():
File "/home/damon/Workspace/django-projects/no-labels/env/lib/python2.6/site-packages/django/db/models/loading.py", line 146, in get_app_errors
self._populate()
File "/home/damon/Workspace/django-projects/no-labels/env/lib/python2.6/site-packages/django/db/models/loading.py", line 64, in _populate
self.load_app(app_name)
File "/home/damon/Workspace/django-projects/no-labels/env/lib/python2.6/site-packages/django/db/models/loading.py", line 78, in load_app
models = import_module('.models', app_name)
File "/home/damon/Workspace/django-projects/no-labels/env/lib/python2.6/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/home/damon/Workspace/django-projects/no-labels/env/lib/python2.6/site-packages/djcelery/models.py", line 14, in <module>
from celery.app import default_app
ImportError: No module named app
The same issue occurs when manually trying to import celery.app.default_app in the python console:
>>> from celery.app import default_app
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named app
A:
celery.app was only added on September 14. You'll need to be running a copy of celery released since then.
| Django Celery causes an import error on runserver command | When I issue a runserver command, an ImportError is raised from djcelery (Django Celery).
% python manage.py runserver
~/Workspace/django-projects/no-labels/src
Validating models...
Unhandled exception in thread started by <function inner_run at 0x1ef7320>
Traceback (most recent call last):
File "/home/damon/Workspace/django-projects/no-labels/env/lib/python2.6/site-packages/django/core/management/commands/runserver.py", line 48, in inner_run
self.validate(display_num_errors=True)
File "/home/damon/Workspace/django-projects/no-labels/env/lib/python2.6/site-packages/django/core/management/base.py", line 249, in validate
num_errors = get_validation_errors(s, app)
File "/home/damon/Workspace/django-projects/no-labels/env/lib/python2.6/site-packages/django/core/management/validation.py", line 28, in get_validation_errors
for (app_name, error) in get_app_errors().items():
File "/home/damon/Workspace/django-projects/no-labels/env/lib/python2.6/site-packages/django/db/models/loading.py", line 146, in get_app_errors
self._populate()
File "/home/damon/Workspace/django-projects/no-labels/env/lib/python2.6/site-packages/django/db/models/loading.py", line 64, in _populate
self.load_app(app_name)
File "/home/damon/Workspace/django-projects/no-labels/env/lib/python2.6/site-packages/django/db/models/loading.py", line 78, in load_app
models = import_module('.models', app_name)
File "/home/damon/Workspace/django-projects/no-labels/env/lib/python2.6/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/home/damon/Workspace/django-projects/no-labels/env/lib/python2.6/site-packages/djcelery/models.py", line 14, in <module>
from celery.app import default_app
ImportError: No module named app
The same issue occurs when manually trying to import celery.app.default_app in the python console:
>>> from celery.app import default_app
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named app
| [
"celery.app was only added on September 14. You'll need to be running a copy of celery released since then.\n"
] | [
3
] | [] | [] | [
"celery",
"django",
"importerror",
"python"
] | stackoverflow_0004016704_celery_django_importerror_python.txt |
Q:
Using the video-card memory (ram) to store objects
Is it possible to store objects in the video-card memory instead of the ram? I have been using StringIO to store some objects in the RAM; would it be possible to allocate some of the video-card's memory for this purpose?
A:
Not without OS support. PyOpenCL might have something that you can leverage for that.
| Using the video-card memory (ram) to store objects | Is it possible to store objects in the video-card memory instead of the ram? I have been using StringIO to store some objects in the RAM; would it be possible to allocate some of the video-card's memory for this purpose?
| [
"Not without OS support. PyOpenCL might have something that you can leverage for that.\n"
] | [
2
] | [] | [] | [
"python",
"video_card"
] | stackoverflow_0004016861_python_video_card.txt |
Q:
Python and Postgres on Red Hat
I've been having trouble installing psycopg2 on linux. I receive the following error when I try to import psycopg2.
Python 2.6.4 (r264:75706, Nov 19 2009, 14:52:22)
[GCC 3.4.6 20060404 (Red Hat 3.4.6-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import psycopg2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.6/site-packages/psycopg2/__init__.py", line 69, in <module>
from _psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: /usr/local/lib/python2.6/site-packages/psycopg2/_psycopg.so: undefined symbol: PQserverVersion
I'm using Postgresql 9.0.1, Psycopg2 2.2.2, Python 2.6.4, and RHEL 4.
The problem is identical to this question from a year ago, which was never answered: http://code.activestate.com/lists/python-list/169772/.
Has anyone seen this error? Any suggestions would be much appreciated.
EDIT: This same combination of Postgresql 9.0.1, Psycopg2 2.2.2, and Python 2.6.4 worked fine on my mac (snow leopard). So I expect the problem is something particular to Red Hat.
A:
Red Hat comes with a build of postgres, which can conflict with a custom installation. Python uses pg_config to configure the psycopg2 build. I installed postgres into /usr/local/pgsql/, but calling which pg_config returned /usr/bin/pg_config/.
In the psycopg2 build directory, there is a file setup.cfg, which lets you explicitly define the path to pg_config:
pg_config=/usr/local/pgsql/bin/pg_config
Setting this one parameter and re-compiling solved my problem.
| Python and Postgres on Red Hat | I've been having trouble installing psycopg2 on linux. I receive the following error when I try to import psycopg2.
Python 2.6.4 (r264:75706, Nov 19 2009, 14:52:22)
[GCC 3.4.6 20060404 (Red Hat 3.4.6-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import psycopg2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.6/site-packages/psycopg2/__init__.py", line 69, in <module>
from _psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: /usr/local/lib/python2.6/site-packages/psycopg2/_psycopg.so: undefined symbol: PQserverVersion
I'm using Postgresql 9.0.1, Psycopg2 2.2.2, Python 2.6.4, and RHEL 4.
The problem is identical to this question from a year ago, which was never answered: http://code.activestate.com/lists/python-list/169772/.
Has anyone seen this error? Any suggestions would be much appreciated.
EDIT: This same combination of Postgresql 9.0.1, Psycopg2 2.2.2, and Python 2.6.4 worked fine on my mac (snow leopard). So I expect the problem is something particular to Red Hat.
| [
"Red hat comes with a build of postgres, which can conflict with a custom installation. Python uses pg_config to configure the psycopg2 build. I installed postgres into /usr/local/pgsql/, but calling which pg_config returned /usr/bin/pg_config/.\nIn the psycopg2 build directory, there is a file setup.cfg, which let... | [
2
] | [] | [] | [
"django",
"linux",
"postgresql",
"python"
] | stackoverflow_0004016594_django_linux_postgresql_python.txt |
Q:
Pass Password to runas from Python
I need to run a file as another user without it prompting for a password, from my script. How is this done?
A:
There's an executable program called SANUR.EXE that's made for just this kind of situation: you can use it to pipe in the password on the command-line, like this: runas /user:domain\username cmd.exe | sanur mysekritpassword.
A:
Have the user add whatever it is as a scheduled task with a specific name but no schedule, so it can only be invoked manually. They will need to enter the account credentials when creating the task, but only once. Then you can simply tell schtasks (Windows command line tool) to run it.
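As a sketch of the scheduled-task route from a Python script: the task name below is hypothetical and must match whatever task the user created (entering the target account's credentials at creation time). The actual invocation only works on Windows, so it is left commented out here.

```python
import subprocess  # used for the real invocation on Windows (commented out below)

# Hypothetical task name: the user creates this task once, with stored credentials.
TASK_NAME = 'RunAsOtherUser'

def schtasks_command(task_name):
    # schtasks /Run /TN <name> triggers the task under its saved account,
    # with no password prompt at run time.
    return ['schtasks', '/Run', '/TN', task_name]

cmd = schtasks_command(TASK_NAME)
# On Windows: subprocess.check_call(cmd)
```

Because the credentials are stored with the task, the script itself never handles the password.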
| Pass Password to runas from Python | I need to run a file as another user without it prompting for a password, from my script. How is this done?
| [
"There's an executable program called SANUR.EXE that's made for just this kind of situation: you can use it to pipe in the password on the command-line, like this: runas /user:domain\\username cmd.exe | sanur mysekritpassword.\n",
"Have the user add whatever it is as a scheduled task with a specific name but no s... | [
2,
0
] | [] | [] | [
"passwords",
"python",
"runas",
"windows"
] | stackoverflow_0004011245_passwords_python_runas_windows.txt |
Q:
GAE: ValueError: insecure string pickle
I'm having trouble unpickling objects from Google App Engine. I am running Windows 7. Here is the procedure:
Create a CSV with one of the fields being pickle.dumps([[('CS', 2110), ('CS', 3300), ('CS', 3140)]]), or some similar argument.
The CSV looks something like this:
INFO,2210,"CS 2110, 3300, 3140","(lp0
(lp1
(S'CS'
p2
I2110
tp3
a(g2
I3300
tp4
a(g2
I3140
tp5
aa."
CS,3110,CS 2110 or equivalent experience,"(lp0
(lp1
(S'CS'
p2
I2110
tp3
aa."
MSE,4102,"MATH 2210, 2230, 2310, or 2940","(lp0
(lp1
(S'MATH'
p2
I2210
tp3
a(g2
I2230
tp4
a(g2
I2310
tp5
aa(lp6
(g2
I2940
tp7
aa."
(Yes, those are \ns produced by pickle.dumps())
Load this file into the google app engine devserver:
appcfg.py upload_data --config_file="DataLoader.py" --filename="pre_req_data.csv" --kind=Course --url=http://localhost:8083/remote_api "appdir"
Course model:
class Course(db.Model):
dept_code = db.StringProperty()
number = db.IntegerProperty()
raw_pre_reqs = db.StringProperty(multiline=True)
original_description = db.StringProperty()
def getPreReqs(self):
pickle.loads(str(self.raw_pre_reqs))
DataLoader.py:
class CourseLoader(bulkloader.Loader):
def __init__(self):
bulkloader.Loader.__init__(self, 'Course',
[('dept_code', str),
('number', int),
('original_description', str),
('raw_pre_reqs', str)
])
loaders = [CourseLoader]
Confirm that the data is successfully loaded:
Try to unpickle:
class MainHandler(webapp.RequestHandler):
def get(self):
self.writeOut('cock!')
self.writeOut('\n')
courses = Course().all()
for c in courses:
self.writeOut("%s => %s" % (c.raw_pre_reqs, c.getPreReqs()))
def writeOut(self, string):
self.response.out.write(string)
Observe error:
Traceback (most recent call last):
File "C:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 511, in __call__
handler.get(*groups)
File "main.py", line 30, in get
self.writeOut("%s => %s" % (c.raw_pre_reqs, c.getPreReqs()))
File "src\Models.py", line 17, in getPreReqs
pickle.loads(str(self.raw_pre_reqs))
File "C:\Python26\lib\pickle.py", line 1374, in loads
return Unpickler(file).load()
File "C:\Python26\lib\pickle.py", line 858, in load
dispatch[key](self)
File "C:\Python26\lib\pickle.py", line 966, in load_string
raise ValueError, "insecure string pickle"
ValueError: insecure string pickle
What am I doing wrong here?
A:
Pickle is a binary format, and CSV isn't binary-safe. You need to encode your pickle - say, using base64.b64encode - if you want to transport it inside a text format.
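A minimal sketch of that approach (Python 3 syntax; under Python 2 the `.decode('ascii')` step is unnecessary): base64-encode the pickle before writing it into the CSV field, and decode it again in `getPreReqs()`.

```python
import base64
import pickle

pre_reqs = [[('CS', 2110), ('CS', 3300), ('CS', 3140)]]

# Encode: one ASCII token with no newlines, safe to embed in a CSV field.
raw = base64.b64encode(pickle.dumps(pre_reqs)).decode('ascii')

# Decode: what getPreReqs() would do with the stored property.
restored = pickle.loads(base64.b64decode(raw))
```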
A:
Pickle can be a binary format, but by default it is completely ASCII safe(protocol 0). Read the pickle docs for specifics: pickle.dump.
It will usually have line breaks so you have to take that into consideration when using a line-based format such as CSV.
If you are reading someone else's pickle, they may have used the binary protocol, but the output you pasted looks like normal pickling.
| GAE: ValueError: insecure string pickle | I'm having trouble unpickling objects from Google App Engine. I am running Windows 7. Here is the procedure:
Create a CSV with one of the fields being pickle.dumps([[('CS', 2110), ('CS', 3300), ('CS', 3140)]]), or some similar argument.
The CSV looks something like this:
INFO,2210,"CS 2110, 3300, 3140","(lp0
(lp1
(S'CS'
p2
I2110
tp3
a(g2
I3300
tp4
a(g2
I3140
tp5
aa."
CS,3110,CS 2110 or equivalent experience,"(lp0
(lp1
(S'CS'
p2
I2110
tp3
aa."
MSE,4102,"MATH 2210, 2230, 2310, or 2940","(lp0
(lp1
(S'MATH'
p2
I2210
tp3
a(g2
I2230
tp4
a(g2
I2310
tp5
aa(lp6
(g2
I2940
tp7
aa."
(Yes, those are \ns produced by pickle.dumps())
Load this file into the google app engine devserver:
appcfg.py upload_data --config_file="DataLoader.py" --filename="pre_req_data.csv" --kind=Course --url=http://localhost:8083/remote_api "appdir"
Course model:
class Course(db.Model):
dept_code = db.StringProperty()
number = db.IntegerProperty()
raw_pre_reqs = db.StringProperty(multiline=True)
original_description = db.StringProperty()
def getPreReqs(self):
pickle.loads(str(self.raw_pre_reqs))
DataLoader.py:
class CourseLoader(bulkloader.Loader):
def __init__(self):
bulkloader.Loader.__init__(self, 'Course',
[('dept_code', str),
('number', int),
('original_description', str),
('raw_pre_reqs', str)
])
loaders = [CourseLoader]
Confirm that the data is successfully loaded:
Try to unpickle:
class MainHandler(webapp.RequestHandler):
def get(self):
self.writeOut('cock!')
self.writeOut('\n')
courses = Course().all()
for c in courses:
self.writeOut("%s => %s" % (c.raw_pre_reqs, c.getPreReqs()))
def writeOut(self, string):
self.response.out.write(string)
Observe error:
Traceback (most recent call last):
File "C:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 511, in __call__
handler.get(*groups)
File "main.py", line 30, in get
self.writeOut("%s => %s" % (c.raw_pre_reqs, c.getPreReqs()))
File "src\Models.py", line 17, in getPreReqs
pickle.loads(str(self.raw_pre_reqs))
File "C:\Python26\lib\pickle.py", line 1374, in loads
return Unpickler(file).load()
File "C:\Python26\lib\pickle.py", line 858, in load
dispatch[key](self)
File "C:\Python26\lib\pickle.py", line 966, in load_string
raise ValueError, "insecure string pickle"
ValueError: insecure string pickle
What am I doing wrong here?
| [
"Pickle is a binary format, and CSV isn't binary-safe. You need to encode your pickle - say, using base64.b64encode - if you want to transport it inside a text format.\n",
"Pickle can be a binary format, but by default it is completely ASCII safe(protocol 0). Read the pickle docs for specifics: pickle.dump.\nIt ... | [
4,
2
] | [] | [] | [
"google_app_engine",
"pickle",
"python"
] | stackoverflow_0002963602_google_app_engine_pickle_python.txt |
Q:
catching a broken socket in python
I'm having problems detecting a broken socket when a broken pipe exception occurs. See the below code for an example:
The Server:
import errno, select, socket, time, SocketServer
class MetaServer(object):
def __init__(self):
self.server = Server(None, Handler, bind_and_activate=False)
def run(self, sock, addr):
rfile = sock.makefile('rb', 1)
self.server.process_request(sock, addr)
while 1:
r, _, _ = select.select([rfile], [], [], 1.0)
if r:
print 'Got %s' % rfile.readline()
else:
print 'nothing to read'
class Server(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
allow_reuse_address = True
daemon_threads = True
class Handler(SocketServer.StreamRequestHandler):
def handle(self):
print 'connected!'
try:
while 1:
self.wfile.write('testing...')
time.sleep(1)
except socket.error as e:
if e.errno == errno.EPIPE:
print 'Broken pipe!'
self.finish()
self.request.close()
if __name__ == '__main__':
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('127.0.0.1', 8081))
s.listen(5)
ms = MetaServer()
while 1:
client, address = s.accept()
ms.run(client, address)
The Client:
import select, socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('127.0.0.1', 8081))
while 1:
r, _, _ = select.select([s], [], [], 1.0)
if not r:
continue
msg = s.recv(1024)
print 'Got %s' % (msg,)
Now, if I run the server and client, all is well, and I get a "nothing is read" message every second. As soon as I CTRL-C out of the client, the server goes crazy and starts to "read" from what should be a busted socket, dumping a lot of "Got " messages.
Is there some way to detect this broken socket in the MetaServer.run() function to avoid the above said behavior?
A:
Yes, that's something which is not really in the documentation but old Un*x behavior: You need to abort when you get an empty string.
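A minimal demonstration of that rule using a local socketpair (Python 3 syntax): once the peer closes, the socket stays readable forever, but reads return an empty string instead of raising. The fix in `MetaServer.run()` is therefore to check the result of `rfile.readline()` and break out of the loop when it is empty.

```python
import socket

server_side, client_side = socket.socketpair()
client_side.close()               # simulate the client hitting CTRL-C

data = server_side.recv(1024)     # does not block and does not raise...
# ...it returns the empty string, which means the peer is gone.
server_side.close()
```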
| catching a broken socket in python | I'm having problems detecting a broken socket when a broken pipe exception occurs. See the below code for an example:
The Server:
import errno, select, socket, time, SocketServer
class MetaServer(object):
def __init__(self):
self.server = Server(None, Handler, bind_and_activate=False)
def run(self, sock, addr):
rfile = sock.makefile('rb', 1)
self.server.process_request(sock, addr)
while 1:
r, _, _ = select.select([rfile], [], [], 1.0)
if r:
print 'Got %s' % rfile.readline()
else:
print 'nothing to read'
class Server(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
allow_reuse_address = True
daemon_threads = True
class Handler(SocketServer.StreamRequestHandler):
def handle(self):
print 'connected!'
try:
while 1:
self.wfile.write('testing...')
time.sleep(1)
except socket.error as e:
if e.errno == errno.EPIPE:
print 'Broken pipe!'
self.finish()
self.request.close()
if __name__ == '__main__':
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('127.0.0.1', 8081))
s.listen(5)
ms = MetaServer()
while 1:
client, address = s.accept()
ms.run(client, address)
The Client:
import select, socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('127.0.0.1', 8081))
while 1:
r, _, _ = select.select([s], [], [], 1.0)
if not r:
continue
msg = s.recv(1024)
print 'Got %s' % (msg,)
Now, if I run the server and client, all is well, and I get a "nothing is read" message every second. As soon as I CTRL-C out of the client, the server goes crazy and starts to "read" from what should be a busted socket, dumping a lot of "Got " messages.
Is there some way to detect this broken socket in the MetaServer.run() function to avoid the above said behavior?
| [
"Yes, that's something which is not really in the documentation but old Un*x behavior: You need to abort when you get an empty string.\n"
] | [
3
] | [] | [] | [
"python",
"sockets"
] | stackoverflow_0004016920_python_sockets.txt |
Q:
Implement a Web based Client that interacts with a TCP Server
EDIT: Question updated. Thanks, Slott.
I have a TCP Server in Python.
It is a server with asynchronous behaviour.
The message format is Binary Data.
Currently I have a python client that interacts with the code.
What I want to be able to do eventually implement a Web based Front End to this client.
I just wanted to know what the correct design for such an application would be.
A:
Start with any WSGI-based web server. werkzeug is a choice.
The Asynchronous TCP/IP is a seriously complicated problem. HTTP is synchronous. So using the synchronous web server presenting some asynchronous data is always a problem. Always.
The best you can do is to buffer things and have two processes in your web application.
TCP/IP process that collects data from the remove server and buffers it in a file (or files) somewhere.
WSGI web process which handles GET/POST processing.
GET requests will fetch some or all of the buffer and display it.
POST requests will send a message to the TCP/IP server.
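A toy, framework-free sketch of the WSGI half of that split (names here are illustrative; an in-memory buffer stands in for the file the TCP/IP process would write):

```python
from io import BytesIO

# Stand-in for the file the TCP/IP process appends incoming data to.
buffer_file = BytesIO(b'message from the backend server\n')

def app(environ, start_response):
    """Minimal WSGI app: GET shows the buffer; POST would enqueue a message."""
    if environ['REQUEST_METHOD'] == 'GET':
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [buffer_file.getvalue()]
    start_response('202 Accepted', [('Content-Type', 'text/plain')])
    return [b'queued\n']  # real code would hand the body to the TCP/IP process

# Drive the app directly, the way any WSGI server (werkzeug included) would.
captured = {}
def start_response(status, headers):
    captured['status'] = status

body = b''.join(app({'REQUEST_METHOD': 'GET'}, start_response))
```

The key point is that the WSGI process never talks to the TCP server synchronously; it only reads and writes the shared buffer.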
A:
For Web-based, talk HTTP. Use JSON or XML as data formats.
Be standards-compliant and make use of the vast number of libraries out there. Don't reinvent the wheel. This way you have less headaches in the long run.
A:
If you need to maintain a connection to a backend server across multiple HTTP requests, Twisted's HTTP server is an ideal choice, since it's built to manage multiple connections easily.
| Implement a Web based Client that interacts with a TCP Server | EDIT: Question updated. Thanks, Slott.
I have a TCP Server in Python.
It is a server with asynchronous behaviour.
The message format is Binary Data.
Currently I have a python client that interacts with the code.
What I want to be able to do eventually implement a Web based Front End to this client.
I just wanted to know what the correct design for such an application would be.
| [
"Start with any WSGI-based web server. werkzeug is a choice.\nThe Asynchronous TCP/IP is a seriously complicated problem. HTTP is synchronous. So using the synchronous web server presenting some asynchronous data is always a problem. Always.\nThe best you can do is to buffer things and have two processes in you... | [
1,
0,
0
] | [] | [] | [
"python",
"web_applications"
] | stackoverflow_0004017883_python_web_applications.txt |
Q:
implementing callback between Python and C
I have wrapped some C code using SWIG to use it as a python library.
Within this framework, some python code I have written calls a C function, which returns a string. However, for creating the string, the C function requires a ranking, the generation of which I have implemented in Python. How would I go about implementing this using callbacks?
I see this as the following multistep process.
1) Python instantiates a C object:
import test_Objects #test_Objects is the C file that has been wrapped
C = test_objects.my_class()
2) Call the relevant method on the my_class object, which return a string:
txt_1 = "string1"
txt_2 = "string2"
result = C.sorted_string(txt_1, txt_2)
2.1) I want sorted_string to call the following python function, which returns a sorted list.
def sorted_string([my_list]):
.....
.....
return your_list
2.2) Sorted_string would make use of the list to generate the result.
How would I implement step 2.1?
A:
The PyObject_Call*() functions can be used to call a Python function or method from C.
A:
It is not specific to your question, but here is an example that I asked: swig-passing-argument-to-python-callback-function
Hope this is helpful.
| implementing callback between Python and C | I have wrapped some C code using SWIG to use it as a python library.
Within this framework, some python code I have written calls a C function, which returns a string. However, for creating the string, the C function requires a ranking, the generation of which I have implemented in Python. How would I go about implementing this using callbacks?
I see this as the following multistep process.
1) Python instantiates a C object:
import test_Objects #test_Objects is the C file that has been wrapped
C = test_objects.my_class()
2) Call the relevant method on the my_class object, which return a string:
txt_1 = "string1"
txt_2 = "string2"
result = C.sorted_string(txt_1, txt_2)
2.1) I want sorted_string to call the following python function, which returns a sorted list.
def sorted_string([my_list]):
.....
.....
return your_list
2.2) Sorted_string would make use of the list to generate the result.
How would I implement step 2.1?
| [
"The PyObject_Call*() functions can be used to call a Python function or method from C.\n",
"It is not specific to your question, but here is an example that I asked: swig-passing-argument-to-python-callback-function\nHope this is helpful.\n"
] | [
0,
0
] | [] | [] | [
"c",
"callback",
"python",
"swig"
] | stackoverflow_0004016528_c_callback_python_swig.txt |
Q:
Passing a list of values to a function
Sorry for such a silly question, but sitting in front of the comp for many hours makes my head overheated, in other words — I'm totally confused.
My task is to define a function that takes a list of words and returns something.
How can I define a function that will take a list of words?
def function(list_of_words):
do something
When running this script in Python IDLE, we would presumably write something like this:
>>> def function('this', 'is', 'a', 'list', 'of', 'words')
But Python raises an error saying the function takes one argument while six were given.
I guess I should give my list a variable name, i.e. list_of_words = ['this', 'is', 'a', 'list', 'of', 'words'], but ... how?
A:
Use code:
def function(*list_of_words):
do something
list_of_words will be a tuple of arguments passed to a function.
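The two calling conventions side by side, as a quick sketch:

```python
def takes_list(list_of_words):
    # Expects a single argument that is already a list.
    return len(list_of_words)

def takes_varargs(*list_of_words):
    # Collects any number of positional arguments into a tuple.
    return len(list_of_words)

n_from_list = takes_list(['this', 'is', 'a', 'list', 'of', 'words'])
n_from_varargs = takes_varargs('this', 'is', 'a', 'list', 'of', 'words')
```

Calling `takes_list('this', 'is', ...)` with six arguments is exactly what produces the "takes one argument, six given" error.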
A:
Simply call your function with:
function( ['this', 'is', 'a', 'list', 'of', 'words'] )
This is passing a list as an argument.
A:
It's simple:
list_of_words = ['this', 'is', 'a', 'list', 'of', 'words']
def function(list_of_words):
do_something
That's all there is to it.
A:
>>> def function(list_of_words):
... print( list_of_words )
...
>>>
>>> function('this', 'is', 'a', 'list', 'of', 'words')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: function() takes exactly 1 argument (6 given)
>>> function(['this', 'is', 'a', 'list', 'of', 'words'])
['this', 'is', 'a', 'list', 'of', 'words']
Works for me. What's going wrong for you? Can you be specific on what doesn't work for you?
| Passing a list of values to a function | Sorry for such a silly question, but sitting in front of the comp for many hours makes my head overheated, in other words — I'm totally confused.
My task is to define a function that takes a list of words and returns something.
How can I define a function that will take a list of words?
def function(list_of_words):
do something
When running this script in Python IDLE, we would presumably write something like this:
>>> def function('this', 'is', 'a', 'list', 'of', 'words')
But Python raises an error saying the function takes one argument while six were given.
I guess I should give my list a variable name, i.e. list_of_words = ['this', 'is', 'a', 'list', 'of', 'words'], but ... how?
| [
"Use code:\ndef function(*list_of_words):\n do something\n\nlist_of_words will be a tuple of arguments passed to a function.\n",
"Simply call your function with:\nfunction( ['this', 'is', 'a', 'list', 'of', 'words'] )\n\nThis is passing a list as an argument.\n",
"It's simple:\nlist_of_words = ['this', 'is'... | [
9,
6,
2,
1
] | [] | [] | [
"python"
] | stackoverflow_0004018178_python.txt |
Q:
How can we modify data that is in a shelve?
I have opened a shelve using the following code:
#!/usr/bin/python
import shelve #Module:Shelve is imported to achieve persistence
Accounts = 0
Victor = {'Name':'Victor Hughes','Email':'victor@yahoo.com','Deposit':65000,'Accno':'SA456178','Acctype':'Savings'}
Beverly = {'Name':'Beverly Dsilva','Email':'bevd@hotmail.com','Deposit':23000,'Accno':'CA432178','Acctype':'Current'}
def open_shelf(name='shelfile.shl'):
global Accounts
Accounts = shelve.open(name) #Accounts = {}
Accounts['Beverly']= Beverly
Accounts['Victor']= Victor
def close_shelf():
Accounts.close()
I am able to append values to the shelve but unable to modify the values.
I have defined a function Deposit() from which I would like to modify the data present in the shelve.But it gives me the following error:
Traceback (most recent call last):
File "./functest.py", line 16, in <module>
Deposit()
File "/home/pavitar/Software-Development/Python/Banking/Snippets/Depositfunc.py", line 18, in Deposit
for key in Accounts:
TypeError: 'int' object is not iterable
Here is my Function:
#!/usr/bin/python
import os #This module is imported so as to use clear function in the while-loop
from DB import * #Imports the data from database DB.py
def Deposit():
while True:
os.system("clear") #Clears the screen once the loop is re-invoked
input = raw_input('\nEnter the A/c type: ')
flag=0
for key in Accounts:
if Accounts[key]['Acctype'].find(input) != -1:
amt = input('\nAmount of Deposit: ')
flag+=1
Accounts[key]['Deposit'] += amt
if flag == 0:
print "NO such Account!"
if __name__ == '__main__':
open_shelf()
Deposit()
close_shelf()
I'm new to Python. Please help, and correct me if I'm wrong. I need someone to explain a bit how this code is supposed to work; I'm confused.
A:
Firstly, don't use a global for Accounts, rather pass it back and forth. Using the global caused your error. Like this:
def open_shelf(name='shelfile.shl'):
Accounts = shelve.open(name) #Accounts = {}
...
return Accounts
def close_shelf(Accounts):
Accounts.close()
def Deposit(Accounts):
...
if __name__ == '__main__':
Accounts = open_shelf()
Deposit(Accounts)
close_shelf(Accounts)
Secondly, don't redefine built-in functions. In Deposit(), you assign the result of raw_input to a variable named input:
input = raw_input('\nEnter the A/c type: ')
Four lines later, you try to use the built-in input function:
amt = input('\nAmount of Deposit: ')
But that won't work because input has been redefined!
Thirdly, when iterating over shelved items, follow the pattern of 1) grab shelved item, 2) mutate item, 3) write mutated item back to shelf. Like so:
for key, acct in Accounts.iteritems(): # grab a shelved item
if acct['Acctype'].find(input) != -1:
amt = input('\nAmount of Deposit: ')
flag+=1
acct['Deposit'] += amt # mutate the item
Accounts[key] = acct # write item back to shelf
(This third bit of advice was tweaked from hughdbrown's answer.)
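That grab/mutate/write-back pattern can be sketched end to end with a throwaway shelf (the temporary path here is illustrative; passing `writeback=True` to `shelve.open` is the alternative, at the cost of extra memory):

```python
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'accounts')

db = shelve.open(path)
db['Victor'] = {'Acctype': 'Savings', 'Deposit': 65000}

acct = db['Victor']        # 1) grab the shelved item
acct['Deposit'] += 1000    # 2) mutate the in-memory copy
db['Victor'] = acct        # 3) write it back so the shelf records the change
db.close()

# Reopen to prove the mutation was persisted.
db = shelve.open(path)
final_deposit = db['Victor']['Deposit']
db.close()
```

Skipping step 3 is the classic shelve pitfall: the in-memory copy changes, but the shelf on disk does not.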
A:
I think you'd have more luck like this:
for key, val in Accounts.iteritems():
if val['Acctype'].find(input) != -1:
amt = input('\nAmount of Deposit: ')
flag+=1
val['Deposit'] += amt
Accounts[key] = val
| How can we modify data that is in a shelve? | I have opened a shelve using the following code:
#!/usr/bin/python
import shelve #Module:Shelve is imported to achieve persistence
Accounts = 0
Victor = {'Name':'Victor Hughes','Email':'victor@yahoo.com','Deposit':65000,'Accno':'SA456178','Acctype':'Savings'}
Beverly = {'Name':'Beverly Dsilva','Email':'bevd@hotmail.com','Deposit':23000,'Accno':'CA432178','Acctype':'Current'}
def open_shelf(name='shelfile.shl'):
global Accounts
Accounts = shelve.open(name) #Accounts = {}
Accounts['Beverly']= Beverly
Accounts['Victor']= Victor
def close_shelf():
Accounts.close()
I am able to append values to the shelve but unable to modify the values.
I have defined a function Deposit() from which I would like to modify the data present in the shelve.But it gives me the following error:
Traceback (most recent call last):
File "./functest.py", line 16, in <module>
Deposit()
File "/home/pavitar/Software-Development/Python/Banking/Snippets/Depositfunc.py", line 18, in Deposit
for key in Accounts:
TypeError: 'int' object is not iterable
Here is my Function:
#!/usr/bin/python
import os #This module is imported so as to use clear function in the while-loop
from DB import * #Imports the data from database DB.py
def Deposit():
while True:
os.system("clear") #Clears the screen once the loop is re-invoked
input = raw_input('\nEnter the A/c type: ')
flag=0
for key in Accounts:
if Accounts[key]['Acctype'].find(input) != -1:
amt = input('\nAmount of Deposit: ')
flag+=1
Accounts[key]['Deposit'] += amt
if flag == 0:
print "NO such Account!"
if __name__ == '__main__':
open_shelf()
Deposit()
close_shelf()
I'm new to Python. Please help, and correct me if I'm wrong. I need someone to explain a bit how this code is supposed to work; I'm confused.
| [
"Firstly, don't use a global for Accounts, rather pass it back and forth. Using the global caused your error. Like this:\ndef open_shelf(name='shelfile.shl'):\n Accounts = shelve.open(name) #Accounts = {}\n ...\n return Accounts\n\ndef close_shelf(Accounts):\n Accounts.close()\n\n\ndef Deposi... | [
4,
3
] | [] | [] | [
"dictionary",
"python",
"shelve"
] | stackoverflow_0004017733_dictionary_python_shelve.txt |
Q:
Is Zpsycopg2 compatible with zope 2?
I have Zope 2.11 installed. Now I want to use a PostgreSQL 7.4.13 DB with it, so I know I need to install the psycopg2 database adapter. Can anyone tell me whether psycopg2 is compatible with Zope 2?
A:
Yes, you can use psycopg2 with Zope2.
Just install it in your Python with easy_install or setup.py. You will also need a matching ZPsycopgDA Product in Zope. You find the ZPsycopgDA folder in the psycopg2 source distribution tarball.
| Is Zpsycopg2 compatible with zope 2? | I have Zope 2.11 installed. Now I want to use a PostgreSQL 7.4.13 DB with it, so I know I need to install the psycopg2 database adapter. Can anyone tell me whether psycopg2 is compatible with Zope 2?
| [
"Yes, you can use psycopg2 with Zope2. \nJust install it in your Python with easy_install or setup.py. You will also need a matching ZPsycopgDA Product in Zope. You find the ZPsycopgDA folder in the psycopg2 source distribution tarball.\n"
] | [
1
] | [] | [] | [
"database",
"python",
"zope"
] | stackoverflow_0003725699_database_python_zope.txt |
Q:
Encryption: simulate SSL in javascript and python
Because China's Great Firewall has blocked Google App Engine's HTTPS port, I want to simulate a Secure Socket Layer in JavaScript and Python so that my users' information cannot be captured by the ISPs and the GFW.
My plan:
Shake hands:
Browser request server, server generate a encrypt key k1, and decrypt key k2, send k1 to browser.
Browser generate a encrypt key k3, and decrypt key k4, send k3 to server.
Browse:
During the session, browser encrypt data with k1 and send to server, server decrypt with k2. server encrypt data with k3 and response to browser, browser decrypt with k4.
Please figure out my mistake.
If it's right, my question is
How can I generate a key pair in JavaScript and Python? Are there libraries for this?
How can I encrypt and decrypt data in JavaScript and Python? Are there libraries for this?
A:
You have a fundamental problem in that a JavaScript implementation of SSL would have no built-in root certificates to establish trust, which makes it impossible to prevent a man-in-the-middle attack. Any certificates you deliver from your site, including a root certificate, could be intercepted and replaced by a spy.
Note that this is a fundamental limitation, not a peculiarity of the way SSL works. All cryptographic security relies on establishing a shared secret. The root certificates deployed with mainstream browsers provide the entry points to a trust network established by certifying authorities (CAs) that enable you to establish the shared secret with a known third party. These certificates are not, AFAIK, directly accessible to JavaScript code. They are only used to establish secure (e.g., https) connections.
A:
You can't stop the men in the middle from trapping your packets/messages, especially if they don't really care if you find out. What you can do is encrypt your messages so that trapping them does not enable them to read what you're sending and receiving. In theory that's fine, but in practice you can't do modern crypto by hand even with the keys: you need to transfer some software too, and that's where it gets much more awkward.
You want to have the client's side of the crypto software locally, or at least enough to be able to check whether a digital signature of the crypto software is correct. Digital signatures are very difficult to forge. Deliver signed code, check its signature, and if the signature validates against a public key that you trust (alas, you'll have to transfer that out of band) then you know that the code (plus any CA certificates – trust roots – sent along with it) can be trusted to work as desired. The packets can then go over plain HTTP; they'll either get to where they're meant to or be intercepted, but either way nobody but the intended recipient will be able to read them. The only advantage of SSL is that it builds virtually all of this stuff for you and makes it easy.
I have no idea how practical it is to do this all in Javascript. Obviously it can do it – it's a Turing-complete language, it has access to all the requisite syscalls – but it could be stupidly expensive. It might be easier to think in terms of using GPG…
(Hiding the fact from the government that you are communicating at all is a different problem entirely.)
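The shared-secret idea discussed in the answer above can be illustrated with a toy Diffie–Hellman-style exchange using only Python's built-in pow(). The parameters here are illustrative assumptions only — real key exchange uses primes of 2048+ bits and authenticated parameters, which is exactly the trust problem the answers describe:

```python
import secrets

# Illustrative-only parameters: far too small for real security.
P = 0xFFFFFFFB  # a prime modulus (largest prime below 2**32)
G = 5           # generator

# Each side picks a private exponent and publishes G**x mod P.
a_priv = secrets.randbelow(P - 2) + 1
b_priv = secrets.randbelow(P - 2) + 1
a_pub = pow(G, a_priv, P)
b_pub = pow(G, b_priv, P)

# Both sides derive the same shared secret from the other's public value,
# since (G**a)**b == (G**b)**a (mod P).
a_shared = pow(b_pub, a_priv, P)
b_shared = pow(a_pub, b_priv, P)
assert a_shared == b_shared
```

Note that this sketch does nothing to authenticate the two parties, so a man in the middle could still run the exchange with each side separately — the point the answers above make about needing a trust root.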
A:
The other answers here are correct: You won't be able to securely deliver the JavaScript that is going to be running on the client. However, if you just want to look into this stuff some more anyway, check out the opensource project Forge. It has an SSL/TLS implementation in JavaScript and a simple Python SSL server:
http://github.com/digitalbazaar/forge/blob/master/README
If you want to read up some more on its uses:
http://digitalbazaar.com/2010/07/20/javascript-tls-1/
http://digitalbazaar.com/2010/07/20/javascript-tls-2/
A:
There's a big problem, if security really is a big concern: Your algorithm is going to be transfered unsecured. Can you trust the client at all? Can the client trust the server at all?
| Encryption: simulate SSL in javascript and python | Because China's Great Firewall has blocked Google App Engine's HTTPS port, I want to simulate a Secure Socket Layer in JavaScript and Python so that my users' information cannot be captured by ISPs and the GFW.
My plan:
Shake hands:
The browser contacts the server; the server generates an encryption key k1 and a decryption key k2, and sends k1 to the browser.
The browser generates an encryption key k3 and a decryption key k4, and sends k3 to the server.
Browse:
During the session, the browser encrypts data with k1 and sends it to the server, which decrypts it with k2. The server encrypts its responses with k3, and the browser decrypts them with k4.
Please point out any mistakes in this plan.
If the plan is sound, my questions are:
How do I generate a key pair in JavaScript and Python? Are there libraries for this?
How do I encrypt and decrypt data in JavaScript and Python? Are there libraries for this?
| [
"You have a fundamental problem in that a JavaScript implementation of SSL would have no built-in root certificates to establish trust, which makes it impossible to prevent a man-in-the-middle attack. Any certificates you deliver from your site, including a root certificate, could be intercepted and replaced by a s... | [
2,
1,
1,
0
] | [] | [] | [
"javascript",
"python",
"ssl"
] | stackoverflow_0003977274_javascript_python_ssl.txt |
Q:
convert string to datetime object
I'd like to convert this string into a datetime object:
Wed Oct 20 16:35:44 +0000 2010
Is there a simple way to do this? Or do I have to write a RE to parse the elements, convert Oct to 10 and so forth?
EDIT:
strptime is great. However, with
datetime.strptime(date_str, "%a %b %d %H:%M:%S %z %Y")
I get
ValueError: 'z' is a bad directive in format '%a %b %d %H:%M:%S %z %Y'
even though %z seems to be correct.
EDIT2:
The %z tag appears to not be supported. See http://bugs.python.org/issue6641.
I got around it by using a timedelta object to modify the time appropriately.
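The timedelta workaround mentioned in EDIT2 can be sketched roughly like this. The helper name and the whitespace-based splitting are my own assumptions, not the original poster's code — the idea is simply to strip the "+0000"-style offset that %z could not parse on older Pythons and apply it manually:

```python
from datetime import datetime, timedelta

def parse_twitter_date(date_str):
    # Hypothetical helper: split out the "+0000"-style offset that
    # strptime's %z could not handle, then apply it with a timedelta.
    parts = date_str.split()
    offset_str = parts[4]                      # e.g. "+0000"
    rest = " ".join(parts[:4] + parts[5:])     # "Wed Oct 20 16:35:44 2010"
    dt = datetime.strptime(rest, "%a %b %d %H:%M:%S %Y")
    sign = 1 if offset_str[0] == "+" else -1
    offset = timedelta(hours=int(offset_str[1:3]),
                       minutes=int(offset_str[3:5]))
    return dt - sign * offset                  # normalize to UTC

print(parse_twitter_date("Wed Oct 20 16:35:44 +0000 2010"))
# -> 2010-10-20 16:35:44
```

On current Pythons, %z is supported and datetime.strptime(date_str, "%a %b %d %H:%M:%S %z %Y") works directly.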
A:
No RE needed. Try this:
from dateutil import parser
yourDate = parser.parse(yourString)
for "Wed Oct 20 16:35:44 +0000 2010" returns datetime.datetime(2010, 10, 20, 16, 35, 44, tzinfo=tzutc())
A:
http://docs.python.org/library/time.html#time.strptime
A:
I'm pretty sure you're able to do that using datetime.strptime.
From the docs:
datetime.strptime(date_string, format)
Return a datetime corresponding to date_string, parsed according to format. This is equivalent to datetime(*(time.strptime(date_string, format)[0:6])). ValueError is raised if the date_string and format can’t be parsed by time.strptime() or if it returns a value which isn’t a time tuple.
A:
Depending on where that string originates, you may be able to use datetime.strptime to parse it. The only problem is that strptime relies on some platform-specific things, so if that string needs to be able to come from arbitrary other systems, and all the days and months aren't defined exactly the same (Jun or June), you may have troubles.
A:
time.strptime should do it:
http://docs.python.org/library/time.html?highlight=time#time.strptime
| convert string to datetime object | I'd like to convert this string into a datetime object:
Wed Oct 20 16:35:44 +0000 2010
Is there a simple way to do this? Or do I have to write a RE to parse the elements, convert Oct to 10 and so forth?
EDIT:
strptime is great. However, with
datetime.strptime(date_str, "%a %b %d %H:%M:%S %z %Y")
I get
ValueError: 'z' is a bad directive in format '%a %b %d %H:%M:%S %z %Y'
even though %z seems to be correct.
EDIT2:
The %z tag appears to not be supported. See http://bugs.python.org/issue6641.
I got around it by using a timedelta object to modify the time appropriately.
| [
"No RE needed. Try this:\nfrom dateutil import parser\n\nyourDate = parser.parse(yourString) \n\nfor \"Wed Oct 20 16:35:44 +0000 2010\" returns datetime.datetime(2010, 10, 20, 16, 35, 44, tzinfo=tzutc())\n",
"http://docs.python.org/library/time.html#time.strptime\n",
"I'm pretty sure you're able to do that usi... | [
28,
2,
2,
2,
1
] | [] | [] | [
"datetime",
"python"
] | stackoverflow_0004018730_datetime_python.txt |
Q:
Once I can save this string as image, other times I can't
Ok. Long story short.
My camera has a method which takes a photo and this is what it returns:
[160, 120, 3, 10, 1287848024, 96181, 'super long image string']
I am able to decode the string and save it as image right after I call the method like this:
for i in range(0, 10):
image = camProxy.getImageRemote(nameId)
imageWidth = image[0]
imageHeight = image[1]
imageByteArray = image[6]
im = Image.fromstring("YCbCr",(imageWidth,imageHeight),imageByteArray)
fileName = str(time.time())+".jpg"
im.save(fileName, "JPEG")
This works nicely and I can open the saved images.
However, if I just save the string into a txt file and later I want to load it and save as image like this:
f = open("rawImage.txt", "r")
data = f.readline()
f.close()
# save as image
im = Image.frombuffer("YCbCr",(160,120),data)
im.save("test.jpg", "JPEG")
What I get is almost completely green image.
Here is an example string which I keep having problems with:
http://richardknop.com/rawImage.txt
Here is a complete output of the getImageRemote() method of the camera for that image:
http://richardknop.com/log.txt
Anybody got ideas what could be wrong? Is this some issue related to encoding? All files are saved as ASCII but I have tried saving them all as UTF-8 as well.
EDIT:
How did I write the image to a file? I just redirected the output of the script:
python script.py > output.txt
And in the script I had:
print imageByteArray
A:
I got it working by changing the file mode from "r" to "rb".
Here's the working code:
import time
import Image
image_data = [160, 120, 3, 10, 1287848024, 96181, 'really long string from http://richardknop.com/log.txt']
imageWidth = image_data[0]
imageHeight = image_data[1]
imageByteArray = image_data[6]
fout = open("image_data.txt", "wb")
fout.write(imageByteArray)
fout.close()
fin = open("image_data.txt", 'rb')
image_string = fin.read()
fin.close()
im = Image.fromstring("YCbCr",(imageWidth,imageHeight),image_string)
fileName = str(time.time())+".jpg"
im.save(fileName, "JPEG")
I verified that you are correct, in that read and readline make no difference here, but I still advise using read, because that says what you mean.
Here's my original answer:
Change data = f.readline() to data = f.read(). read grabs the whole file, readline grabs just one line.
A:
Maybe you should read and write into your file using binary mode like this :
open('file_name', 'wb')
open('file_name', 'rb')
A:
Read in the data:
import Image
import ast
with open('rawImage.txt','r') as f:
raw_data=f.read()
with open('log.txt','r') as f:
log_data=f.read()
log_data=ast.literal_eval(log_data)
imageWidth=log_data[0]
imageHeight=log_data[1]
log_data=log_data[6]
Let's try to see if raw_data (from rawImage.txt) is the same string as
log_data (from log.txt). Oops: they're not the same length:
print(len(raw_data))
# 146843
print(len(log_data))
# 57600
Take a peek at the beginning of both strings. It appears raw_data has written 4 characters for '\x81' when a single character \x81 was intended.
print(list(raw_data[:10]))
# ['6', '}', '\\', 'x', '8', '1', '8', '}', '\\', 'x']
print(list(log_data[:10]))
# ['6', '}', '\x81', '8', '}', '\x81', '7', '\x90', '\x8a', '4']
This might have happened because rawImage.txt was opened in writing mode 'w' instead of 'wb'. The best solution is to write rawImage.txt using the right writing mode, as Steven Rumbalski does here.
But given this predicament, here is a way you can fix it:
raw_data_fixed=raw_data.decode('string_escape')
Now this works:
im = Image.fromstring("YCbCr",(imageWidth,imageHeight),raw_data_fixed)
im.show()
| Once I can save this string as image, other times I can't | Ok. Long story short.
My camera has a method which takes a photo and this is what it returns:
[160, 120, 3, 10, 1287848024, 96181, 'super long image string']
I am able to decode the string and save it as image right after I call the method like this:
for i in range(0, 10):
image = camProxy.getImageRemote(nameId)
imageWidth = image[0]
imageHeight = image[1]
imageByteArray = image[6]
im = Image.fromstring("YCbCr",(imageWidth,imageHeight),imageByteArray)
fileName = str(time.time())+".jpg"
im.save(fileName, "JPEG")
This works nicely and I can open the saved images.
However, if I just save the string into a txt file and later I want to load it and save as image like this:
f = open("rawImage.txt", "r")
data = f.readline()
f.close()
# save as image
im = Image.frombuffer("YCbCr",(160,120),data)
im.save("test.jpg", "JPEG")
What I get is almost completely green image.
Here is an example string which I keep having problems with:
http://richardknop.com/rawImage.txt
Here is a complete output of the getImageRemote() method of the camera for that image:
http://richardknop.com/log.txt
Anybody got ideas what could be wrong? Is this some issue related to encoding? All files are saved as ASCII but I have tried saving them all as UTF-8 as well.
EDIT:
How did I write the image to a file? I just redirected the output of the script:
python script.py > output.txt
And in the script I had:
print imageByteArray
| [
"I got it working by changing the file mode from \"r\" to \"rb\".\nHere's the working code:\nimport time\nimport Image\nimage_data = [160, 120, 3, 10, 1287848024, 96181, 'really long string from http://richardknop.com/log.txt']\nimageWidth = image_data[0]\nimageHeight = image_data[1]\nimageByteArray = image_data[6]... | [
6,
3,
1
] | [] | [] | [
"image_processing",
"python"
] | stackoverflow_0004017609_image_processing_python.txt |
Q:
Python: Is it bad style to give an argument the same name as the function?
Consider the following code:
def localize(value, localize=None):
# do something with the localize argument
The localize variable indicates whether the global localization setting should be respected or not. It is referred to by the same name through three layers of code. What's the lesser evil:
shadow the function name with the argument name, or
use a different name in this function than in all the rest of the code base, despite them having absolutely the same meaning?
The localize function doesn't use recursion, so not being able to call itself is not a problem.
/edit: changing the function name is out of the question, since it's public API. The only wiggle room is in the argument name.
A:
I'd say that's bad style. Instead of changing the function name you could change the parameter name. Perhaps you could use a name like locale or localization? A noun is probably a better choice than a verb anyway.
A:
What's more likely to happen, the caller is confused by having to pass in a different named argument, or someone will refactor it with recursion later? If you need to use recursion, you can always use a mutual recursion to get away from the shadowing scope.
A:
Nothing in PEP 8 forbids this practice, but I would suggest avoiding this kind of naming, even though it is technically unproblematic.
If you are part of a project, agree on a naming convention that prevents confusion between similarly named variables and functions. PEP 8 suggests appending an underscore to a variable name when it clashes with a reserved word; you could do the same in your case.
A:
Yes, it's bad form because it's confusing (2 meanings for one word).
I would rewrite it as a boolean, with True as the default:
def localize(value, use_global_setting=True):
...
| Python: Is it bad style to give an argument the same name as the function? | Consider the following code:
def localize(value, localize=None):
# do something with the localize argument
The localize variable indicates whether the global localization setting should be respected or not. It is referred to by the same name through three layers of code. What's the lesser evil:
shadow the function name with the argument name, or
use a different name in this function than in all the rest of the code base, despite them having absolutely the same meaning?
The localize function doesn't use recursion, so not being able to call itself is not a problem.
/edit: changing the function name is out of the question, since it's public API. The only wiggle room is in the argument name.
| [
"I'd say that's bad style. Instead of changing the function name you could change the parameter name. Perhaps you could use a name like locale or localization? A noun is probably a better choice than a verb anyway.\n",
"What's more likely to happen, the caller is confused by having to pass in a different named ar... | [
11,
1,
1,
1
] | [] | [] | [
"coding_style",
"python"
] | stackoverflow_0004018783_coding_style_python.txt |
Q:
Python urllib2 file upload problems
I'm currently trying to initiate a file upload with urllib2 and the urllib2_file library. Here's my code:
import sys
import urllib2_file
import urllib2
URL='http://aquate.us/upload.php'
d = [('uploaded', open(sys.argv[1:]))]
req = urllib2.Request(URL, d)
u = urllib2.urlopen(req)
print u.read()
I've placed this .py file in my My Documents directory and placed a shortcut to it in my Send To folder (the shortcut URL is ).
When I right click a file, choose Send To, and select Aquate (my python), it opens a command prompt for a split second and then closes it. Nothing gets uploaded.
I knew there was probably an error going on so I typed the code into CL python, line by line.
When I ran the u=urllib2.urlopen(req) line, I didn't get an error;
(screenshot: http://www.aquate.us/u/55245858877937182052.jpg)
instead, the cursor simply started blinking on a new line beneath that line. I waited a couple of minutes to see if something would happen but it just stayed like that. To get it to stop, I had to press ctrl+break.
What's up with this script?
Thanks in advance!
[Edit]
Forgot to mention -- when I ran the script without the request data (the file) it ran like a charm. Is it a problem with urllib2_file?
[edit 2]:
import MultipartPostHandler, urllib2, cookielib,sys
import win32clipboard as w
cookies = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies),MultipartPostHandler.MultipartPostHandler)
params = {"uploaded" : open("c:/cfoot.js") }
a=opener.open("http://www.aquate.us/upload.php", params)
text = a.read()
w.OpenClipboard()
w.EmptyClipboard()
w.SetClipboardText(text)
w.CloseClipboard()
That code works like a charm if you run it through the command line.
A:
If you're using Python 2.5 or newer, urllib2_file is both unnecessary and unsupported, so check which version you're using (and perhaps upgrade).
If you're using Python 2.3 or 2.4 (the only versions supported by urllib2_file), try running the sample code and see if you have the same problem. If so, there is likely something wrong either with your Python or urllib2_file installation.
EDIT:
Also, you don't seem to be using either of urllib2_file's two supported formats for POST data. Try using one of the following two lines instead:
d = ['uploaded', open(sys.argv[1:])]
## --OR-- ##
d = {'uploaded': open(sys.argv[1:])}
A:
First, there's a third way to run Python programs.
From cmd.exe, type python myprogram.py. You get a nice log. You don't have to type stuff one line at a time.
Second, check the urllib2 documentation. You'll need to look at urllib, also.
A Request requires a URL and a urlencoded encoded buffer of data.
data should be a buffer in the standard application/x-www-form-urlencoded format. The urllib.urlencode() function takes a mapping or sequence of 2-tuples and returns a string in this format.
You need to encode your data.
A:
If you're still on Python2.5, what worked for me was to download the code here:
http://peerit.blogspot.com/2007/07/multipartposthandler-doesnt-work-for.html
and save it as MultipartPostHandler.py
then use:
import urllib2, MultipartPostHandler
opener = urllib2.build_opener(MultipartPostHandler.MultipartPostHandler())
opener.open(url, {"file":open(...)})
or if you need cookies:
import urllib2, MultipartPostHandler, cookielib
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj), MultipartPostHandler.MultipartPostHandler())
opener.open(url, {"file":open(...)})
| Python urllib2 file upload problems | I'm currently trying to initiate a file upload with urllib2 and the urllib2_file library. Here's my code:
import sys
import urllib2_file
import urllib2
URL='http://aquate.us/upload.php'
d = [('uploaded', open(sys.argv[1:]))]
req = urllib2.Request(URL, d)
u = urllib2.urlopen(req)
print u.read()
I've placed this .py file in my My Documents directory and placed a shortcut to it in my Send To folder (the shortcut URL is ).
When I right click a file, choose Send To, and select Aquate (my python), it opens a command prompt for a split second and then closes it. Nothing gets uploaded.
I knew there was probably an error going on so I typed the code into CL python, line by line.
When I ran the u=urllib2.urlopen(req) line, I didn't get an error;
(screenshot: http://www.aquate.us/u/55245858877937182052.jpg)
instead, the cursor simply started blinking on a new line beneath that line. I waited a couple of minutes to see if something would happen but it just stayed like that. To get it to stop, I had to press ctrl+break.
What's up with this script?
Thanks in advance!
[Edit]
Forgot to mention -- when I ran the script without the request data (the file) it ran like a charm. Is it a problem with urllib2_file?
[edit 2]:
import MultipartPostHandler, urllib2, cookielib,sys
import win32clipboard as w
cookies = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies),MultipartPostHandler.MultipartPostHandler)
params = {"uploaded" : open("c:/cfoot.js") }
a=opener.open("http://www.aquate.us/upload.php", params)
text = a.read()
w.OpenClipboard()
w.EmptyClipboard()
w.SetClipboardText(text)
w.CloseClipboard()
That code works like a charm if you run it through the command line.
| [
"If you're using Python 2.5 or newer, urllib2_file is both unnecessary and unsupported, so check which version you're using (and perhaps upgrade).\nIf you're using Python 2.3 or 2.4 (the only versions supported by urllib2_file), try running the sample code and see if you have the same problem. If so, there is like... | [
2,
0,
0
] | [] | [] | [
"post",
"python",
"upload",
"urllib2"
] | stackoverflow_0000407468_post_python_upload_urllib2.txt |
Q:
Form doesn't accept additional parameters
I was trying to pass an additional parameter to my form, which is an object for a ForeignKey relation. But I don't know why the form raises __init__() got an unexpected keyword argument 'parent', when I'm pretty sure it is possible to send additional parameters to a form's __init__ (e.g. here: Simple form not validating). Am I wrong?
def add_video(request):
parent = ParentObject.objects.all()[0]
if request.method == 'POST':
form = VideoForm(data=request.POST, parent=parent)
if form.is_valid():
form.save()
next = reverse('manage_playforward',)
return HttpResponseRedirect(next)
else:
form = VideoForm()
class VideoForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
try:
self.parent = kwargs.pop['parent']
logging.debug(self.parent)
except:
pass
super(VideoForm, self).__init__(*args, **kwargs)
A:
kwargs.pop['parent'] is throwing TypeError: 'builtin_function_or_method' object is unsubscriptable, because you're trying to do a key lookup on a function method ({}.pop). This error is then being swallowed by your exception handler.
For this to work do kwargs.pop('parent', None). In your case:
class VideoForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
self.parent = kwargs.pop('parent', None)
super(VideoForm, self).__init__(*args, **kwargs)
As a side note, 99% of time its best to only catch specific Exceptions in your except blocks. Doing so will help dodge bugs/confusion like this. Also, I would highly suggest adding unit tests for this custom construction (or just TDDing your other code, but that's a separate issue)
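A minimal sketch of the behavior the answer describes (the function name is illustrative, not from the original code):

```python
def init_like(**kwargs):
    # pop() is a method call; subscripting it (kwargs.pop['parent'])
    # raises TypeError, which the original bare except silently swallowed.
    parent = kwargs.pop("parent", None)
    return parent, kwargs

print(init_like(parent="p", data="x"))  # -> ('p', {'data': 'x'})
print(init_like(data="x"))              # -> (None, {'data': 'x'})
```

Because pop removes the key, the remaining kwargs can then be passed safely to super().__init__.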
| Form doesn't accept additional parameters | I was trying to pass an additional parameter to my form, which is an object for a ForeignKey relation. But I don't know why the form raises __init__() got an unexpected keyword argument 'parent', when I'm pretty sure it is possible to send additional parameters to a form's __init__ (e.g. here: Simple form not validating). Am I wrong?
def add_video(request):
parent = ParentObject.objects.all()[0]
if request.method == 'POST':
form = VideoForm(data=request.POST, parent=parent)
if form.is_valid():
form.save()
next = reverse('manage_playforward',)
return HttpResponseRedirect(next)
else:
form = VideoForm()
class VideoForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
try:
self.parent = kwargs.pop['parent']
logging.debug(self.parent)
except:
pass
super(VideoForm, self).__init__(*args, **kwargs)
| [
"kwargs.pop['parent'] is throwing TypeError: 'builtin_function_or_method' object is unsubscriptable, because you're trying to do a key lookup on a function method ({}.pop). This error is then being swallowed by your exception handler. \nFor this to work do kwargs.pop('parent', None). In your case:\nclass Video... | [
6
] | [] | [] | [
"django",
"django_forms",
"keyword_argument",
"python"
] | stackoverflow_0004019711_django_django_forms_keyword_argument_python.txt |
Q:
How to filter objects of an object_list of a generic view in Django
So I have a view that does a queryset and returns a simple list:
def cdr(request):
queryset = CdrView.objects.all()
return object_list(request,
queryset = queryset,
template_name = "reports/cdrview_list.html",
paginate_by = 200,
page = request.GET.get('page', 1)
)
Initially, just to know if it works, I printed all the objects in "object_list" line by line in my template and it's OK but butt ugly. The problem is that my database is constantly growing and is currently at over a million objects. Each object (it's like a phone call) has source and destination among other attributes such as direction (in or out). In my template I call it by doing something like:
{{ call.src }} {{ call.dst }}
Since I'm fairly new to Django, I have a question on how I can display a form at the top of my page where I can choose to see only calls whose direction is "in", or calls whose source starts with "xxxx". Basically, filters.
Do I do most of the filtering in my views?
Or is it in my templates?
Thank you!
A:
You filter in your views.py. Since it's a search, we'll use request.REQUEST instead of the normal request.POST.
from forms import SearchForm
def cdr(request, form_class=SearchForm):
queryset = CdrView.objects.all()
search_form = SearchForm(request.REQUEST)
if search_form.is_valid():
search_src = search_form.cleaned_data.get('search_src',None)
search_dest = search_form.cleaned_data.get('search_dest',None)
if search_src:
queryset = queryset.filter(src__icontains=search_src)
if search_dest:
queryset = queryset.filter(dest__icontains=search_dest)
return object_list(request,
queryset = queryset,
template_name = "reports/cdrview_list.html",
extra_context = {'search_form': search_form },
paginate_by = 200,
page = request.GET.get('page', 1)
)
Then, in forms.py:
from django import forms
class SearchForm(forms.Form):
search_src = forms.CharField(max_length=20, required=False)
search_dest = forms.CharField(max_length=20, required=False)
And then in your template:
<form method="get" action="">
<ul>{{ search_form.as_ul }}</ul>
<input type="submit" value="Search" />
</form>
A:
You should do all your business logic in the view; this is the basic idea of working with an MVC (MTV) framework.
Besides, if you want to use a form for filtering your data, you have no choice but to handle it in views.py.
| How to filter objects of an object_list of a generic view in Django | So I have a view that does a queryset and returns a simple list:
def cdr(request):
queryset = CdrView.objects.all()
return object_list(request,
queryset = queryset,
template_name = "reports/cdrview_list.html",
paginate_by = 200,
page = request.GET.get('page', 1)
)
Initially, just to know if it works, I printed all the objects in "object_list" line by line in my template and it's OK but butt ugly. The problem is that my database is constantly growing and is currently at over a million objects. Each object (it's like a phone call) has source and destination among other attributes such as direction (in or out). In my template I call it by doing something like:
{{ call.src }} {{ call.dst }}
Since I'm fairly new to Django, I have a question on how I can display a form at the top of my page where I can choose to see only calls whose direction is "in", or calls whose source starts with "xxxx". Basically, filters.
Do I do most of the filtering in my views?
Or is it in my templates?
Thank you!
| [
"You filter in your views.py. Since it's a search, we'll use request.REQUEST instead of the normal request.POST.\nfrom forms.py import SearchForm\n\ndef cdr(request, form_class=SearchForm):\n queryset = CdrView.objects.all()\n search_form = SearchForm(request.REQUEST)\n if search_form.is_valid():\n search_s... | [
3,
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0004019328_django_python.txt |
Q:
split a 256 bit hash into 32 bit prefix in python
In Python, how does one split a SHA256 hash into 32-bit prefixes? I'm working with Google's Safe Browsing API, which requires that I compare 32-bit prefixes between my own collection and the collection the API sends to me. I understand how to pull the list from Google, and how to form a collection of hashes from parsed URLs; however, I don't understand how to derive the first 32 bits of each hash.
And after obtaining the prefixes, would the best course of action be to place them in a dictionary, with the key/value pairs being the prefix/full hash, so that I can reference them later?
A:
32 bits is the first 4 bytes. So you can slice the byte array.
hash_obj.digest()[:4]
You can take that and use it as a dictionary key.
EDIT
I'm not sure if you need the hex representation, that would be.
hash_obj.hexdigest()[:8]
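To make the accepted idea concrete, here is a hedged sketch of building the prefix → full-hash dictionary the question asks about, using hashlib. The sample strings are placeholders; real Safe Browsing use requires Google's URL canonicalization rules, which are not shown here:

```python
import hashlib

def prefix_map(strings):
    # Map each SHA-256 digest's first 4 bytes (32 bits) to the full digest.
    mapping = {}
    for s in strings:
        digest = hashlib.sha256(s.encode("utf-8")).digest()
        mapping[digest[:4]] = digest
    return mapping

m = prefix_map(["example.com/", "example.org/path"])
for prefix, full in m.items():
    print(prefix.hex(), "->", full.hex())
```

A lookup then only needs the 4-byte slice of an incoming hash as the key, with the full 32-byte digest available for the final comparison.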
| split a 256 bit hash into 32 bit prefix in python | In Python, how does one split a SHA256 hash into 32-bit prefixes? I'm working with Google's Safe Browsing API, which requires that I compare 32-bit prefixes between my own collection and the collection the API sends to me. I understand how to pull the list from Google, and how to form a collection of hashes from parsed URLs; however, I don't understand how to derive the first 32 bits of each hash.
And after obtaining the prefixes, would the best course of action be to place them in a dictionary, with the key/value pairs being the prefix/full hash, so that I can reference them later?
| [
"32 bits is the first 4 bytes. So you can slice the byte array.\nhash_obj.digest()[:4]\n\nYou can take that and use it as a dictionary key.\nEDIT\nI'm not sure if you need the hex representation, that would be.\nhash_obj.hexdigest()[:8]\n\n"
] | [
3
] | [] | [] | [
"hash",
"prefix",
"python"
] | stackoverflow_0004019730_hash_prefix_python.txt |
Q:
problematic python function returns
I have a function similar to the following:
def getCost(list):
cost = 0
for item in list:
cost += item
return cost
and I call it as so:
cost = getCost([1, 2, 3, 4])
This is GREATLY simplified but it illustrates what is going on. No matter what I do, cost always ends up == 0. If I change the value of cost in the function to say 12, then 12 is returned. If I debug and look at the value of cost prior to the return, cost == 10
It looks like it is always returning the defined number for cost, and completely disregarding any modifications to it. Can anyone tell me what would cause this?
A:
This should solve all of your problems (if summing the list items in cost is indeed what you're trying to do):
def getCost(costlist):
return sum(costlist)
It accomplishes the exact same things and is guaranteed to work. It's also much more simple than using a loop and an accumulator.
| problematic python function returns | I have a function similar to the following:
def getCost(list):
cost = 0
for item in list:
cost += item
return cost
and I call it as so:
cost = getCost([1, 2, 3, 4])
This is GREATLY simplified but it illustrates what is going on. No matter what I do, cost always ends up == 0. If I change the value of cost in the function to say 12, then 12 is returned. If I debug and look at the value of cost prior to the return, cost == 10
It looks like it is always returning the defined number for cost, and completely disregarding any modifications to it. Can anyone tell me what would cause this?
| [
"This should solve all of your problems (if summing the list items in cost is indeed what you're trying to do:\ndef getCost(costlist):\n return sum(costlist)\n\nIt accomplishes the exact same things and is guaranteed to work. It's also much more simple than using a loop and an accumulator.\n"
] | [
2
] | [] | [] | [
"python"
] | stackoverflow_0004019527_python.txt |
Q:
Results being duplicated after 2nd request
I am playing around with django and python and have hit a bit of a roadblock here. I query my model and return the objects and then perform some simple operations on the results and return them to the view. After the 2nd request the childs for the forum category are duplicated and I have no idea why this happens.
ForumBuilder class which builds a list of categories and appends forums for that category
class ForumBuilder:
def childern(self, parent, forums):
for forum in forums:
if forum.parent is None or parent.id != forum.parent.id:
continue
parent.childs.append(forum)
def build(self, forums):
categories = []
for forum in forums:
if forum.parent is None:
categories.append(forum)
self.childern(forum, forums)
return categories
The index view
def index(request):
forums = Forum.objects.all().order_by('-order')
builder = ForumBuilder()
return render_to_response('forums/index.html', {'categories': builder.build(forums)})
A:
Let me guess... You have something like:
class Foo(object):
childs = []
When you should have something like:
class Foo(object):
def __init__(self):
self.childs = []
The difference is that in the first case, all your Foo instances will share the same childs object (a class attribute), while in the latter each instance will have its own childs (an instance attribute).
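A minimal runnable demonstration of that difference (the class names below are made-up, not from the question's code):

```python
class Shared(object):
    childs = []  # class attribute: one list shared by every instance

class PerInstance(object):
    def __init__(self):
        self.childs = []  # instance attribute: a fresh list per instance

a, b = Shared(), Shared()
a.childs.append(1)
print(b.childs)   # the append through `a` is visible through `b`: [1]

c, d = PerInstance(), PerInstance()
c.childs.append(1)
print(d.childs)   # `d` keeps its own, still-empty list: []
```

This is also why the duplication shows up on the second request: in a long-running server process the module and its class attributes survive between requests, so a shared mutable class attribute keeps accumulating.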
| Results being duplicated after 2nd request | I am playing around with django and python and have hit a bit of a roadblock here. I query my model and return the objects and then perform some simple operations on the results and return them to the view. After the 2nd request the childs for the forum category are duplicated and I have no idea why this happens.
ForumBuilder class which builds a list of categories and appends forums for that category
class ForumBuilder:
def childern(self, parent, forums):
for forum in forums:
if forum.parent is None or parent.id != forum.parent.id:
continue
parent.childs.append(forum)
def build(self, forums):
categories = []
for forum in forums:
if forum.parent is None:
categories.append(forum)
self.childern(forum, forums)
return categories
The index view
def index(request):
forums = Forum.objects.all().order_by('-order')
builder = ForumBuilder()
return render_to_response('forums/index.html', {'categories': builder.build(forums)})
| [
"Let me guess... You have something like:\nclass Foo(object):\n childs = []\n\nWhen you should have something like:\nclass Foo(object):\n def __init__(self):\n self.childs = []\n\nThe difference is that in the first case, all your Foo instances will share the same childs object (class attribute) and in t... | [
3
] | [] | [] | [
"django",
"python"
] | stackoverflow_0004020060_django_python.txt |
Q:
The simplest video streaming?
I have a camera that is taking pictures one by one (about 10 pictures per second) and sending them to PC. I need to show this incoming sequence of images as a live video in PC.
Is it enough just to use some Python GUI framework, create a control that will hold a single image and just change the image in the control very fast?
Or would that be just lame? Should I use some sort of video streaming library? If yes, what do you recommend?
A:
Or would that be just lame?
No. It wouldn't work at all.
There's a trick to getting video to work. Apple's QuickTime implements that trick. So does a bunch of Microsoft product. Plus some open source video playback tools.
There are several closely-related tricks, all of which are a huge pain in the neck.
Compression. Full-sized video is Huge. Do the math: 640x480x24-bit color at 30 frames per second. It adds up quickly. Without compression, you can't read it in fast enough.
Buffering and Timing. Sometimes the data rates and frame rates don't align well. You need a buffer of ready-to-display frames and you need a deadly accurate clock to get them to display at exactly the right intervals.
Making a sequence of JPEG images into a movie is what iPhoto and iMovie are for.
Usually, what we do is create the video file from the image and play the video file through a standard video player. Making a QuickTime movie or Flash movie from images isn't that hard. There are a lot of tools to help make movies from images. Almost any photo management solution can create a slide show and save it as a movie in some standard format.
Indeed, I think that Graphic Converter can do this.
| The simplest video streaming? | I have a camera that is taking pictures one by one (about 10 pictures per second) and sending them to PC. I need to show this incoming sequence of images as a live video in PC.
Is it enough just to use some Python GUI framework, create a control that will hold a single image and just change the image in the control very fast?
Or would that be just lame? Should I use some sort of video streaming library? If yes, what do you recommend?
| [
"\nOr would that be just lame?\n\nNo. It wouldn't work at all.\nThere's a trick to getting video to work. Apple's QuickTime implements that trick. So does a bunch of Microsoft product. Plus some open source video playback tools.\nThere are several closely-related tricks, all of which are a huge pain in the neck... | [
2
] | [] | [] | [
"python",
"video",
"video_streaming"
] | stackoverflow_0004019571_python_video_video_streaming.txt |
Q:
What is a Class in python, what does it do, why is it needed?
Possible Duplicate:
Do I correctly understand what a class is?
Before you rant and rage and scream at me, understand that I tried searching everywhere, from google to this very site on what exactly a class in python is. I have found definitions of sorts, but have never been able to understand them fully. So here's the question. What is a class in python, what does it do, and why is it used?
A:
Classes (in any language) basically help in implementing OOP. By following these principles, it becomes easier to maintain code & make changes, etc.
I guess you should start off by reading what OOP is & then come back to Python.
| What is a Class in python, what does it do, why is it needed? |
Possible Duplicate:
Do I correctly understand what a class is?
Before you rant and rage and scream at me, understand that I tried searching everywhere, from google to this very site on what exactly a class in python is. I have found definitions of sorts, but have never been able to understand them fully. So here's the question. What is a class in python, what does it do, and why is it used?
| [
"Classes (in any language) basically help in implementing OOP. By following these principles, it becomes easier to maintain code & make changes etc.\nI guess you should start off by reading what OOP is & then come back to Python.\n"
] | [
0
] | [] | [] | [
"class",
"python"
] | stackoverflow_0004020290_class_python.txt |
Q:
Check maxlen of deque in python 2.6
I have had to change from python 2.7 to 2.6.
I've been using a deque with the maxlen property and have been checking what the maxlen is. Apparently you can use maxlen in python 2.6, but in 2.6 deques do not have a maxlen attribute.
What is the cleanest way to check what the maxlen of a deque is in python 2.6?
In 2.7:
from collections import deque
d = deque(maxlen = 10)
print d.maxlen
In 2.6 the deque can be used and the maxlen works properly, but maxlen is not an attribute that can be referred to.
Cheers
A:
I would create my own deque by inheriting from collections.deque. It is not difficult. Namely, here it is:
import collections
class deque(collections.deque):
def __init__(self, iterable=(), maxlen=None):
super(deque, self).__init__(iterable, maxlen)
self._maxlen = maxlen
@property
def maxlen(self):
return self._maxlen
and this is the new deque at work:
>>> d = deque()
>>> print d
deque([])
>>> print d.maxlen
None
>>> d = deque(maxlen=3)
>>> print d
deque([], maxlen=3)
>>> print d.maxlen
3
>>> d = deque(range(5))
>>> print d
deque([0, 1, 2, 3, 4])
>>> print d.maxlen
None
>>> d = deque(range(5), maxlen=3)
>>> print d
deque([2, 3, 4], maxlen=3)
>>> print d.maxlen
3
A:
The maxlen attribute of deque was first exposed in Python 2.7. The maxlen argument works in Python 2.6, but the attribute just doesn't exist there.
That said, there are a few things you can do:
Create a new class that inherits all the methods and attributes from deque but also implements a maxlen attribute.
Adapt your code so that maxlen isn't necessary
A:
I would create my own queue class that inherits from deque. Something like:
class Deque(deque):
def __init__(self,*args,**kwargs):
deque.__init__(self, *args, **kwargs)
self.maxlen = kwargs.get('maxlen',None)
>>> d = Deque(maxlen=10)
>>> d.maxlen
10
A:
Well, if you don't have the maxlen attribute, you can just steal it from the representation:
>>> import re
>>> d = deque(maxlen=42)
>>> d.__repr__()
'deque([], maxlen=42)'
>>> int(re.sub("\)$","",re.sub(".*=","",d.__repr__())))
42
Yes, I know it's horrible. I would prefer to upgrade to 2.7 myself but sometimes we're not given the power we desire, and we have to resort to kludges like this.
| Check maxlen of deque in python 2.6 |
I have had to change from python 2.7 to 2.6.
I've been using a deque with the maxlen property and have been checking what the maxlen is. Apparently you can use maxlen in python 2.6, but in 2.6 deques do not have a maxlen attribute.
What is the cleanest way to check what the maxlen of a deque is in python 2.6?
In 2.7:
from collections import deque
d = deque(maxlen = 10)
print d.maxlen
In 2.6 the deque can be used and the maxlen works properly, but maxlen is not an attribute that can be referred to.
Cheers
| [
"I would create my own deque by inheriting from collections.deque. It is not difficult. Namely, here it is:\nimport collections\n\nclass deque(collections.deque):\n def __init__(self, iterable=(), maxlen=None):\n super(deque, self).__init__(iterable, maxlen)\n self._maxlen = maxlen\n @property\n... | [
6,
2,
2,
1
] | [] | [] | [
"deque",
"python",
"python_2.6"
] | stackoverflow_0004020264_deque_python_python_2.6.txt |
Q:
Speed up NumPy loop
I'm running a model in Python and I'm trying to speed up the execution time. Through profiling the code I've found that a huge amount of the total processing time is spent in the cell_in_shadow function below. I'm wondering if there is any way to speed it up?
The aim of the function is to provide a boolean response stating whether the specified cell in the NumPy array is shadowed by another cell (in the x direction only). It does this by stepping backwards along the row checking each cell against the height it must be to make the given cell in shadow. The values in shadow_map are calculated by another function not shown here - for this example, take shadow_map to be an array with values similar to:
[0] = 0 (not used)
[1] = 3
[2] = 7
[3] = 18
The add_x function is used to ensure that the array indices loop around (using clock-face arithmetic), as the grid has periodic boundaries (anything going off one side will re-appear on the other side).
def cell_in_shadow(x, y):
"""Returns True if the specified cell is in shadow, False if not."""
# Get the global variables we need
global grid
global shadow_map
global x_len
# Record the original length and move to the left
orig_x = x
x = add_x(x, -1)
while x != orig_x:
# Gets the height that's needed from the shadow_map (the array index is the distance using clock-face arithmetic)
height_needed = shadow_map[( (x - orig_x) % x_len)]
if grid[y, x] - grid[y, orig_x] >= height_needed:
return True
# Go to the cell to the left
x = add_x(x, -1)
def add_x(a, b):
"""Adds the two numbers using clockface arithmetic with the x_len"""
global x_len
return (a + b) % x_len
A:
I do agree with Sancho that Cython will probably be the way to go, but here are a couple of small speed-ups:
A. Store grid[y, orig_x] in some variable before you start the while loop and use that variable instead. This will save a bunch of look-up calls to the grid array.
B. Since you are basically just starting at x_len - 1 in shadow_map and working down to 1, you can avoid using the modulus so much. Basically, change:
while x != orig_x:
height_needed = shadow_map[( (x - orig_x) % x_len)]
to
for i in xrange(x_len-1,0,-1):
height_needed = shadow_map[i]
or just get rid of the height_needed variable all together with:
if grid[y, x] - grid[y, orig_x] >= shadow_map[i]:
These are small changes, but they might help a little bit.
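Putting suggestions A and B together, the loop might look like the sketch below. The grid, shadow_map and x_len values here are made-up sample data standing in for the question's globals, and range replaces xrange so it runs on either Python version:

```python
import numpy as np

# hypothetical sample data standing in for the question's globals
x_len = 4
grid = np.array([[93, 77, 96, 85],
                 [50, 69, 54, 16]])
shadow_map = np.array([0, 3, 7, 18])

def cell_in_shadow(x, y):
    """Same logic as the original: grid[y, x] is hoisted out of the loop
    (suggestion A) and shadow_map is indexed by a simple countdown instead
    of recomputing the modulus distance on every pass (suggestion B)."""
    base = grid[y, x]                   # looked up once, not per iteration
    col = (x - 1) % x_len               # start one cell to the left
    for i in range(x_len - 1, 0, -1):   # clock-face distances x_len-1 .. 1
        if grid[y, col] - base >= shadow_map[i]:
            return True
        col = (col - 1) % x_len
    return False
```

With the sample data, cell (x=3, y=1) is shadowed (54 - 16 = 38 ≥ 18 at distance 3) while cell (0, 0) is not.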
Also, if you plan on going the Cython route, I would consider having your function do this process for the whole grid, or at least a row at a time. That will save a lot of the function call overhead. However, you might not be able to really do this depending on how you are using the results.
Lastly, have you tried using Psyco? It takes less work than Cython though it probably won't give you quite as big of a speed boost. I would certainly try it first.
A:
If you're not limited to strict Python, I'd suggest using Cython for this. It can allow static typing of the indices and efficient, direct access to a numpy array's underlying data buffer at c speed.
Check out a short tutorial/example at http://wiki.cython.org/tutorials/numpy
In that example, which is doing operations very similar to what you're doing (incrementing indices, accessing individual elements of numpy arrays), adding type information to the index variables cut the time in half compared to the original. Adding efficient indexing into the numpy arrays by giving them type information cut the time to about 1% of the original.
Most Python code is already valid Cython, so you can just use what you have and add annotations and type information where needed to give you some speed-ups.
I suspect you'd get the most out of adding type information your indices x, y, orig_x and the numpy arrays.
A:
The following guide compares several different approaches to optimising numerical code in python:
Scipy PerformancePython
It is a bit out of date, but still helpful. Note that it refers to pyrex, which has since been forked to create the Cython project, as mentioned by Sancho.
Personally I prefer f2py, because I think that fortran 90 has many of the nice features of numpy (e.g. adding two arrays together with one operation), but has the full speed of compiled code. On the other hand if you don't know fortran then this may not be the way to go.
I briefly experimented with cython, and the trouble I found was that by default cython generates code which can handle arbitrary python types, but which is still very slow. You then have to spend time adding all the necessary cython declarations to get it to be more specific and fast, whereas if you go with C or fortran then you will tend to get fast code straight out of the box. Again this is biased by me already being familiar with these languages, whereas Cython may be more appropriate if Python is the only language you know.
| Speed up NumPy loop | I'm running a model in Python and I'm trying to speed up the execution time. Through profiling the code I've found that a huge amount of the total processing time is spent in the cell_in_shadow function below. I'm wondering if there is any way to speed it up?
The aim of the function is to provide a boolean response stating whether the specified cell in the NumPy array is shadowed by another cell (in the x direction only). It does this by stepping backwards along the row checking each cell against the height it must be to make the given cell in shadow. The values in shadow_map are calculated by another function not shown here - for this example, take shadow_map to be an array with values similar to:
[0] = 0 (not used)
[1] = 3
[2] = 7
[3] = 18
The add_x function is used to ensure that the array indices loop around (using clock-face arithmetic), as the grid has periodic boundaries (anything going off one side will re-appear on the other side).
def cell_in_shadow(x, y):
"""Returns True if the specified cell is in shadow, False if not."""
# Get the global variables we need
global grid
global shadow_map
global x_len
# Record the original length and move to the left
orig_x = x
x = add_x(x, -1)
while x != orig_x:
# Gets the height that's needed from the shadow_map (the array index is the distance using clock-face arithmetic)
height_needed = shadow_map[( (x - orig_x) % x_len)]
if grid[y, x] - grid[y, orig_x] >= height_needed:
return True
# Go to the cell to the left
x = add_x(x, -1)
def add_x(a, b):
"""Adds the two numbers using clockface arithmetic with the x_len"""
global x_len
return (a + b) % x_len
| [
"I do agree with Sancho that Cython will probably be the way to go, but here are a couple of small speed-ups:\nA. Store grid[y, orig_x] in some variable before you start the while loop and use that variable instead. This will save a bunch of look-up calls to the grid array.\nB. Since you are basically just starting... | [
3,
2,
1
] | [] | [] | [
"numpy",
"performance",
"python"
] | stackoverflow_0004000674_numpy_performance_python.txt |
Q:
python, functions combination via names and chain them to invoke in-a-go?
All,
I have some confusion below and may need your help
suppose I have a row based variable length data matrix
93 77 96 85
50 69 54 16
39 91 59 38
64 30 18 50
43 9 74 94
44 87 95 89
...
I want to generate result data from the above source using different generating algorithms combined with different range-selection algorithms; take the code below as an example
lsrc = [# above data]
def rangesel_001(lsrc):
    tmp = []
    for i in range(len(lsrc)):
        if i % 2 == 0:
            tmp.append(lsrc[i])
    return tmp

def algo_001(lsrc):
    tmp = []
    for i in range(len(lsrc)):
        tmp.append([x+1 for x in lsrc[i]])
    return tmp
So the data I want is:
dscl = algo_001(rangesel_001(lsrc))
Here comes is my questions:
1. suppose I have an extendable "rangesel" set and the "algo" set is also extendable, looking like this
rangesel_001() algo_001()
rangesel_002() algo_002()
rangesel_003() algo_003()
… ...
I want to mix them, then invoke them in one go to get all the results I want
rangesel_001 + algo_001
rangesel_001 + algo_002
rangesel_001 + algo_003
rangesel_002 + algo_001
rangesel_002 + algo_002
rangesel_002 + algo_003
...
is there an easy way to manage those function names and then combine them for execution?
2. you may have noticed that the differing parts in the "rangesel" and "algo" algorithms are the snippets here:
if i % 2 == 0:
and
[x+1 for x in lsrc[i]]
Is there a way to extract those common parts out of the function definitions so that I can just give some kind of list:
if i % 2 == 0 rangesel_001
if i % 3 == 0 rangesel_002
if i % 4 == 0 rangesel_003
and
[x+1 for x in lsrc[i]] algo_001
[x/2 for x in lsrc[i]] algo_002
then I can get all the "rangesel" functions and "algo" sets?
3. maybe later I need this:
dscl = algo_001(rangesel_001( \
algo_002(rangesel_002(lsrc)) \
))
so, is there a painless way I can chain those "rangesel_002 + algo_001" combinations?
for example: suppose I already have the full combinations
rangesel_001 + algo_001
rangesel_001 + algo_002
rangesel_001 + algo_003
rangesel_002 + algo_001
rangesel_002 + algo_002
rangesel_002 + algo_003
now I want to pick a random 3, chain them, and invoke the chain to get the result list?
dscl = random_pick(3, combination_list, lsrc)
Thanks!
A:
For your first question, you can define a function composition operation like this:
def compose(f, g):
return lambda *x: f(g(*x))
Then, you can:
ra = compose(algo_001, rangesel_001)
ra(lsrc)
If you make lists of functions like this:
rangesels = [rangesel_001, rangesel_002, rangesel_003]
then you can iterate:
for r in rangesels:
    ra = compose(algo_001, r)
    ra(lsrc)
Expansion of this idea to the algo_xxx functions is also possible.
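Building on compose, the chaining in question 3 (pick a random set of combinations and apply them in one go) can be sketched with functools.reduce. The rangesels and algos below are tiny made-up stand-ins, not the question's real functions:

```python
import functools
import random

def compose(f, g):
    return lambda *x: f(g(*x))

# made-up stand-ins for the rangesel_* and algo_* functions
rangesels = [lambda l: l[::2], lambda l: l[::3]]
algos = [lambda l: [x + 1 for x in l], lambda l: [x * 2 for x in l]]

# every rangesel + algo pairing (question 1): algo applied after rangesel
combinations = [compose(a, r) for r in rangesels for a in algos]

def random_pick(n, combos, lsrc):
    """Chain n randomly chosen combinations into one function and call it."""
    chain = functools.reduce(compose, random.sample(combos, n))
    return chain(lsrc)

print(combinations[0]([1, 2, 3, 4]))  # [1, 3] selected, then +1 -> [2, 4]
```

Since every composed function maps a list to a list, any chain of them built by reduce is itself a list-to-list function that can be applied to lsrc in one call.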
| python, functions combination via names and chain them to invoke in-a-go? | All,
I have some confusion below and may need your help
suppose I have a row based variable length data matrix
93 77 96 85
50 69 54 16
39 91 59 38
64 30 18 50
43 9 74 94
44 87 95 89
...
I want to generate result data from the above source using different generating algorithms combined with different range-selection algorithms; take the code below as an example
lsrc = [# above data]
def rangesel_001(lsrc):
    tmp = []
    for i in range(len(lsrc)):
        if i % 2 == 0:
            tmp.append(lsrc[i])
    return tmp

def algo_001(lsrc):
    tmp = []
    for i in range(len(lsrc)):
        tmp.append([x+1 for x in lsrc[i]])
    return tmp
So the data I want is:
dscl = algo_001(rangesel_001(lsrc))
Here comes is my questions:
1. suppose I have an extendable "rangesel" set and the "algo" set is also extendable, looking like this
rangesel_001() algo_001()
rangesel_002() algo_002()
rangesel_003() algo_003()
… ...
I want to mix them, then invoke them in one go to get all the results I want
rangesel_001 + algo_001
rangesel_001 + algo_002
rangesel_001 + algo_003
rangesel_002 + algo_001
rangesel_002 + algo_002
rangesel_002 + algo_003
...
is there an easy way to manage those function names and then combine them for execution?
2. you may have noticed that the differing parts in the "rangesel" and "algo" algorithms are the snippets here:
if i % 2 == 0:
and
[x+1 for x in lsrc[i]]
Is there a way to extract those common parts out of the function definitions so that I can just give some kind of list:
if i % 2 == 0 rangesel_001
if i % 3 == 0 rangesel_002
if i % 4 == 0 rangesel_003
and
[x+1 for x in lsrc[i]] algo_001
[x/2 for x in lsrc[i]] algo_002
then I can get all the "rangesel" functions and "algo" sets?
3. maybe later I need this:
dscl = algo_001(rangesel_001( \
algo_002(rangesel_002(lsrc)) \
))
so, is there a painless way I can chain those "rangesel_002 + algo_001" combinations?
for example: suppose I already have the full combinations
rangesel_001 + algo_001
rangesel_001 + algo_002
rangesel_001 + algo_003
rangesel_002 + algo_001
rangesel_002 + algo_002
rangesel_002 + algo_003
now I want to pick a random 3, chain them, and invoke the chain to get the result list?
dscl = random_pick(3, combination_list, lsrc)
Thanks!
| [
"For your first question, you can define a function composition operation like this:\ndef compose(f, g):\n return lambda *x: f(g(*x))\n\nThen, you can:\nra = compose(rangeset_001, algo_001)\nra(lsrc)\n\nIf you make lists of functions like this:\nrangesets = [rangeset_001, rangeset_002, rangeset_003]\n\nthen you ... | [
1
] | [] | [] | [
"combinations",
"function",
"python"
] | stackoverflow_0004020671_combinations_function_python.txt |
Q:
Problems creating tables one to one relation
I get errors when I try to create tables with one-to-one relations. Screen contains crm and crm contains more classes. The relation between screen and crm is one to one, so I want to use the screen id as the primary key in crm. And the relation between crm and some classes is one to one (I just added one class as an example), so children of crm must contain a screen id as a primary key. When I try to make the last relation, that's when it breaks. I tried to use both the crm id and the screen id.
It didn't work. I get errors such as "UnmappedClassError" when I try to use crm id in ContactInformation, and "Could not determine join condition between parent/child tables on relationship CRM.contactInformation. Specify a 'primaryjoin' expression. If this is a many-to-many relationship, 'secondaryjoin' is needed as well" when I try to use screen id in ContactInformation.
These are my classes:
class Screen(rdb.Model):
"""Set up screens table in the database"""
rdb.metadata(metadata)
rdb.tablename("screens")
id = Column("id", Integer, primary_key=True)
title = Column("title", String(100))
....
crm = relationship("CRM", uselist=False, backref="screens")
class CRM(rdb.Model):
"""Set up crm table in the database"""
rdb.metadata(metadata)
rdb.tablename("crms")
id = Column("id", Integer, ForeignKey("screens.id"), primary_key=True)
contactInformation = relationship("crm_contact_informations", uselist=False, backref="crms")
....
class CRMContactInformation(rdb.Model):
"""Set up crm contact information table in the database"""
rdb.metadata(metadata)
rdb.tablename("crm_contact_informations")
id = Column("id", Integer, ForeignKey("screens.id"), primary_key=True)
owner = Column("owner", String(50))
...
A:
First, why 3 tables all with 1:1 relationships? A 2NF-compliant single table should work considerably better.
But if you still want 3 tables, try relating crm_contact_informations to crms, not to screens:
id = Column("id", Integer, ForeignKey("crms.id"), primary_key=True)
| Problems creating tables one to one relation | I get errors when I try to create tables one to one relation. Screen contains crm and crm contains more classes. The relation is one to one between crm, so I want to use the screen id as primary key in crm. And the relation is one to one between crm and some classes, I just added one class as example, so children of crm must contain a screen id as a primary key. When I try to make the last relation, it's when it breaks. I tried to use both, crm id and screen id.
It didn't work. I get errors such as "UnmappedClassError" when I try to use crm id in ContactInformation, and "Could not determine join condition between parent/child tables on relationship CRM.contactInformation. Specify a 'primaryjoin' expression. If this is a many-to-many relationship, 'secondaryjoin' is needed as well" when I try to use screen id in ContactInformation.
These are my classes:
class Screen(rdb.Model):
"""Set up screens table in the database"""
rdb.metadata(metadata)
rdb.tablename("screens")
id = Column("id", Integer, primary_key=True)
title = Column("title", String(100))
....
crm = relationship("CRM", uselist=False, backref="screens")
class CRM(rdb.Model):
"""Set up crm table in the database"""
rdb.metadata(metadata)
rdb.tablename("crms")
id = Column("id", Integer, ForeignKey("screens.id"), primary_key=True)
contactInformation = relationship("crm_contact_informations", uselist=False, backref="crms")
....
class CRMContactInformation(rdb.Model):
"""Set up crm contact information table in the database"""
rdb.metadata(metadata)
rdb.tablename("crm_contact_informations")
id = Column("id", Integer, ForeignKey("screens.id"), primary_key=True)
owner = Column("owner", String(50))
...
| [
"First, why 3 tables all with 1:1 relationships? A 2NF-compliant single table should work considerably better.\nBut if you still want 3 tables, try relating crm_contact_informations to crms, not to screens:\nid = Column(\"id\", Integer, ForeignKey(\"crms.id\"), primary_key=True)\n\n"
] | [
0
] | [] | [] | [
"python",
"sqlalchemy"
] | stackoverflow_0004017303_python_sqlalchemy.txt |
Q:
python middleware to capture errors?
is there a Python middleware that captures errors from a web app and emails them?
which is the easiest one to use.
i am deploying app using nginx proxying to multiple app servers of gunicorn+web.py framework.
right now any error is printed out in each app server, which is not very easy to manage.
what is the best way to handle this?
A:
Check out Paste. Code to email an exception would look something like:
from paste.exceptions.errormiddleware import ErrorMiddleware
app = ErrorMiddleware(app,
global_conf, debug=False,
error_email='foo@example.com',
smtp_server='localhost')
| python middleware to capture errors? | is there a Python middleware that captures errors from a web app and emails them?
which is the easiest one to use.
i am deploying app using nginx proxying to multiple app servers of gunicorn+web.py framework.
right now any error is printed out in each app server, which is not very easy to manage.
what is the best way to handle this?
| [
"Check out Paste. Code to email an exception would look something like:\nfrom paste.exceptions.errormiddleware import ErrorMiddleware\napp = ErrorMiddleware(app,\n global_conf, debug=False,\n error_email='foo@example.com',\n smtp_server='localhost')\n\n... | [
2
] | [] | [] | [
"email",
"error_handling",
"middleware",
"python"
] | stackoverflow_0004016749_email_error_handling_middleware_python.txt |
Q:
Python Object Storage Memory/Dict/DB/Other Options
I'm currently writing a python application that will take a directory of text files and parse them into custom python objects based on the attributes specified in the text file. As part of my application, I compare the current loaded object data set to a previous dataset (same format) and scan it for possible duplicates, conflicts, updates, etc. However since there can be ~10,000+ objects at a time, I'm not really sure how to approach this.
I'm currently storing the previous data set in a DB as it's being used by another web app. As of now, my python application loads the 'proposed' dataset into memory (creating the rule objects), and then I store those objects in a dictionary (problem #1). Then when it comes time to compare, I use a combination of SQL queries and failed inserts to determine new/existing and existing but updated entries (problem #2).
This is hackish and terrible at best. I'm looking for some advice on restructuring the application and handling the object storage/comparisons.
A:
You can fake what Git does and load the entire set as basically a single file and parse from there. The biggest issue is that dictionaries are not ordered so your comparisons will not always be 1:1. A list of tuples will give you 1:1 comparisons. If a lot has changed this will be difficult.
Here is a basic flow for how you can do this.
Start with both tuple lists at index 0.
Compare a hash of each tuple: hashlib.sha1(str(tuple1)).hexdigest() == hashlib.sha1(str(tuple2)).hexdigest()
If they are equal, record the matching indexes and add 1 to each index and compare again
If they are unequal, search each side for a match and record the matching indexes
If there are no matches, you can assume there is an insert/update/delete happening and come back to it later
You can map your matching items as reference points to do further investigation into the ones that did not match. This technique can be applied at each level you drill down. You will end up with a map of what is different down to the individual values.
The nice thing is each of the slices that you create can be compared in parallel since they will not correspond to each other... unless you are moving things from one file to another.
Then again, it may be easier to use a diff library to compare the two data sets. Might as well not reinvent the wheel; even if it might be a really shiny wheel.
Check out http://docs.python.org/library/difflib.html
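For the difflib route, SequenceMatcher can classify two ordered lists of tuples into equal/replace/delete/insert runs directly. The rule tuples below are invented sample data, not the question's real format:

```python
import difflib

# invented sample rule tuples; in practice these come from the parsed files
old = [("rule1", "a"), ("rule2", "b"), ("rule3", "c")]
new = [("rule1", "a"), ("rule2", "B"), ("rule4", "d")]

sm = difflib.SequenceMatcher(None, old, new)
for tag, i1, i2, j1, j2 in sm.get_opcodes():
    if tag == "equal":
        continue                                   # unchanged entries
    elif tag == "replace":
        print("updated:", old[i1:i2], "->", new[j1:j2])
    elif tag == "delete":
        print("removed:", old[i1:i2])
    elif tag == "insert":
        print("added:", new[j1:j2])
```

SequenceMatcher only needs the sequence elements to be hashable, which tuples are, so no manual hashing step is required for a comparison like this.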
| Python Object Storage Memory/Dict/DB/Other Options | I'm currently writing a python application that will take a directory of text files and parse them into custom python objects based on the attributes specified in the text file. As part of my application, I compare the current loaded object data set to a previous dataset (same format) and scan it for possible duplicates, conflicts, updates, etc. However since there can be ~10,000+ objects at a time, I'm not really sure how to approach this.
I'm currently storing the previous data set in a DB as it's being used by another web app. As of now, my python application loads the 'proposed' dataset into memory (creating the rule objects), and then I store those objects in a dictionary (problem #1). Then when it comes time to compare, I use a combination of SQL queries and failed inserts to determine new/existing and existing but updated entries (problem #2).
This is hackish and terrible at best. I'm looking for some advice on restructuring the application and handling the object storage/comparisons.
| [
"You can fake what Git does and load the entire set as basically a single file and parse from there. The biggest issue is that dictionaries are not ordered so your comparisons will not always be 1:1. A list of tuples will give you 1:1 comparisons. If a lot has changed this will be difficult.\nHere is a basic flo... | [
1
] | [] | [] | [
"object",
"python"
] | stackoverflow_0004019557_object_python.txt |
Q:
Finding the index of a string in a tuple
Tup = ('string1','string2','string3')
My program returned string2; how do I get its index within Tup?
A:
>>> Tup.index('string2')
1
Note that the index() method has only just been added for tuples in versions 2.6 and better.
| Finding the index of a string in a tuple | Tup = ('string1','string2','string3')
My program returned string2; how do I get its index within Tup?
| [
">>> tup.index('string2')\n1\n\nNote that the index() method has only just been added for tuples in versions 2.6 and better.\n"
] | [
49
] | [] | [] | [
"python",
"tuples"
] | stackoverflow_0004021154_python_tuples.txt |
Q:
How do I use a variable as table name in Django?
I'd like to do the following:
for table in table_list:
'table'.samefield = newvalue
A:
If you know the name of the application that it's in, then you can do
model = getattr(application_name.models, 'table')
model.somefield = newvalue
or just
getattr(application_name.models, 'table').somefield = newvalue
if you only want to access it once (although that doesn't really make any sense).
| How do I use a variable as table name in Django? | I'd like to do the following:
for table in table_list:
'table'.samefield = newvalue
| [
"If you know the name of the application that it's in, then you can do\nmodel = getattr(application_name.models, 'table')\nmodel.somefield = newvalue\n\nor just\ngetattr(application_name.models, 'table').somefield = newvalue\n\nif you only want to access it once (although that doesn't really make any sense).\n"
] | [
3
] | [] | [] | [
"django_models",
"python"
] | stackoverflow_0004021211_django_models_python.txt |
Q:
Using pip packages in python scripts
I've installed this script via pip, but how do I reference these files as part of my script? Do I need to use import or some other magic?
Thanks
A:
import isn't magic, it's how things get done.
| Using pip packages in python scripts | I've installed this script via pip, but how do I reference these files as part of my script? Do I need to use import or some other magic?
Thanks
| [
"import isn't magic, it's how things get done.\n"
] | [
2
] | [] | [] | [
"package",
"python"
] | stackoverflow_0004021333_package_python.txt |
Q:
How do you decode a binary encoded mail message in Python?
I'm writing a Google App engine app that processes incoming mail, and here's the code I'm currently using to process mail messages:
for content_type, body in email_bodies:
#8bit bug in mail messages - see bug report here
#http://code.google.com/p/googleappengine/issues/detail?id=2383
if body.encoding == '8bit':
body.encoding = '7bit'
#test for html content
if content_type == "text/html":
#parse html result
if content_type == "text/plain":
decoded_msg_body = body.decode()
However I just got a message that was using the binary encoding scheme, and when my program tried to process the message using body.decode(), I received a UnknownEncodingError. How should this program parse the binary content type? Also, how can I imitate this message type on my local version of GAE so I can debug and test it out?
I appreciate your help,
Kevin
A:
Rather than reinventing the wheel, you should try Python's built in email parser.
http://docs.python.org/library/email.parser.html
It's designed to handle the lifting involved in getting all sorts of different email formats into a good Python object. Use it to do the parsing, and you'll get nicely predictable objects to work with.
The email module doesn't do message sending and receiving, it just helps put them together and parse them out.
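For instance, the stdlib parser turns a raw message string into a Message object you can query for content type and payload (the address below is made up for illustration):

```python
from email.parser import Parser

# a minimal raw message, headers separated from the body by a blank line
raw = (
    "From: someone@example.com\n"
    "Subject: hello\n"
    "Content-Type: text/plain\n"
    "\n"
    "body text here"
)

msg = Parser().parsestr(raw)
print(msg.get_content_type())  # text/plain
print(msg.get_payload())       # body text here
```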
| How do you decode a binary encoded mail message in Python? | I'm writing a Google App engine app that processes incoming mail, and here's the code I'm currently using to process mail messages:
for content_type, body in email_bodies:
#8bit bug in mail messages - see bug report here
#http://code.google.com/p/googleappengine/issues/detail?id=2383
if body.encoding == '8bit':
body.encoding = '7bit'
#test for html content
if content_type == "text/html":
#parse html result
if content_type == "text/plain":
decoded_msg_body = body.decode()
However I just got a message that was using the binary encoding scheme, and when my program tried to process the message using body.decode(), I received a UnknownEncodingError. How should this program parse the binary content type? Also, how can I imitate this message type on my local version of GAE so I can debug and test it out?
I appreciate your help,
Kevin
| [
"Rather than reinventing the wheel, you should try Python's built in email parser.\nhttp://docs.python.org/library/email.parser.html\nIt's designed to handle the lifting involved in getting all sorts of different email formats into a good Python object. Use it to do the parsing, and you'll get nicely predictable ob... | [
1
] | [] | [] | [
"binary",
"email",
"encoding",
"google_app_engine",
"python"
] | stackoverflow_0004021392_binary_email_encoding_google_app_engine_python.txt |
Q:
Python: How to print text entered by the user?
My code is as follows:
#!/usr/bin/python
#Filename:2_7.py
text=raw_input("Enter string:")
for ?? in range(??)
print
For example, if the user enters test text, I want test text to be printed on the screen by this code.
What should I write instead of ?? to achieve this purpose?
A:
Maybe you want something like this?
var = raw_input("Enter something: ")
print "you entered ", var
Or in Python 3.x
var = input("Enter something: ")
print ("you entered ", var)
A:
Do you want to split the string into separate words and print each one?
e.g.
for word in text.split():
print word
in action:
Enter string: here are some words
here
are
some
words
A:
Well, for a simplistic case, you could try:
for word in text.split(" "):
print word
For a more complex case where you want to use a regular expression for a splitter:
for word in re.split("\W+", text):
if word != "":
print word
The earlier example will output "Hello, my name is Pax." as:
Hello,
my
name
is
Pax.
while the latter will more correctly give you:
Hello
my
name
is
Pax
(and you can improve the regex if you have more edge cases).
| Python: How to print text entered by the user? | My code is as follows:
#!/usr/bin/python
#Filename:2_7.py
text=raw_input("Enter string:")
for ?? in range(??)
print
For example, if the user enters test text, I want test text to be printed on the screen by this code.
What should I write instead of ?? to achieve this purpose?
| [
"Maybe you want something like this?\nvar = raw_input(\"Enter something: \")\nprint \"you entered \", var\n\nOr in Python 3.x\nvar = input(\"Enter something: \")\nprint (\"you entered \", var)\n\n",
"Do you want to split the string into separate words and print each one?\ne.g.\nfor word in text.split():\n prin... | [
4,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0004021742_python.txt |
Q:
Execute functions in a list as a chain
In Python I defined these functions:
def foo_1(p): return p + 1
def foo_2(p): return p + 1
def foo_3(p): return p + 1
def foo_4(p): return p + 1
def foo_5(p): return p + 1
I want to execute these functions in a chain like this:
foo_1(foo_2(foo_3(foo_4(foo_5(1)))))
I'd like to specify these functions as a list, executing those functions in a chain with a precise sequence. This is what I've tried:
lf = [Null,foo_1,foo_2,foo_3,foo_4,foo_5] # Null is for +1 issue here
def execu(lst, seq, raw_para):
# in some way
execu(lf,(1,2,3,4,5), 1) # = foo_1(foo_2(foo_3(foo_4(foo_5(1)))))
execu(lf,(1,2,3), 1) # = foo_1(foo_2(foo_3(1)))
execu(lf,(3,3,3), 1) # = foo_3(foo_3(foo_3(1)))
A:
You can use reduce for this:
reduce(lambda x, y: y(x), list_of_functions, initial_value)
Like so:
reduce(lambda x, y: y(x), reversed([foo_1, foo_2, foo_3, ...]), 1)
Note that If you want to apply the functions in the order of foo_1(foo_2(etc...)), you have to make sure that foo_1 is the last element of the list of functions. Therefore I use reversed in the latter example.
A:
No need for "Null" in "lf".
def execu(lst, seq, raw_para):
para = raw_para
for i in reversed(seq):
para = lst[i](para)
return para
A:
def execu(lst, seq, raw_para):
return reduce(lambda x, y: y(x), reversed(operator.itemgetter(*seq)(lst)), raw_para)
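Putting both answers together into one runnable sketch (note that reduce is a builtin in Python 2 but lives in functools from Python 3 on):

```python
import operator
from functools import reduce  # builtin in Python 2, in functools since 3.0

def foo_1(p): return p + 1
def foo_2(p): return p + 1
def foo_3(p): return p + 1

lf = [None, foo_1, foo_2, foo_3]  # index 0 unused so seq numbers match names

def execu(lst, seq, raw_para):
    funcs = operator.itemgetter(*seq)(lst)  # pick functions by index
    # apply right-to-left so seq=(1, 2, 3) means foo_1(foo_2(foo_3(x)))
    return reduce(lambda x, f: f(x), reversed(funcs), raw_para)

print(execu(lf, (1, 2, 3), 1))  # foo_1(foo_2(foo_3(1))) == 4
print(execu(lf, (3, 3), 1))     # foo_3(foo_3(1)) == 3
```

One caveat: with a single-index seq, operator.itemgetter returns the bare function rather than a tuple, so wrap that case separately if you need it.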
| Execute functions in a list as a chain | In Python I defined these functions:
def foo_1(p): return p + 1
def foo_2(p): return p + 1
def foo_3(p): return p + 1
def foo_4(p): return p + 1
def foo_5(p): return p + 1
I want to execute these functions in a chain like this:
foo_1(foo_2(foo_3(foo_4(foo_5(1)))))
I'd like to specify these functions as a list, executing those functions in a chain with a precise sequence. This is what I've tried:
lf = [Null,foo_1,foo_2,foo_3,foo_4,foo_5] # Null is for +1 issue here
def execu(lst, seq, raw_para):
# in some way
execu(lf,(1,2,3,4,5), 1) # = foo_1(foo_2(foo_3(foo_4(foo_5(1)))))
execu(lf,(1,2,3), 1) # = foo_1(foo_2(foo_3(1)))
execu(lf,(3,3,3), 1) # = foo_3(foo_3(foo_3(1)))
| [
"You can use reduce for this:\nreduce(lambda x, y: y(x), list_of_functions, initial_value)\n\nLike so:\nreduce(lambda x, y: y(x), reversed([foo_1, foo_2, foo_3, ...]), 1)\n\nNote that If you want to apply the functions in the order of foo_1(foo_2(etc...)), you have to make sure that foo_1 is the last element of the... | [
8,
5,
3
] | [] | [] | [
"composition",
"function",
"python"
] | stackoverflow_0004021731_composition_function_python.txt |
Q:
How to write this exception in Python 2.6
I need to write an exception check: if a string is null, an exception should fire. How do I write this?
Eg.
str = get_str()
if get_str() returns None or nothing. It should raise an exception.
A:
There is no such thing as "nothing" in Python. There is either something, or there is not.
if str is None:
raise SomeException()
A:
returning None (which is the same as not returning anything explicitly) isn't an exception by itself. If it should be an exception, get_str() should raise this exception, and it's up to you to decide what the correct reason and therefore correct exception is. It may be ValueError, TypeError or something custom. E.g.
def get_str():
x = some_complex_computation()
if x is None:
raise ValueError("because it's wrong!!")
return x
However, often, None will be a valid return value. Either check it explicitly once you get the return value as Ignacio points out, or just use "duck typing": assume you get a string back and let python fail if it isn't. E.g.
str = get_str()
if 'foo' in str:
print "Looks okay!"
if str is None, python will fail at the 'in' expression with a TypeError exception.
| How to write this exception in Python 2.6 | I need to write an exception check: if a string is null, an exception should fire. How do I write this?
Eg.
str = get_str()
if get_str() returns None or nothing. It should raise an exception.
| [
"There is no such thing as \"nothing\" in Python. There is either something, or there is not.\nif str is None:\n raise SomeException()\n\n",
"returning None (which is the same as not returning anything explicitly) isn't an exception by itself. If it should be an exception, get_str() should raise this exception, ... | [
3,
0
] | [] | [] | [
"exception",
"python"
] | stackoverflow_0004021754_exception_python.txt |
Q:
Implement iteritems function for my custom iterator?
The objective of the following program is to learn python generators and iterator implementation in order to understand Python magic methods.
I'm stuck implementing the iteritems() function. Also, I want to know whether I'm going on right direction or I'm conceptually wrong in understanding of python magic methods.
import random
class Random(object):
""" generate randomised int of specified length --> [r1, r2, r3....] """
def __init__(self, length = 1, rand_range = 100):
self.index = 0
self.generated_range = []
self.length = length if str(length).isdigit() else 1
self.rand_range = rand_range if str(rand_range).isdigit() else 100
def __iter__(self): return self
def next(self):
if self.index < self.length:
item = random.randrange(0, self.rand_range)
self.generated_range.append(item)
self.index += 1
return item
raise StopIteration
def __eq__(self, obj):
if self.length == obj.length and self.rand_range == obj.rand_range:
return True
return False
def __len__(self):
return self.length
def __contains__(self, item):
if not self.index == self.length:
while self.next(): continue
if item in self.generated_range:
return True
return False
def __getitem__(self, key):
if key in self.generated_range:
return key * 100
raise KeyError
def __setitem__(self, key, value):
raise NotImplemented
def __delitem__(self, key):
raise NotImplemented
def iteritems(self):
if not self.index == self.length:
while self.next(): continue
for item in self.generated_range:
yield item
if __name__ == '__main__':
r1 = Random(10)
print [ r for r in r1]
print 10 in r1
r2 = Random(20)
print r2.iteritems()
CodePAD link
A:
iteritems is supposed to return a sequence of (key, val) pairs for a dictionary, so I don't think it's appropriate for your class. If you created a descendant of UserDict, then there might be a case to override it.
If you do override iteritems, you should do so by yielding key, value pairs.
Poor programmer's override:
def iteritems(self):
return iter([(x[0], x) for x in "alpha bravo charlie".split()])
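As a contrast with the Random class above, here is a minimal dict-like container (names are illustrative) whose iteritems yields (key, value) pairs; in Python 3 the method is spelled items():

```python
class ScoreBoard(object):
    """Minimal dict-like container; iteritems yields (key, value) pairs."""

    def __init__(self, data):
        self._data = dict(data)

    def iteritems(self):  # spelled items() in Python 3
        for key in self._data:
            yield key, self._data[key]

board = ScoreBoard({'alpha': 1, 'bravo': 2})
print(sorted(board.iteritems()))  # [('alpha', 1), ('bravo', 2)]
```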
| Implement iteritems function for my custom iterator? | The objective of the following program is to learn python generators and iterator implementation in order to understand Python magic methods.
I'm stuck implementing the iteritems() function. Also, I want to know whether I'm going on right direction or I'm conceptually wrong in understanding of python magic methods.
import random
class Random(object):
""" generate randomised int of specified length --> [r1, r2, r3....] """
def __init__(self, length = 1, rand_range = 100):
self.index = 0
self.generated_range = []
self.length = length if str(length).isdigit() else 1
self.rand_range = rand_range if str(rand_range).isdigit() else 100
def __iter__(self): return self
def next(self):
if self.index < self.length:
item = random.randrange(0, self.rand_range)
self.generated_range.append(item)
self.index += 1
return item
raise StopIteration
def __eq__(self, obj):
if self.length == obj.length and self.rand_range == obj.rand_range:
return True
return False
def __len__(self):
return self.length
def __contains__(self, item):
if not self.index == self.length:
while self.next(): continue
if item in self.generated_range:
return True
return False
def __getitem__(self, key):
if key in self.generated_range:
return key * 100
raise KeyError
def __setitem__(self, key, value):
raise NotImplemented
def __delitem__(self, key):
raise NotImplemented
def iteritems(self):
if not self.index == self.length:
while self.next(): continue
for item in self.generated_range:
yield item
if __name__ == '__main__':
r1 = Random(10)
print [ r for r in r1]
print 10 in r1
r2 = Random(20)
print r2.iteritems()
CodePAD link
| [
"iteritems is supposed to return a sequence of (key, val) pairs for a dictionary, so I don't think it's appropriate for your class. If you created a descendant of UserDict, then there might be a case to override it.\nIf you do override iteritems, you should do so by yielding key, value pairs.\nPoor programmer's ove... | [
4
] | [] | [] | [
"containers",
"iterator",
"python"
] | stackoverflow_0004020905_containers_iterator_python.txt |
Q:
Force Update QTableView + QSqlTableModel in PyQt
I have a QTableView which displays data from a QSqlTableModel. I want my model to check for changes when a user hits a "refresh" button but I can't find a way to update the data.
I tried the reset() and update() methods on the model without any result.
Is it possible to "re-read" from the database and update the model? How?
A:
As you don't state what you changed in your model, I'll assume the simplest form of change (changed data).
For me, model.select() works to update the data in the model and force the View to update itself.
| Force Update QTableView + QSqlTableModel in PyQt | I have a QTableView which displays data from a QSqlTableModel. I want my model to check for changes when a user hits a "refresh" button but I can't find a way to update the data.
I tried the reset() and update() methods on the model without any result.
Is it possible to "re-read" from the database and update the model? How?
| [
"As you don't state what you changed in your model, I'll assume the simplest form of change (changed data).\nFor me, model.select() works to update the data in the model and force the View to update itself.\n"
] | [
5
] | [] | [] | [
"pyqt",
"python",
"qtableview"
] | stackoverflow_0004022049_pyqt_python_qtableview.txt |
Q:
How to create a self contained Python Qt app on mac
I've recently started using a mac, and I'm curious about how to make a mac app that uses PyQt and is self-contained.
Can anyone give me any pointers on where to start and what I'll need?
A:
PyInstaller should be pretty good for that -- it's cross-platform (Mac, Windows, Linux) and offers out-of-the-box support for PyQt (among other useful third-party libraries). Now that a good release (1.4) has finally been recognized as stable, and officially released, after a somewhat long hiatus, PyInstaller is fully "back in business" and my favorite packager!-)
A:
Ars Technica did a fantastic article on this exact topic last year.
Check out page 2 of the article How-to:Deploying PyQt applications on Windows and Mac OS X
Quick Summary: It is possible, but time consuming, results in large app bundles, and their are some strange quirks.
This post was written in March of 2009, so the situation might be different.
A:
I've tried the same for some weeks now. Finally i have to say py2app just wont do. I was lucky with pyinstaller1.4. Although you need to add some minor modifications to run flawlessly on OS X. Furthermore the apps it creates are only 1/4 of the size compared to py2app. And most important it works :) And yet another goodie ... it works with the python framework which ships with OS X so there is no need to install python via MacPorts etc.
| How to create a self contained Python Qt app on mac | I've recently started using a mac, and I'm curious about how to make a mac app that uses PyQt and is self-contained.
Can anyone give me any pointers on where to start and what I'll need?
| [
"PyInstaller should be pretty good for that -- it's cross-platform (Mac, Windows, Linux) and offers out-of-the-box support for PyQt (among other useful third-party libraries). Now that a good release (1.4) has finally been recognized as stable, and officially released, after a somewhat long hiatus, PyInstaller is ... | [
3,
2,
1
] | [] | [] | [
"macos",
"python",
"qt"
] | stackoverflow_0003173047_macos_python_qt.txt |
Q:
pygtk - update a gtk.liststore adds blank line
I'm having an issue with adding extra lines to gtk.liststore. Below is a snippet of the code. The glade definition can be found here. When I add a row, the row separators appear, but where the text should appear it stays empty. Please help; I define everything with the Glade interface designer.
class GTKInterface(gobject.GObject):
__gsignals__ = {
}
def __init__(self):
gobject.GObject.__init__(self)
self.gladefile = 'glade/mainwindow.glade'
self.builder = gtk.Builder()
self.builder.add_from_file(self.gladefile)
self.treeview = self.builder.get_object('treeview1')
self.liststore = self.builder.get_object('liststore')
def add_row(self, episodes):
for ep in episodes:
self.liststore.append(["test","test1", "test2"])
What did I miss or how can this be solved?
Kind Regards,
A:
I have found what I had forgotten: cell renderers! I read about it here.
| pygtk - update a gtk.liststore adds blank line | I'm having an issue with adding extra lines to gtk.liststore. Below is a snippet of the code. The glade definition can be found here. When I add a row, the row separators appear, but where the text should appear it stays empty. Please help; I define everything with the Glade interface designer.
class GTKInterface(gobject.GObject):
__gsignals__ = {
}
def __init__(self):
gobject.GObject.__init__(self)
self.gladefile = 'glade/mainwindow.glade'
self.builder = gtk.Builder()
self.builder.add_from_file(self.gladefile)
self.treeview = self.builder.get_object('treeview1')
self.liststore = self.builder.get_object('liststore')
def add_row(self, episodes):
for ep in episodes:
self.liststore.append(["test","test1", "test2"])
What did I miss or how can this be solved?
Kind Regards,
| [
"I have found what I had forgotten: cell renderers! I read about it here.\n"
] | [
0
] | [] | [] | [
"pygtk",
"python"
] | stackoverflow_0004022275_pygtk_python.txt |
Q:
Validating ModelChoiceField in Django forms
I'm trying to validate a form containing a ModelChoiceField:
forms.py:
from django import forms
from modelchoicetest.models import SomeObject
class SomeObjectAddForm(forms.ModelForm):
class Meta:
model = SomeObject
models.py:
from django.db import models
class SomeChoice(models.Model):
name = models.CharField(max_length=16)
def __unicode__(self):
return self.name
class SomeObject(models.Model):
choice = models.ForeignKey(SomeChoice)
views.py:
from django.shortcuts import render_to_response
from django.template import RequestContext
from django.http import HttpResponseRedirect
from django.core.urlresolvers import reverse
from forms import SomeObjectAddForm
def add(request):
if request.method == 'POST':
form = SomeObjectAddForm(request.POST)
if form.is_valid():
form.save()
return HttpResponseRedirect(reverse('modelchoicetest_add'))
else:
form = SomeObjectAddForm()
return render_to_response('modelchoicetest/index.html',
{'form': form},
context_instance=RequestContext(request))
When it is used in normal circumstances, everything goes just fine. But I'd like to protect the form from invalid input. It's pretty obvious that I should get forms.ValidationError when I put an invalid value in this field, isn't it? But if I try to submit a form with the value 'invalid' in the 'somechoice' field, I get
ValueError: invalid literal for int() with base 10: 'invalid'
and not the expected forms.ValidationError. What should I do? I tried to place a def clean_somechoice(self) to check this field but that didn't work: ValueError happens before it comes to clean_somechoice()
Plus I don't think this is a good solution, there must be something more simple but I just missed that.
here's the full traceback:
Traceback:
File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py" in get_response
101. response = callback(request, *callback_args, **callback_kwargs)
File "/home/andrey/public_html/example/modelchoicetest/views.py" in add
11. if form.is_valid():
File "/usr/local/lib/python2.6/dist-packages/django/forms/forms.py" in is_valid
120. return self.is_bound and not bool(self.errors)
File "/usr/local/lib/python2.6/dist-packages/django/forms/forms.py" in _get_errors
111. self.full_clean()
File "/usr/local/lib/python2.6/dist-packages/django/forms/forms.py" in full_clean
276. value = field.clean(value)
File "/usr/local/lib/python2.6/dist-packages/django/forms/fields.py" in clean
154. value = self.to_python(value)
File "/usr/local/lib/python2.6/dist-packages/django/forms/models.py" in to_python
911. value = self.queryset.get(**{key: value})
File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py" in get
330. clone = self.filter(*args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py" in filter
536. return self._filter_or_exclude(False, *args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py" in _filter_or_exclude
554. clone.query.add_q(Q(*args, **kwargs))
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py" in add_q
1109. can_reuse=used_aliases)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py" in add_filter
1048. connector)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/where.py" in add
66. value = obj.prepare(lookup_type, value)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/where.py" in prepare
267. return self.field.get_prep_lookup(lookup_type, value)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/fields/__init__.py" in get_prep_lookup
314. return self.get_prep_value(value)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/fields/__init__.py" in get_prep_value
496. return int(value)
Exception Type: ValueError at /
Exception Value: invalid literal for int() with base 10: 'invalid'
A:
It looks to me like the exception is being raised by the clean method of the actual ModelChoiceField object. Because it's a foreign key, Django is expecting an int, which would be representative of the pk for SomeChoice. How exactly are you passing invalid into the form?
RESPONSE TO COMMENT
If you really feel you need to catch this, you can try overriding the default ModelChoiceField by creating a new field called choice and pass in the to_field_name kwarg into the ModelChoiceField __init__ method. This way Django won't be filtering on pk, and won't raise that exception.
Personally I wouldn't use this solution. There is no need to accommodate users who are hacking your form.
A:
This is known Django bug:
http://code.djangoproject.com/ticket/11716
And while this bug is not fixed, you can only handle ValueError manually.
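The manual handling boils down to wrapping the is_valid() call in try/except ValueError. A sketch independent of Django, where FakeForm is a made-up stand-in whose is_valid() coerces the submitted pk to int just as ModelChoiceField does:

```python
class FakeForm(object):
    """Made-up stand-in for the Django form in the question."""

    def __init__(self, data):
        self.data = data
        self.errors = {}

    def is_valid(self):
        int(self.data['choice'])  # mimics the pk lookup that raises ValueError
        return True

def safe_is_valid(form):
    try:
        return form.is_valid()
    except ValueError:  # manual handling while the bug is open
        form.errors['choice'] = 'Select a valid choice.'
        return False

print(safe_is_valid(FakeForm({'choice': '3'})))        # True
print(safe_is_valid(FakeForm({'choice': 'invalid'})))  # False
```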
| Validating ModelChoiceField in Django forms | I'm trying to validate a form containing a ModelChoiceField:
forms.py:
from django import forms
from modelchoicetest.models import SomeObject
class SomeObjectAddForm(forms.ModelForm):
class Meta:
model = SomeObject
models.py:
from django.db import models
class SomeChoice(models.Model):
name = models.CharField(max_length=16)
def __unicode__(self):
return self.name
class SomeObject(models.Model):
choice = models.ForeignKey(SomeChoice)
views.py:
from django.shortcuts import render_to_response
from django.template import RequestContext
from django.http import HttpResponseRedirect
from django.core.urlresolvers import reverse
from forms import SomeObjectAddForm
def add(request):
if request.method == 'POST':
form = SomeObjectAddForm(request.POST)
if form.is_valid():
form.save()
return HttpResponseRedirect(reverse('modelchoicetest_add'))
else:
form = SomeObjectAddForm()
return render_to_response('modelchoicetest/index.html',
{'form': form},
context_instance=RequestContext(request))
When it is used in normal circumstances, everything goes just fine. But I'd like to protect the form from invalid input. It's pretty obvious that I should get forms.ValidationError when I put an invalid value in this field, isn't it? But if I try to submit a form with the value 'invalid' in the 'somechoice' field, I get
ValueError: invalid literal for int() with base 10: 'invalid'
and not the expected forms.ValidationError. What should I do? I tried to place a def clean_somechoice(self) to check this field but that didn't work: ValueError happens before it comes to clean_somechoice()
Plus I don't think this is a good solution, there must be something more simple but I just missed that.
here's the full traceback:
Traceback:
File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py" in get_response
101. response = callback(request, *callback_args, **callback_kwargs)
File "/home/andrey/public_html/example/modelchoicetest/views.py" in add
11. if form.is_valid():
File "/usr/local/lib/python2.6/dist-packages/django/forms/forms.py" in is_valid
120. return self.is_bound and not bool(self.errors)
File "/usr/local/lib/python2.6/dist-packages/django/forms/forms.py" in _get_errors
111. self.full_clean()
File "/usr/local/lib/python2.6/dist-packages/django/forms/forms.py" in full_clean
276. value = field.clean(value)
File "/usr/local/lib/python2.6/dist-packages/django/forms/fields.py" in clean
154. value = self.to_python(value)
File "/usr/local/lib/python2.6/dist-packages/django/forms/models.py" in to_python
911. value = self.queryset.get(**{key: value})
File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py" in get
330. clone = self.filter(*args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py" in filter
536. return self._filter_or_exclude(False, *args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py" in _filter_or_exclude
554. clone.query.add_q(Q(*args, **kwargs))
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py" in add_q
1109. can_reuse=used_aliases)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py" in add_filter
1048. connector)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/where.py" in add
66. value = obj.prepare(lookup_type, value)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/where.py" in prepare
267. return self.field.get_prep_lookup(lookup_type, value)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/fields/__init__.py" in get_prep_lookup
314. return self.get_prep_value(value)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/fields/__init__.py" in get_prep_value
496. return int(value)
Exception Type: ValueError at /
Exception Value: invalid literal for int() with base 10: 'invalid'
| [
"It looks to me like the exception is being raised by the clean method of the actual ModelChoiceField object. Because it's a foreign key, Django is expecting an int, which would be representative of the pk for SomeChoice. How exactly are you passing invalid into the form?\nRESPONSE TO COMMENT\nIf you really feel yo... | [
1,
0
] | [] | [] | [
"django",
"forms",
"python",
"validation"
] | stackoverflow_0002400548_django_forms_python_validation.txt |
Q:
efficient and complete input check for command line argument and option with python
I am developing a CLI with Python 2.4.3. I want to have input exception checking. The following is part of the code. With this code, I can type
addcd -t 11
and if I type
addcd -t str_not_int
or
addcd -s 3
I will catch the error of wrong type argument and wrong option. However, it is not sufficient. e.g.
addcd s 11
or
addcd s a
then optparse cannot detect this kind of bad input.
To eliminate a case like "addcd s a 11 21", I added a check on the number of arguments, but I do not know if it is the right way.
So, how can I implement a thorough/efficient input check for CLI?
class OptionParsingError(RuntimeError):
def __init__(self, msg):
self.msg = msg
class OptionParsingExit(Exception):
def __init__(self, status, msg):
self.msg = msg
self.status = status
class ModifiedOptionParser(optparse.OptionParser):
def error(self, msg):
raise OptionParsingError(msg)
def exit(self, status=0, msg=None):
raise OptionParsingExit(status, msg)
class CDContainerCLI(cmd.Cmd):
"""Simple CLI """
def __init__(self):
""" initialization """
cmd.Cmd.__init__(self)
self.cdcontainer=None
def addcd(self, s):
args=s.split()
try:
parser = ModifiedOptionParser()
parser.add_option("-t", "--track", dest="track_number", type="int",
help="track number")
(options, positional_args) = parser.parse_args(args)
except OptionParsingError, e:
print 'There was a parsing error: %s' % e.msg
return
except OptionParsingExit, e:
print 'The option parser exited with message %s and result code %s' % (e.msg, e.status)
return
if len(args) != 4:
print "wrong number of inputs"
return
cd_obj= CD()
cd_obj.addCD(options.track_number, options.cd_name)
A:
First, read this http://docs.python.org/library/optparse.html#terminology.
You're not following the optparse rules for options. If you don't follow the rules, you cannot use this library.
Specifically.
addcd s 11
or
addcd s a
Are not legal options. No "-". No "--". Why no "-"?
"then it cannot detect."
Cannot detect what? Cannot detect that they're present? Of course not. They're not legal options.
Cannot detect that the option parameters are the right type? Of course not. They're not legal options to begin with.
eliminate the case of "addcd s a 11 21"
Since none of that looks like legal options, then optparse can't help you. Those aren't "options".
Those are "arguments". They will be found in the variable positional_args.
However, you're not using that variable. Why not?
Indeed, this code
return
if len(args) != 4:
print "wrong number of inputs"
return
Makes no sense at all. After return no statement can be executed. Why write code after a return?
Why check the len of args after parsing args? It's already parsed, it's too late to care how long it was.
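Following the rules, anything with a leading "-" is an option and everything else lands in the positional-args list, which is where input like "addcd s a 11 21" should be validated:

```python
from optparse import OptionParser

parser = OptionParser()
parser.add_option("-t", "--track", dest="track_number", type="int")

# "-t 11" is an option; "s" and "a" are plain positional arguments
options, positional = parser.parse_args(["-t", "11", "s", "a"])

print(options.track_number)  # 11
print(positional)            # ['s', 'a'] -- validate these yourself
```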
| efficient and complete input check for command line argument and option with python | I am developing a CLI with Python 2.4.3. I want to have input exception checking. The following is part of the code. With this code, I can type
addcd -t 11
and if I type
addcd -t str_not_int
or
addcd -s 3
I will catch the error of wrong type argument and wrong option. However, it is not sufficient. e.g.
addcd s 11
or
addcd s a
then optparse cannot detect this kind of bad input.
To eliminate a case like "addcd s a 11 21", I added a check on the number of arguments, but I do not know if it is the right way.
So, how can I implement a thorough/efficient input check for CLI?
class OptionParsingError(RuntimeError):
def __init__(self, msg):
self.msg = msg
class OptionParsingExit(Exception):
def __init__(self, status, msg):
self.msg = msg
self.status = status
class ModifiedOptionParser(optparse.OptionParser):
def error(self, msg):
raise OptionParsingError(msg)
def exit(self, status=0, msg=None):
raise OptionParsingExit(status, msg)
class CDContainerCLI(cmd.Cmd):
"""Simple CLI """
def __init__(self):
""" initialization """
cmd.Cmd.__init__(self)
self.cdcontainer=None
def addcd(self, s):
args=s.split()
try:
parser = ModifiedOptionParser()
parser.add_option("-t", "--track", dest="track_number", type="int",
help="track number")
(options, positional_args) = parser.parse_args(args)
except OptionParsingError, e:
print 'There was a parsing error: %s' % e.msg
return
except OptionParsingExit, e:
print 'The option parser exited with message %s and result code %s' % (e.msg, e.status)
return
if len(args) != 4:
print "wrong number of inputs"
return
cd_obj= CD()
cd_obj.addCD(options.track_number, options.cd_name)
| [
"First, read this http://docs.python.org/library/optparse.html#terminology.\nYou're not following the optparse rules for options. If you don't follow the rules, you cannot use this library.\nSpecifically.\naddcd s 11\n\nor\naddcd s a \n\nAre not legal options. No \"-\". No \"--\". Why no \"-\"?\n\"then it canno... | [
2
] | [] | [] | [
"command_line_arguments",
"python"
] | stackoverflow_0004022485_command_line_arguments_python.txt |
Q:
Context locals - how do they make local context variables global?
I was reading through the Flask doc - and came across this:
... For web applications it’s crucial to
react to the data a client sent to the
server. In Flask this information is
provided by the global request object.
If you have some experience with
Python you might be wondering how that
object can be global and how Flask
manages to still be threadsafe. The
answer are context locals ...
Now I understood context locals to be stuff like the with statement (certainly that's what the Python 2.6 doc seems to suggest). I'm struggling to see how this would allow you to have globally accessible vars that reside in a local namespace. How does this conceptually work?
Also: globals are generally considered filthy, I take it, so why is this OK?
A:
They are actually proxy objects to the real objects so that when you reference one you get access to the object for your current thread.
An example would be the request object. You can see this being set up in globals.py and then imported into the __init__.py for flask.
The benefit of this is that you can access the request just by doing
from flask import request
and write methods like
@app.route('/')
def hello_world():
return "Hello World!"
without having to pass the request around as a parameter.
This is making use of some of the reusable code libraries from Werkzeug.
| Context locals - how do they make local context variables global? | I was reading through the Flask doc - and came across this:
... For web applications it’s crucial to
react to the data a client sent to the
server. In Flask this information is
provided by the global request object.
If you have some experience with
Python you might be wondering how that
object can be global and how Flask
manages to still be threadsafe. The
answer are context locals ...
Now I understood context locals to be stuff like the with statement (certainly thats what the python 2.6 doc seems to suggest). Im struggling to see how this would allow you to have globally accessible vars that reside in a local namespace? How does this conceptually work?
Also: globals are generally considered filthy I take it, so why is this OK ?
| [
"They are actually proxy objects to the real objects so that when you reference one you get access to the object for your current thread.\nAn example would be the request object. You can see this being set up in globlals.py and then imported into the __init__.py for flask.\nThe benefit of this is that you can acces... | [
5
] | [] | [] | [
"python"
] | stackoverflow_0004022537_python.txt |
Q:
Testing call order across mock objects with Mox and Python
I'm testing a function that obtains a skeleton object from one helper object, modifies it using a second helper, and passes the modified object back to the first helper. Something along the lines of:
class ReadModifyUpdate(object):
def __init__(self, store, modifier):
self._store = store
self._modifier = modifier
def modify(key):
record = self._store.read(key)
self._modifier.modify(record)
self._store.update(key, record)
Using Python and Mox, we can test this with:
class ReadModifyUpdateTest(mox.MoxTestBase):
def test_modify(self):
        mock_record = self.mox.CreateMockAnything()
mock_store = self.mox.CreateMockAnything()
mock_modifier = self.mox.CreateMockAnything()
mock_store.read("test_key").AndReturn(mock_record)
mock_modifier.modify(mock_record)
mock_store.update("test_key", mock_record)
self.mox.ReplayAll()
updater = ReadModifyUpdate(mock_store, mock_modifier)
updater.modify("test_key")
...but this doesn't catch the bug in which store.update() is inadvertently called before modifier.modify(). Is there a good way, in Mox, to check the order of methods called on multiple mocks? Something like EasyMock's MocksControl object?
A:
Maybe not the best solution but you could try to use one mock that you give twice to your object under test. You then have control over the call order.
class ReadModifyUpdateTest(mox.MoxTestBase):
def test_modify(self):
        mock_record = self.mox.CreateMockAnything()
mock_storeModifier = self.mox.CreateMockAnything()
mock_storeModifier.read("test_key").AndReturn(mock_record)
mock_storeModifier.modify(mock_record)
mock_storeModifier.update("test_key", mock_record)
self.mox.ReplayAll()
updater = ReadModifyUpdate(mock_storeModifier, mock_storeModifier)
updater.modify("test_key")
A:
To provide an answer to my own question - I've currently got this working using a side effect which checks the call order.
Defining a helper class:
class OrderedCallSequence(object):
def __init__(self, test_case):
self._expectation_count = 0
self._evaluated = 0
self._test_case = test_case
def assertOrder(self):
self._expectation_count += 1
expected_position = self._expectation_count
def side_effect(*args, **kwargs):
self._evaluated += 1
self._test_case.assertEquals(self._evaluated, expected_position,
msg="Invoked in incorrect sequence")
return side_effect
...the test case becomes:
class ReadModifyUpdateTest(mox.MoxTestBase):
def test_modify(self):
        mock_record = self.mox.CreateMockAnything()
mock_store = self.mox.CreateMockAnything()
mock_modifier = self.mox.CreateMockAnything()
sequence = OrderedCallSequence(self)
mock_store.read("test_key").WithSideEffects(sequence.assertOrder()).AndReturn(mock_record)
mock_modifier.modify(mock_record).WithSideEffects(sequence.assertOrder())
mock_store.update("test_key", mock_record).WithSideEffects(sequence.assertOrder())
self.mox.ReplayAll()
updater = ReadModifyUpdate(mock_store, mock_modifier)
updater.modify("test_key")
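For readers on later Python versions: the standard library's unittest.mock covers this case directly. Attaching both mocks to a common parent makes the parent record the interleaved call order, so no side-effect bookkeeping is needed. A sketch of the equivalent check (this is the unittest.mock API, not Mox):

```python
from unittest import mock

store = mock.Mock()
modifier = mock.Mock()
parent = mock.Mock()
# Attached children report their calls to the parent, in order.
parent.attach_mock(store, "store")
parent.attach_mock(modifier, "modifier")

# Exercise the same read -> modify -> update sequence.
record = store.read("test_key")
modifier.modify(record)
store.update("test_key", record)

# parent.mock_calls preserves ordering across both mocks, so a
# premature update() would make this comparison fail.
expected = [
    mock.call.store.read("test_key"),
    mock.call.modifier.modify(record),
    mock.call.store.update("test_key", record),
]
assert parent.mock_calls == expected
```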
| Testing call order across mock objects with Mox and Python | I'm testing a function that obtains a skeleton object from one helper object, modifies it using a second helper, and passes the modified object back to the first helper. Something along the lines of:
class ReadModifyUpdate(object):
def __init__(self, store, modifier):
self._store = store
self._modifier = modifier
def modify(key):
record = self._store.read(key)
self._modifier.modify(record)
self._store.update(key, record)
Using Python and Mox, we can test this with:
class ReadModifyUpdateTest(mox.MoxTestBase):
def test_modify(self):
        mock_record = self.mox.CreateMockAnything()
mock_store = self.mox.CreateMockAnything()
mock_modifier = self.mox.CreateMockAnything()
mock_store.read("test_key").AndReturn(mock_record)
mock_modifier.modify(mock_record)
mock_store.update("test_key", mock_record)
self.mox.ReplayAll()
updater = ReadModifyUpdate(mock_store, mock_modifier)
updater.modify("test_key")
...but this doesn't catch the bug in which store.update() is inadvertently called before modifier.modify(). Is there a good way, in Mox, to check the order of methods called on multiple mocks? Something like EasyMock's MocksControl object?
| [
"Maybe not the best solution but you could try to use one mock that you give twice to your object under test. You then have control over the call order.\nclass ReadModifyUpdateTest(mox.MoxTestBase):\n def test_modify(self):\n mock_record = self.mox.CreateMockAnthing()\n mock_storeModifier = self.mo... | [
0,
0
] | [] | [] | [
"mocking",
"mox",
"python",
"testing"
] | stackoverflow_0004016798_mocking_mox_python_testing.txt |
Q:
Wondering whether I should just bail on using properties in python
I have been trying to use properties instead of specific setters and getters in my app. They seem more pythonic and generally make my code more readable.
More readable except for one issue: Typos.
consider the following simple example (note, my properties actually do some processing even though the examples here just set or return a simple variable)
class GotNoClass(object):
def __init__(self):
object.__init__(self)
self.__a = None
def __set_a(self, a):
self.__a = a
def __get_a(self):
return self.__a
paramName = property(__get_a, __set_a)
if __name__ == "__main__":
classy = GotNoClass()
classy.paramName = 100
print classy.paramName
classy.paranName = 200
print classy.paramName
#oops! Typo above! as seen by this line:
print classy.paranName
The output, as anyone who reads a little closely will see, is:
100
100
200
Oops. Shouldn't have been except for the fact that I made a typo - I wrote paranName (two n's) instead of paramName.
This is easy to debug in this simple example, but it has been hurting me in my larger project. Since python happily creates a new variable when I accidentally meant to use a property, I get subtle errors in my code. Errors that I am finding hard to track down at times. Even worse, I once used the same typo twice (once as I was setting and later once as I was getting) so my code appeared to be working but much later, when a different branch of code finally tried to access this property (correctly) I got the wrong value - but it took me several days before I realized that my results were just a bit off.
Now that I know that this is an issue, I am spending more time closely reading my code, but ideally I would have a way to catch this situation automatically - if I miss just one I can introduce an error that does not show up until a fair bit of time has passed...
So I am wondering, should I just switch to using good old setters and getters? Or is there some neat way to avoid this situation? Do people just rely on themselves to catch these errors manually? Alas I am not a professional programmer, just someone trying to get some stuff done here at work and I don't really know the best way to approach this.
Thanks.
P.S.
I understand that this is also one of the benefits of Python and I am not complaining about that. Just wondering whether I would be better off using explicit setters and getters.
A:
Have you tried a static analysis tool? Here is a great thread about them.
A:
Depending on how your code works, you could try using slots. You'll get an AttributeError exception thrown when you try to assign something that's not in slots then, which will make such typo's more obvious.
A:
There are times when compile-time checking really saves time. You seem to have identified one such case. By accident rather than careful choice I use getters and setters, and am happy ;-)
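The __slots__ suggestion in practice, as a minimal sketch (the property machinery from the question is left out to keep the point visible): once the class declares its attribute names, the typo from the question raises immediately instead of silently creating a new attribute.

```python
class GotClass(object):
    __slots__ = ("paramName",)  # only this attribute may be assigned

classy = GotClass()
classy.paramName = 100

try:
    classy.paranName = 200  # the typo from the question
except AttributeError:
    caught = True   # the mistake now fails at the assignment site
else:
    caught = False
```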
| Wondering whether I should just bail on using properties in python | I have been trying to use properties instead of specific setters and getters in my app. They seem more pythonic and generally make my code more readable.
More readable except for one issue: Typos.
consider the following simple example (note, my properties actually do some processing even though the examples here just set or return a simple variable)
class GotNoClass(object):
def __init__(self):
object.__init__(self)
self.__a = None
def __set_a(self, a):
self.__a = a
def __get_a(self):
return self.__a
paramName = property(__get_a, __set_a)
if __name__ == "__main__":
classy = GotNoClass()
classy.paramName = 100
print classy.paramName
classy.paranName = 200
print classy.paramName
#oops! Typo above! as seen by this line:
print classy.paranName
The output, as anyone who reads a little closely will see, is:
100
100
200
Oops. Shouldn't have been except for the fact that I made a typo - I wrote paranName (two n's) instead of paramName.
This is easy to debug in this simple example, but it has been hurting me in my larger project. Since python happily creates a new variable when I accidentally meant to use a property, I get subtle errors in my code. Errors that I am finding hard to track down at times. Even worse, I once used the same typo twice (once as I was setting and later once as I was getting) so my code appeared to be working but much later, when a different branch of code finally tried to access this property (correctly) I got the wrong value - but it took me several days before I realized that my results were just a bit off.
Now that I know that this is an issue, I am spending more time closely reading my code, but ideally I would have a way to catch this situation automatically - if I miss just one I can introduce an error that does not show up until a fair bit of time has passed...
So I am wondering, should I just switch to using good old setters and getters? Or is there some neat way to avoid this situation? Do people just rely on themselves to catch these errors manually? Alas I am not a professional programmer, just someone trying to get some stuff done here at work and I don't really know the best way to approach this.
Thanks.
P.S.
I understand that this is also one of the benefits of Python and I am not complaining about that. Just wondering whether I would be better off using explicit setters and getters.
| [
"Have you tried a static analysis tool? Here is a great thread about them.\n",
"Depending on how your code works, you could try using slots. You'll get an AttributeError exception thrown when you try to assign something that's not in slots then, which will make such typo's more obvious.\n",
"There are times whe... | [
3,
2,
0
] | [] | [] | [
"getter",
"properties",
"python",
"setter"
] | stackoverflow_0004021862_getter_properties_python_setter.txt |
Q:
Transaction collision for sequential insert on Google App Engine. Why?
I am inserting a set of records on Google App Engine. I insert them in batch to avoid deadline exceptions.
When there is a large number of records (for example 1k) I always receive an unexpected:
Transaction collision for entity group
with key
datastore_types.Key.from_path(u'GroupModel',
u'root', _app=u'streamtomail').
Retrying...
This situation always happens.
In the local environment, by contrast, it works without any problem.
How is it possible to have transaction collisions if I am using a sequential process and no one else is using the system in the meantime?
Here is the code that I use for batching:
def deferred_worker():
if next_chunk():
process_chunk()
deferred.defer(deferred_worker)
where in *process_chunk()* I do 50 inserts in the database
A:
The collision is on an instance of your 'GroupModel' entity with the key name 'root'. Based on that, I'm guessing you're putting everything inside a single entity group with that as the parent. As documented here, every entity with the same parent is in the same entity group, to which transactions are serialized. Thus, any concurrent updates to any entity in that group will potentially conflict with any other.
| Transaction collision for sequential insert on Google App Engine. Why? | I am inserting a set of records on Google App Engine. I insert them in batch to avoid deadline exceptions.
When there is a large number of records (for example 1k) I always receive an unexpected:
Transaction collision for entity group
with key
datastore_types.Key.from_path(u'GroupModel',
u'root', _app=u'streamtomail').
Retrying...
This situation always happens.
In the local environment, by contrast, it works without any problem.
How is it possible to have transaction collisions if I am using a sequential process and no one else is using the system in the meantime?
Here is the code that I use for batching:
def deferred_worker():
if next_chunk():
process_chunk()
deferred.defer(deferred_worker)
where in *process_chunk()* I do 50 inserts in the database
| [
"The collision is on an instance of your 'GroupModel' entity with the key name 'root'. Based on that, I'm guessing you're putting everything inside a single entity group with that as the parent. As documented here, every entity with the same parent is in the same entity group, to which transactions are serialized. ... | [
2
] | [] | [] | [
"google_app_engine",
"google_cloud_datastore",
"python"
] | stackoverflow_0004007650_google_app_engine_google_cloud_datastore_python.txt |
Q:
Python: Existing environment variables which need to be added again to execute
I'm trying to execute a command which runs a program that uses perl and python. Although both of them are already in PATH, I get this error: 'perl' is not recognized as an internal or external command, operable program or batch file.
'python' is not recognized as an internal or external command, operable program or batch file. So I tried os.putenv('PATH', dir), but only one of the two was taken in.
A:
So try
os.putenv('PATH', dir + ";" + otherdir)
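One caveat worth adding: os.putenv changes the environment seen by child processes but does not update os.environ, so assigning through os.environ (joined with os.pathsep) is usually the more reliable route. The directories below are placeholders for wherever Perl and Python are actually installed:

```python
import os

perl_dir = r"C:\Perl\bin"      # hypothetical install locations
python_dir = r"C:\Python24"

# Assigning to os.environ both updates the mapping and calls
# putenv() under the hood, so children inherit the new PATH.
os.environ["PATH"] = os.pathsep.join(
    [perl_dir, python_dir, os.environ.get("PATH", "")]
)
```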
| Python: Existing environment variables which need to be added again to execute | I'm trying to execute a command which runs a program that uses perl and python. Although both of them are already in PATH, I get this error: 'perl' is not recognized as an internal or external command, operable program or batch file.
'python' is not recognized as an internal or external command, operable program or batch file. So I tried os.putenv('PATH', dir), but only one of the two was taken in.
| [
"So try\nos.putenv('PATH', dir + \";\" + otherdir)\n\n"
] | [
0
] | [] | [] | [
"environment_variables",
"python",
"windows"
] | stackoverflow_0004022414_environment_variables_python_windows.txt |
Q:
How does numpy zeros implement the parameter shape?
I want to implement a similar function, and want to accept an array or number that I pass to numpy.ones.
Specifically, I want to do this:
def halfs(shape):
shape = numpy.concatenate([2], shape)
return 0.5 * numpy.ones(shape)
Example input-output pairs:
# default
In [5]: beta_jeffreys()
Out[5]: array([-0.5, -0.5])
# scalar
In [5]: beta_jeffreys(3)
Out[3]:
array([[-0.5, -0.5, -0.5],
[-0.5, -0.5, -0.5]])
# vector (1)
In [3]: beta_jeffreys((3,))
Out[3]:
array([[-0.5, -0.5, -0.5],
[-0.5, -0.5, -0.5]])
# vector (2)
In [7]: beta_jeffreys((2,3))
Out[7]:
array([[[-0.5, -0.5, -0.5],
[-0.5, -0.5, -0.5]],
[[-0.5, -0.5, -0.5],
[-0.5, -0.5, -0.5]]])
A:
def halfs(shape=()):
if isinstance(shape, tuple):
return 0.5 * numpy.ones((2,) + shape)
else:
return 0.5 * numpy.ones((2, shape))
a = numpy.arange(5)
# array([0, 1, 2, 3, 4])
halfs(a.shape)
#array([[ 0.5, 0.5, 0.5, 0.5, 0.5],
# [ 0.5, 0.5, 0.5, 0.5, 0.5]])
halfs(3)
#array([[ 0.5, 0.5, 0.5],
# [ 0.5, 0.5, 0.5]])
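The same normalization can be written once, mirroring how numpy.ones itself accepts either an int or a sequence. A sketch (note it returns positive halves, like the halfs name in the question, not the -0.5 values shown in the beta_jeffreys examples):

```python
import numpy

def halfs(shape=()):
    # Accept either a bare int or any sequence, like numpy.ones does.
    try:
        shape = tuple(shape)
    except TypeError:        # a scalar such as 3
        shape = (shape,)
    # Prepend the fixed leading dimension of 2 by tuple concatenation.
    return 0.5 * numpy.ones((2,) + shape)
```

So `halfs()` has shape (2,), `halfs(3)` has shape (2, 3), and `halfs((2, 3))` has shape (2, 2, 3).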
| How does numpy zeros implement the parameter shape? | I want to implement a similar function, and want to accept an array or number that I pass to numpy.ones.
Specifically, I want to do this:
def halfs(shape):
shape = numpy.concatenate([2], shape)
return 0.5 * numpy.ones(shape)
Example input-output pairs:
# default
In [5]: beta_jeffreys()
Out[5]: array([-0.5, -0.5])
# scalar
In [5]: beta_jeffreys(3)
Out[3]:
array([[-0.5, -0.5, -0.5],
[-0.5, -0.5, -0.5]])
# vector (1)
In [3]: beta_jeffreys((3,))
Out[3]:
array([[-0.5, -0.5, -0.5],
[-0.5, -0.5, -0.5]])
# vector (2)
In [7]: beta_jeffreys((2,3))
Out[7]:
array([[[-0.5, -0.5, -0.5],
[-0.5, -0.5, -0.5]],
[[-0.5, -0.5, -0.5],
[-0.5, -0.5, -0.5]]])
| [
"def halfs(shape=()):\n if isinstance(shape, tuple):\n return 0.5 * numpy.ones((2,) + shape)\n else:\n return 0.5 * numpy.ones((2, shape))\n\n\n\na = numpy.arange(5)\n# array([0, 1, 2, 3, 4])\n\n\nhalfs(a.shape)\n#array([[ 0.5, 0.5, 0.5, 0.5, 0.5],\n# [ 0.5, 0.5, 0.5, 0.5, 0.5]])\n... | [
1
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0004023574_numpy_python.txt |
Q:
XML-RPC for an object broker
is there any good reason not to use XML-RPC for an object-broker server/client architecture? Maybe something like "no it's already outfashioned, there is X for that now".
To give you more details: I want to build a framework which allows for standardized interaction and the exchange of results between many little tools (e. g. command-line tools). In case someone wants to integrate another tool she writes a wrapper for that purpose. The wrapper could, e. g., convert the STDOUT of a tool into objects usable by the architecture.
Currently I'm thinking of writing the proof-of-concept server in Python. Later it could be rewritten in C/C++. Just to make sure clients can be written in as many languages as possible I thought of using XML-RPC. CORBA seems to be too bloated for that purpose, since the server shouldn't be too complex.
Thanks for your advice and opinions,
Rainer
A:
XML-RPC has a lot going for it. It's simple to create and to consume, easy to understand and easy to code for.
I'd say avoid SOAP and CORBA like the plague. They are way too complex, and with SOAP you have endless problems because only implementations from single vendors tend to interact nicely - probably because the complexity of the standard leads to varying interpretations.
You may want to consider a RESTful architecture. REST and XML-RPC cannot be directly compared. XML-RPC is a specific implementation of RPC, and REST is an architectural style. REST does not mandate anything much - it's more a style of approach with a bunch of conventions and suggestions. REST can look a lot like XML-RPC, but it doesn't have to.
Have a look at http://en.wikipedia.org/wiki/Representational_State_Transfer and some of the externally linked articles.
One of the goals of REST is that by creating a stateless interface over HTTP, you allow the use of standard caching mechanisms and load balancing mechanisms without having to invent new ways of doing what has already been well solved by HTTP.
Having read about REST, which hopefully is an interesting read, you may decide that for your project XML-RPC is still the best solution, which would be a perfectly reasonable conclusion depending on what exactly you are trying to achieve.
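To illustrate the "simple to create and to consume" point, here is a complete XML-RPC round trip using only the standard library (Python 3 module names; the original question predates them — in Python 2 these were SimpleXMLRPCServer and xmlrpclib):

```python
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

# Server side: expose one function on an ephemeral localhost port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.handle_request).start()  # serve one call

# Client side: invoke it as if it were a local method.
client = ServerProxy("http://127.0.0.1:%d" % port)
result = client.add(2, 3)  # one HTTP POST, transparently marshalled
server.server_close()
```

The entire wire format, marshalling, and dispatch are handled by the library, which is the kind of low ceremony the answer has in mind for a small object-broker server.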
| XML-RPC for an object broker | is there any good reason not to use XML-RPC for an object-broker server/client architecture? Maybe something like "no it's already outfashioned, there is X for that now".
To give you more details: I want to build a framework which allows for standardized interaction and the exchange of results between many little tools (e. g. command-line tools). In case someone wants to integrate another tool she writes a wrapper for that purpose. The wrapper could, e. g., convert the STDOUT of a tool into objects usable by the architecture.
Currently I'm thinking of writing the proof-of-concept server in Python. Later it could be rewritten in C/C++. Just to make sure clients can be written in as many languages as possible I thought of using XML-RPC. CORBA seems to be too bloated for that purpose, since the server shouldn't be too complex.
Thanks for your advice and opinions,
Rainer
| [
"XML-RPC has a lot going for it. It's simple to create and to consume, easy to understand and easy to code for.\nI'd say avoid SOAP and CORBA like the plague. They are way too complex, and with SOAP you have endless problems because only implementations from single vendors tend to interact nicely - probably becaus... | [
5
] | [] | [] | [
"corba",
"python",
"rpc",
"xml_rpc"
] | stackoverflow_0004022311_corba_python_rpc_xml_rpc.txt |