Q:
Matching 3 out of 5 fields - Django
I'm finding this a bit tricky! Maybe someone can help me on this one
I have the following model:
class Unicorn(models.Model):
    horn_length = models.IntegerField()
    skin_color = models.CharField()
    average_speed = models.IntegerField()
    magical = models.BooleanField()
    affinity = models.CharField()
I would like to search for all similar unicorns having at least 3 fields in common.
Is it too tricky? Or is it doable?
A:
It has to be done in the HAVING clause:
SELECT ... HAVING (IF(a.horn_length=b.horn_length, 1, 0) + ...) >= 3
There's no way to express HAVING in the Django ORM so you'll need to drop to raw SQL in order to perform it.
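To sanity-check the counting trick without a MySQL server, here is a sketch against an in-memory SQLite table (the table and values are invented for the demonstration; SQLite has no IF(), but comparisons already evaluate to 0/1, so the per-field matches can be summed directly):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE unicorn (
    id INTEGER PRIMARY KEY, horn_length INT, skin_color TEXT,
    average_speed INT, magical INT, affinity TEXT)""")
rows = [
    (1, 10, 'white', 40, 1, 'fire'),
    (2, 10, 'white', 40, 0, 'water'),  # 3 fields in common with unicorn 1
    (3,  9, 'black', 41, 0, 'water'),  # 0 fields in common with unicorn 1
]
con.executemany("INSERT INTO unicorn VALUES (?,?,?,?,?,?)", rows)

# Self-join unicorn 1 against every other unicorn and keep those
# sharing at least 3 of the 5 fields, filtered in the HAVING clause.
similar = con.execute("""
    SELECT b.id
    FROM unicorn a JOIN unicorn b ON a.id = 1 AND b.id != a.id
    GROUP BY b.id
    HAVING ( (a.horn_length   = b.horn_length)
           + (a.skin_color    = b.skin_color)
           + (a.average_speed = b.average_speed)
           + (a.magical       = b.magical)
           + (a.affinity      = b.affinity) ) >= 3
""").fetchall()
```

Only unicorn 2 survives the HAVING filter here.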
A:
You should use Q objects. The rough example is:
from django.db.models import Q
from itertools import combinations

# this -- the unicorn to be matched with
attrs = ['horn_length', 'skin_color', 'average_speed', 'magical', 'affinity']
q = None
for c in combinations(attrs, 3):
    q_ = Q(**{c[0]: getattr(this, c[0])}) & Q(**{c[1]: getattr(this, c[1])}) & Q(**{c[2]: getattr(this, c[2])})
    if q is None:
        q = q_
    else:
        q = q | q_
Unicorn.objects.filter(q)
not tested, though
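As a cross-check of the ≥3-of-5 logic outside the ORM: the OR-of-ANDs that the Q-object loop builds is equivalent to directly counting shared fields. A sketch with plain dicts standing in for model instances (attribute names are from the model; the values are made up):

```python
from itertools import combinations

ATTRS = ['horn_length', 'skin_color', 'average_speed', 'magical', 'affinity']

def shares_at_least(this, other, k=3):
    """Direct count: does `other` share at least k of the 5 fields with `this`?"""
    return sum(this[a] == other[a] for a in ATTRS) >= k

def matches_some_triple(this, other):
    """The OR-of-AND logic the Q-object loop builds: some 3-field combination matches."""
    return any(all(this[a] == other[a] for a in comb)
               for comb in combinations(ATTRS, 3))

a = dict(horn_length=10, skin_color='white', average_speed=40, magical=True,  affinity='fire')
b = dict(horn_length=10, skin_color='white', average_speed=40, magical=False, affinity='water')
c = dict(horn_length=9,  skin_color='black', average_speed=41, magical=False, affinity='water')
```

Both predicates agree on every pair, which is why the combinations trick captures "at least 3 in common" rather than "exactly 3".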
A:
This should cover your question, if I understood it right:
from django.db import models
Unicorn.objects.filter(models.Q(skin_color = 'white') | models.Q(magical = True))
This would filter all unicorns that have skin color white or have some magical stuff in common. More about the Q objects here http://docs.djangoproject.com/en/dev/topics/db/queries/#complex-lookups-with-q-objects
A:
I have never used Django and i'm rather novice in Python but perhaps you can do something like this:
make a method that compares two instances of the class Unicorn.
def similarity(self, another):
    sim = 0
    if self.horn_length == another.horn_length:
        sim += 1
    if self.skin_color == another.skin_color:
        sim += 1
    if self.average_speed == another.average_speed:
        sim += 1
    if self.magical == another.magical:
        sim += 1
    if self.affinity == another.affinity:
        sim += 1
    return sim
Then you can test with something like:
# myUnicorn is the Unicorn instance to compare against
for x in unicornsList:
    if myUnicorn.similarity(x) >= 3:
        ...
Tags: django, django_models, django_queryset, python, sql_server
Q:
Python - Converting CSV to Objects - Code Design
I have a small script we're using to read in a CSV file containing employees, and perform some basic manipulations on that data.
We read in the data (import_gd_dump), and create an Employees object, containing a list of Employee objects (maybe I should think of a better naming convention...lol). We then call clean_all_phone_numbers() on Employees, which calls clean_phone_number() on each Employee, as well as lookup_all_supervisors(), on Employees.
import csv
import re
import sys

#class CSVLoader:
#    """Virtual class to assist with loading in CSV files."""
#    def import_gd_dump(self, input_file='Gp Directory 20100331 original.csv'):
#        gd_extract = csv.DictReader(open(input_file), dialect='excel')
#        employees = []
#        for row in gd_extract:
#            curr_employee = Employee(row)
#            employees.append(curr_employee)
#        return employees
#        #self.employees = {row['dbdirid']:row for row in gd_extract}

# Previously, this was inside a (virtual) class called "CSVLoader".
# However, according to here (http://tomayko.com/writings/the-static-method-thing) - the idiomatic way of doing this in Python is not with a class-function but with a module-level function
def import_gd_dump(input_file='Gp Directory 20100331 original.csv'):
    """Return a list ('employees') of dict objects, taken from a Group Directory CSV file."""
    gd_extract = csv.DictReader(open(input_file), dialect='excel')
    employees = []
    for row in gd_extract:
        employees.append(row)
    return employees

def write_gd_formatted(employees_dict, output_file="gd_formatted.csv"):
    """Read in an Employees() object, and write out each Employee() inside it to a CSV file"""
    gd_output_fieldnames = ('hrid', 'mail', 'givenName', 'sn', 'dbcostcenter', 'dbdirid', 'hrreportsto', 'PHFull', 'PHFull_message', 'SupervisorEmail', 'SupervisorFirstName', 'SupervisorSurname')
    try:
        gd_formatted = csv.DictWriter(open(output_file, 'w', newline=''), fieldnames=gd_output_fieldnames, extrasaction='ignore', dialect='excel')
    except IOError:
        print('Unable to open file, IO error (Is it locked?)')
        sys.exit(1)
    headers = {n: n for n in gd_output_fieldnames}
    gd_formatted.writerow(headers)
    for employee in employees_dict.employee_list:
        # We're using the employee object's inbuilt __dict__ attribute - hmm, is this good practice?
        gd_formatted.writerow(employee.__dict__)

class Employee:
    """An Employee in the system, with employee attributes (name, email, cost-centre etc.)"""
    def __init__(self, employee_attributes):
        """We use the Employee constructor to convert a dictionary into instance attributes."""
        for k, v in employee_attributes.items():
            setattr(self, k, v)

    def clean_phone_number(self):
        """Perform some rudimentary checks and corrections, to make sure numbers are in the right format.
        Numbers should be in the form 0XYYYYYYYY, where X is the area code, and Y is the local number."""
        if self.telephoneNumber is None or self.telephoneNumber == '':
            return '', 'Missing phone number.'
        else:
            standard_format = re.compile(r'^\+(?P<intl_prefix>\d{2})\((?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')
            extra_zero = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')
            missing_hyphen = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})(?P<local_second_half>\d{4})')
            if standard_format.search(self.telephoneNumber):
                result = standard_format.search(self.telephoneNumber)
                return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), ''
            elif extra_zero.search(self.telephoneNumber):
                result = extra_zero.search(self.telephoneNumber)
                return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Extra zero in area code - ask user to remediate. '
            elif missing_hyphen.search(self.telephoneNumber):
                result = missing_hyphen.search(self.telephoneNumber)
                return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Missing hyphen in local component - ask user to remediate. '
            else:
                return '', "Number didn't match recognised format. Original text is: " + self.telephoneNumber

class Employees:
    def __init__(self, import_list):
        self.employee_list = []
        for employee in import_list:
            self.employee_list.append(Employee(employee))

    def clean_all_phone_numbers(self):
        for employee in self.employee_list:
            # Should we just set this directly in Employee.clean_phone_number() instead?
            employee.PHFull, employee.PHFull_message = employee.clean_phone_number()

    # Hmm, the search is O(n^2) - there's probably a better way of doing this search?
    def lookup_all_supervisors(self):
        for employee in self.employee_list:
            if employee.hrreportsto is not None and employee.hrreportsto != '':
                for supervisor in self.employee_list:
                    if supervisor.hrid == employee.hrreportsto:
                        (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = supervisor.mail, supervisor.givenName, supervisor.sn
                        break
                else:
                    (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = ('Supervisor not found.', 'Supervisor not found.', 'Supervisor not found.')
            else:
                (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = ('Supervisor not set.', 'Supervisor not set.', 'Supervisor not set.')

    # Is there a more pythonic way of doing this?
    def print_employees(self):
        for employee in self.employee_list:
            print(employee.__dict__)

if __name__ == '__main__':
    db_employees = Employees(import_gd_dump())
    db_employees.clean_all_phone_numbers()
    db_employees.lookup_all_supervisors()
    #db_employees.print_employees()
    write_gd_formatted(db_employees)
Firstly, my preamble question is, can you see anything inherently wrong with the above, from either a class design or Python point-of-view? Is the logic/design sound?
Anyhow, to the specifics:
The Employees object has a method, clean_all_phone_numbers(), which calls clean_phone_number() on each Employee object inside it. Is this bad design? If so, why? Also, is the way I'm calling lookup_all_supervisors() bad?
Originally, I wrapped the clean_phone_number() and lookup_supervisor() method in a single function, with a single for-loop inside it. clean_phone_number is O(n), I believe, lookup_supervisor is O(n^2) - is it ok splitting it into two loops like this?
In clean_all_phone_numbers(), I'm looping on the Employee objects, and settings their values using return/assignment - should I be setting this inside clean_phone_number() itself?
There are also a few things that I sort of hacked out, and I'm not sure if they're bad practice - e.g. print_employees() and write_gd_formatted() both use __dict__, and the constructor for Employee uses setattr() to convert a dictionary into instance attributes.
I'd value any thoughts at all. If you think the questions are too broad, let me know and I can repost as several split up (I just didn't want to pollute the boards with multiple similar questions, and the three questions are more or less fairly tightly related).
Cheers,
Victor
A:
Looks fine to me. Good job. How often are you going to run this script? Most of your questions are moot if this is a one-off thing.
I like the way Employees.clean_all_phone_numbers() delegates to Employee.clean_phone_number()
You really should be using an index (dictionary) here. You can index each employee by hrid when you create them in O(n) and then look them up in O(1).
But only do this if you ever have to run the script again...
Just get into the habit of using dictionaries. They are painless and make code easier to read. Whenever you write a method lookup_* you probably just want to index a dictionary.
Not sure. I like explicitly setting state, but as written it is questionable design - clean_phone_number() should set the attributes itself; each Employee should be responsible for its own state.
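The hrid index suggested above can be sketched with plain dicts standing in for the Employee objects (field names follow the question's script; the sample data is invented):

```python
# Build the index once in O(n), then each supervisor lookup is O(1),
# replacing the O(n^2) nested loop in lookup_all_supervisors().
employees = [
    {'hrid': '1', 'hrreportsto': '',  'mail': 'boss@example.com', 'givenName': 'Ann', 'sn': 'Lee'},
    {'hrid': '2', 'hrreportsto': '1', 'mail': 'dev@example.com',  'givenName': 'Bob', 'sn': 'Kim'},
]

by_hrid = {e['hrid']: e for e in employees}   # the index: hrid -> employee

for e in employees:
    sup = by_hrid.get(e['hrreportsto'])        # O(1) dictionary lookup
    if sup is None:
        e['SupervisorEmail'] = 'Supervisor not set.'
    else:
        e['SupervisorEmail'] = sup['mail']
```

The same pattern works with the Employee objects themselves: key the dict by employee.hrid when building the list.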
A:
you should close your files after reading them
I suggest moving all compiled re's to the top level (otherwise you recompile them on every call)
if self.telephoneNumber is None or self.telephoneNumber == '':
can easily be rewritten as if not self.telephoneNumber
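Both suggestions combined look like this - a trimmed sketch showing only the "standard format" branch, with the pattern copied from the question and compiled once at module level:

```python
import re

# Compiled once when the module is imported, not inside every call.
STANDARD = re.compile(
    r'^\+(?P<intl_prefix>\d{2})\((?P<area_code>\d)\)'
    r'(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')

def clean_phone_number(number):
    if not number:   # covers both None and '' in one test
        return '', 'Missing phone number.'
    m = STANDARD.search(number)
    if m:
        return ('0' + m.group('area_code') + m.group('local_first_half')
                + m.group('local_second_half')), ''
    return '', "Number didn't match recognised format. Original text is: " + number
```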
Tags: oop, python
Q:
CSRF error when trying to log onto Django admin page with w3m on Emacs23
I normally use Firefox and have had no problems with the admin page on my Django website. But I use Emacs23 for writing my posts, and wanted to be able to use w3m in Emacs to copy the stuff across. When I try to log into my admin pages, it gives the CSRF error:
CSRF verification failed. Request aborted.
Help
Reason given for failure:
No CSRF or session cookie.
...
Is there a way that I could get w3m to work with my admin page? I am not sure if the problem lies with the way the admin is set up on Django or with the Emacs or w3m settings.
A:
Django 1.2 requires a CSRF token by default for all form POSTs. I don't think there's a way to get the token via an API call in order to be able to post from Emacs.
You could remove the effects of the @csrf_protect decorator on the Django-bundled view by copying and tweaking that view's code to make a bespoke version of the view that doesn't have the decorator (or by wrapping your copy in @csrf_exempt).
I'm guessing from your limited info above that it's a non-protected version of contrib.auth's login() view that you're going to need to replicate here, and I'd recommend you put access to this view under a rather non-obvious URL route to maintain some semblance of CSRF protection for the outside world. (i.e. don't override the /login/ path - wire up access to this view somewhere else)
Tags: django, emacs, python
Q:
Upon USB insert, record unique identifier string, format drive to FAT32 and copy a file. Bash or Python
This is what I want to do,
insert USB flash drive.
mount it.
record unique identifier string to a file.
format the drive to FAT32.
copy a text file to the drive.
unmount it.
remove the drive.
30 times
The situation is this, I have bought 30 usb drives. I need to format each one to ensure they are clean, I need the unique string from each device. I need to put the same txt file on each one.
I am not great at writing scripts but can read and follow bash and python.
Any pointers would be appreciated.
edit
Thank you for your responses.
Here is what I have got so far, in windows.
I used USBDeview from nirsoft.net
options > advanced options > "execute the following command when you insert a USB device" and used the following command "python getserial.py %serial_number%"
the getserial.py script puts the %serial_number% passed from USBDeview into a text file, then copies a file to the USB device.
import sys
import shutil
from time import sleep

sourceFile = "C:\\^READ ME.txt"
destinationFile = "E:\\^READ ME.txt"

with open('serials.txt', 'a') as f:
    f.write(sys.argv[1] + '\n')

sleep(3)  # give the drive a moment to settle before copying
shutil.copyfile(sourceFile, destinationFile)
Would still be interested in a full script that could do this but I think it is beyond my capabilities at the moment.
A:
In order to automatically detect an inserted USB flash drive, you could use autofs. Unfortunately it is not able to run a script when a device is inserted, otherwise the other steps could be performed quite easily.
So, you need to detect that autofs mounted a new flash drive. A crontab job might be a solution: periodically check whether a disk is mounted and, if so, perform your steps. The only remaining problem is detecting whether you have already processed the mounted disk (i.e. whether the disk is new or not).
In order to find the UUID you could take a look at ls /dev/disk/by-uuid or blkid and using their output to actually grab the UUID. Formatting your drive could be done using something like mkfs -t vfat /dev/<your usb drive here>.
Hopefully these pointers help you solving your problem.
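The per-drive steps above can be sketched as a small shell function. Everything here is hypothetical - the device path, mount point and file name are placeholders, and DRY_RUN defaults to 1 so the commands are printed rather than executed (set DRY_RUN=0 only once you've checked the device path is really the USB stick):

```shell
#!/bin/sh
# Process one inserted USB drive: record its UUID, format it, copy a file.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

process_drive() {
    dev="$1"
    mnt="${2:-/mnt/usb}"
    run blkid -s UUID -o value "$dev"   # record the unique identifier string
    run mkfs -t vfat "$dev"             # format the drive to FAT32
    run mount "$dev" "$mnt"             # mount it
    run cp "README.txt" "$mnt/"         # copy the text file onto it
    run umount "$mnt"                   # unmount it
}

process_drive /dev/sdz1
```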
Tags: bash, detect, python, usb_drive
Q:
How to see if there is one microphone active using python?
I want to see if there is a microphone active using Python.
How can I do it?
Thanks in advance!
A:
Microphones are analog devices; most APIs probably couldn't even tell you whether a microphone is plugged in - your computer just reads data from one of your sound card's input channels.
What you probably want to know is if the input channels are turned on or off. Determining that is highly platform specific.
A:
This is what I wanted:
from ctypes import windll

winmm = windll.winmm
print 'waveInGetNumDevs=', winmm.waveInGetNumDevs()
Tags: microphone, python, python_2.7
Q:
Add Header section to SOAP request using SOAPpy
I need to construct this SOAP query using python SOAPpy module:
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xsd="http://www.w3.org/2001/XMLSchema"
               xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <LicenseHeader xmlns="http://schemas.acme.eu/">
      <LicenseKey>88888-88888-8888-8888-888888888888</LicenseKey>
    </LicenseHeader>
  </soap:Header>
  <soap:Body>
    <GetProductClassification xmlns="http://schemas.acme.eu/">
      <GetProductClassificationRequest />
    </GetProductClassification>
  </soap:Body>
</soap:Envelope>
So I use this code:
from SOAPpy import WSDL

wsdlFile = 'https://example.com/1.0/service.asmx?wsdl'
server = WSDL.Proxy(wsdlFile)
result = server.GetProductClassification()
The request generated is:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
    SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
    xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <GetProductClassification SOAP-ENC:root="1">
    </GetProductClassification>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
When I send the request I get "Object reference not set to an instance of an object". I think this might be because I don't have a header section with the license key in my request.
How do I modify my code to add header section with LicenseHeader parameter?
A:
I am not sure how to do this in SOAPpy, but I do know how to do it in suds. Suds does the same thing as SOAPpy but is newer and still supported; I don't think SOAPpy is maintained anymore. Below is the code to connect to a WSDL and send a SOAP request:
from suds.client import Client

class MySudsClass():
    def sudsFunction(self):
        url = "http://10.10.10.10/mywsdl.wsdl"
        # connect to the WSDL file and store the service proxy in 'client'
        client = Client(url)
        # No address is set in the WSDL for the camera I connect to, so I set its location here
        client.options.location = 'http://10.10.10.11'
        # Create 'xml_value' object to pass as an argument, using the 'factory' namespace
        xml_value = client.factory.create('some_value_in_your_xml_body')
        # This sends the SOAP request.
        client.service.WSDLFunction(xml_value)
Put this in the script before you send the soap request and it will add any headers you want.
# Namespaces to be added to XML sent
wsa_ns = ('wsa', 'http://schemas.xmlsoap.org/ws/2004/08/addressing')
wsdp_ns = ('http://schemas.xmlsoap.org/ws/2006/02/devprof')
# Field information for extra XML headers
message = 'mymessage'
address_txt = 'myheader_information'
# Soapheaders to be added to the XML code sent
# addPrefix allows you to add an extra namespace. If not needed, remove it.
message_header = Element('MessageID', ns=wsa_ns).setText(message)
address_header = Element('Address', ns=wsa_ns).setText(address_txt).addPrefix(p='wsdp', u=wsdp_ns)
header_list = [message_header, address_header]
# Soapheaders being added to suds command
client.set_options(soapheaders=header_list)
You can also add a mustUnderstand attribute to the headers so the receiving end knows it must process them:
# Attribute to be added to the headers to make sure camera verifies information as correct
mustAttribute = Attribute('SOAP-ENV:mustUnderstand', 'true')
for x in header_list:
x.append(mustAttribute)
If you use something like this you will be able to add any headers, namespaces etc. I have used this and it worked perfectly.
To add the license header in SUDS add:
license_key = Element('LicenseKey', ns=some_namespace).setText('88888-88888-8888-8888-888888888888')
license_header = Element('LicenseHeader', ns=some_namespace).insert(license_key)
license_attribute = Attribute('xmlns', "http://schemas.acme.eu/")
license_header.append(license_attribute)
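If neither SOAPpy nor suds works out, the envelope shown in the question (LicenseHeader included) can also be built by hand with the standard library and POSTed over plain HTTP. A minimal sketch; only the namespaces come from the question, everything else (function name, key value) is illustrative:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ACME_NS = "http://schemas.acme.eu/"

def build_envelope(license_key):
    """Build the soap:Envelope from the question, including the LicenseHeader."""
    env = ET.Element("{%s}Envelope" % SOAP_NS)
    header = ET.SubElement(env, "{%s}Header" % SOAP_NS)
    lic = ET.SubElement(header, "{%s}LicenseHeader" % ACME_NS)
    ET.SubElement(lic, "{%s}LicenseKey" % ACME_NS).text = license_key
    body = ET.SubElement(env, "{%s}Body" % SOAP_NS)
    call = ET.SubElement(body, "{%s}GetProductClassification" % ACME_NS)
    ET.SubElement(call, "{%s}GetProductClassificationRequest" % ACME_NS)
    return ET.tostring(env, encoding="unicode")

envelope = build_envelope("88888-88888-8888-8888-888888888888")
```

The resulting string can then be POSTed with urllib (Content-Type: text/xml; charset=utf-8, plus a SOAPAction header whose exact value has to come from the WSDL).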
| Add Header section to SOAP request using SOAPpy | I need to construct this SOAP query using python SOAPpy module:
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<LicenseHeader xmlns="http://schemas.acme.eu/">
<LicenseKey>88888-88888-8888-8888-888888888888</LicenseKey>
</LicenseHeader>
</soap:Header>
<soap:Body>
<GetProductClassification xmlns="http://schemas.acme.eu/">
<GetProductClassificationRequest />
</GetProductClassification>
</soap:Body>
</soap:Envelope>
So I use this code:
from SOAPpy import WSDL
wsdlFile = 'https://example.com/1.0/service.asmx?wsdl'
server = WSDL.Proxy(wsdlFile)
result = server.GetProductClassification();
The request generated is:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
>
<SOAP-ENV:Body>
<GetProductClassification SOAP-ENC:root="1">
</GetProductClassification>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
When I send the request I get "Object reference not set to an instance of an object". I think this might be because I don't have a header section with a license key in my request.
How do I modify my code to add a header section with the LicenseHeader parameter?
| [
"I am not sure how to do this in SOAPpy but I do know how to do it in suds. SUDS does the same thing as SOAPpy but it is newer and is still supported. I don't think SOAPpy is supported anymore. Below show's the code to connect to a WSDL and send a soap request:\nclass MySudsClass():\n\ndef sudsFunction(self):\n\n url = \"http://10.10.10.10/mywsdl.wsdl\"\n\n # connects to WSDL file and stores location in variable 'client'\n client = Client(url)\n\n #I have no address set in the wsdl to the camera I connect to so I set it's location here\n client.options.location = 'http:/10.10.10.11'\n\n # Create 'xml_value' object to pass as an argument using the 'factory' namespace\n xml_value = client.factory.create('some_value_in_your_xml_body')\n\n #This send the SOAP request.\n client.service.WSDLFunction(xml_value)\n\nPut this in the script before you send the soap request and it will add any headers you want.\n # Namespaces to be added to XML sent \n wsa_ns = ('wsa', 'http://schemas.xmlsoap.org/ws/2004/08/addressing')\n wsdp_ns = ('http://schemas.xmlsoap.orf/ws/2006/02/devprof')\n\n # Field information for extra XML headers\n message = 'mymessage'\n address_txt = 'myheader_information'\n\n # Soapheaders to be added to the XML code sent\n # addPrefix allow's you to addc a extra namespace. 
If not needed remove it.\n message_header = Element('MessageID', ns=wsa_ns).setText(message)\n address_header = Element('Address', ns=wsa_ns).setText(address_txt).addPrefix(p='wsdp', u=wsdp_ns) \n\n header_list = [message_header, address_header]\n\n # Soapheaders being added to suds command\n client.set_options(soapheaders=header_list)\n\nThis will allow you to add in wsa encding that makes the XML understand:\n # Attribute to be added to the headers to make sure camera verifies information as correct\n mustAttribute = Attribute('SOAP-ENV:mustUnderstand', 'true')\n for x in header_list:\n x.append(mustAttribute)\n\nIf you use something like this you will be able to add any headers, namespaces etc. I have used this and it worked perfectly.\nTo add the license header in SUDS add:\n license_key = Element('LicenseKey', ns=some_namespace).setText('88888-88888-8888-8888-888888888888')\n license_header = Element('LicenseHeader', ns=some_namespace).insert(license_key)\n\n license_attribute = Attribute(xmlns, \"http://schemas.acme.eu/\")\n license_header.append(license_attribute)\n\n"
] | [
3
] | [] | [] | [
"python",
"soap",
"soappy",
"xml"
] | stackoverflow_0002964867_python_soap_soappy_xml.txt |
Q:
Is safe ( documented behaviour? ) to delete the domain of an iterator in execution
I wanted to know whether it is safe ( documented behaviour? ) to delete the domain space of an iterator in execution in Python.
Consider the code:
import os
import sys
sampleSpace = [ x*x for x in range( 7 ) ]
print sampleSpace
for dx in sampleSpace:
print str( dx )
if dx == 1:
del sampleSpace[ 1 ]
del sampleSpace[ 3 ]
elif dx == 25:
del sampleSpace[ -1 ]
print sampleSpace
'sampleSpace' is what I call 'the domain space of an iterator' ( if there is a more appropriate word/phrase, lemme know ).
What I am doing is deleting values from it while the iterator 'dx' is running through it.
Here is what I expect from the code :
Iteration versus element being pointed to (*):
0: [*0, 1, 4, 9, 16, 25, 36]
1: [0, *1, 4, 9, 16, 25, 36] ( delete 2nd and 5th element after this iteration )
2: [0, 4, *9, 25, 36]
3: [0, 4, 9, *25, 36] ( delete -1th element after this iteration )
4: [0, 4, 9, 25*] ( as the iterator points to nothing/end of list, the loop terminates )
.. and here is what I get:
[0, 1, 4, 9, 16, 25, 36]
0
1
9
25
[0, 4, 9, 25]
As you can see - what I expect is what I get - which is contrary to the behaviour I have had from other languages in such a scenario.
Hence - I wanted to ask you if there is some rule like "the iterator becomes invalid if you mutate its space during iteration" in Python?
Is it safe ( documented behaviour? ) in Python to do stuff like this?
A:
From the Python tutorial:
It is not safe to modify the sequence
being iterated over in the loop (this
can only happen for mutable sequence
types, such as lists). If you need to
modify the list you are iterating over
(for example, to duplicate selected
items) you must iterate over a copy.
The slice notation makes this
particularly convenient:
>>> for x in a[:]: # make a slice copy of the entire list
... if len(x) > 6: a.insert(0, x)
...
>>> a
['defenestrate', 'cat', 'window', 'defenestrate']
A:
Generally speaking no, it's not safe and you may get unpredictable behaviour. Iterators aren't required to behave in a specific way under these circumstances.
What's happening in your example is
# list is [0, 1, 4, 9, 16, 25, 36]
if dx == 1:
# we're at index 1 when this is true
del sampleSpace[ 1 ]
# we've removed the item at index 1, and the iterator will move to the next valid position - still index 1, but in a mutated list. We got lucky in this case
# the list now contains [0, 4, 9, 16, 25, 36]
del sampleSpace[ 3 ]
# we remove the item at index 3 which is (now) value 16
# the list now contains [0, 4, 9, 25, 36]
elif dx == 25:
del sampleSpace[ -1 ]
# we remove the final item, list now looks like
# the list now contains [0, 4, 9, 25]
A:
What do you mean by safe? Your code happens not to raise any errors, but it is a distinct possibility of course, consider this:
>>> a = range(3)
>>> for i in a:
del a
Traceback (most recent call last):
File "<pyshell#13>", line 2, in <module>
del a
NameError: name 'a' is not defined
>>> a
[0, 1, 2]
>>> for i in a:
del a[i+1]
Traceback (most recent call last):
File "<pyshell#27>", line 2, in <module>
del a[i+1]
IndexError: list assignment index out of range
It is not clear why you would want to do this, but there are no additional rules applicable to iterators. They act exactly as any other type would.
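To make the tutorial's slice-copy advice concrete for the example in the question: iterate over a copy and mutate the original. A small sketch that removes by value rather than by index (since the indices shift as items are deleted):

```python
sample_space = [x * x for x in range(7)]   # [0, 1, 4, 9, 16, 25, 36]

# Iterating over a slice copy means deletions in sample_space
# cannot disturb the iteration order or skip elements.
for dx in sample_space[:]:
    if dx == 1:
        sample_space.remove(1)     # the question's: del sampleSpace[1]
        sample_space.remove(16)    # the question's: del sampleSpace[3]
    elif dx == 25:
        sample_space.remove(36)    # the question's: del sampleSpace[-1]

print(sample_space)  # [0, 4, 9, 25]
```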
| Is safe ( documented behaviour? ) to delete the domain of an iterator in execution | I wanted to know whether it is safe ( documented behaviour? ) to delete the domain space of an iterator in execution in Python.
Consider the code:
import os
import sys
sampleSpace = [ x*x for x in range( 7 ) ]
print sampleSpace
for dx in sampleSpace:
print str( dx )
if dx == 1:
del sampleSpace[ 1 ]
del sampleSpace[ 3 ]
elif dx == 25:
del sampleSpace[ -1 ]
print sampleSpace
'sampleSpace' is what I call 'the domain space of an iterator' ( if there is a more appropriate word/phrase, lemme know ).
What I am doing is deleting values from it while the iterator 'dx' is running through it.
Here is what I expect from the code :
Iteration versus element being pointed to (*):
0: [*0, 1, 4, 9, 16, 25, 36]
1: [0, *1, 4, 9, 16, 25, 36] ( delete 2nd and 5th element after this iteration )
2: [0, 4, *9, 25, 36]
3: [0, 4, 9, *25, 36] ( delete -1th element after this iteration )
4: [0, 4, 9, 25*] ( as the iterator points to nothing/end of list, the loop terminates )
.. and here is what I get:
[0, 1, 4, 9, 16, 25, 36]
0
1
9
25
[0, 4, 9, 25]
As you can see - what I expect is what I get - which is contrary to the behaviour I have had from other languages in such a scenario.
Hence - I wanted to ask you if there is some rule like "the iterator becomes invalid if you mutate its space during iteration" in Python?
Is it safe ( documented behaviour? ) in Python to do stuff like this?
| [
"From the Python tutorial:\n\nIt is not safe to modify the sequence\n being iterated over in the loop (this\n can only happen for mutable sequence\n types, such as lists). If you need to\n modify the list you are iterating over\n (for example, to duplicate selected\n items) you must iterate over a copy.\n The slice notation makes this\n particularly convenient:\n>>> for x in a[:]: # make a slice copy of the entire list\n... if len(x) > 6: a.insert(0, x)\n...\n>>> a\n['defenestrate', 'cat', 'window', 'defenestrate']\n\n\n",
"Generally speaking no, it's not safe and you may get unpredictable behaviour. Iterators aren't required to behave in an specific way under these circumstances.\nWhat's happening in your example is\n# list is [0, 1, 4, 9, 16, 25, 36]\n\nif dx == 1:\n # we're at index 1 when this is true\n del sampleSpace[ 1 ]\n # we've removed the item at index 1, and the iterator will move to the next valid position - still index 1, but in a mutated list. We got lucky in this case\n # the list now contains [0, 4, 9, 16, 25, 36]\n del sampleSpace[ 3 ] \n # we remove the item at index 3 which is (now) value 16\n # the list now contains [0, 4, 9, 25, 36]\nelif dx == 25:\n\n del sampleSpace[ -1 ]\n # we remove the final item, list now looks like\n # the list now contains [0, 4, 9, 25]\n\n",
"What do you mean by safe? Your code happens not to raise any errors, but it is a distinct possibility of course, consider this:\n>>> a = range(3)\n>>> for i in a:\n del a\n\n\nTraceback (most recent call last):\n File \"<pyshell#13>\", line 2, in <module>\n del a\nNameError: name 'a' is not defined\n>>> a\n[0, 1, 2]\n>>> for i in a:\n del a[i+1]\n\n\nTraceback (most recent call last):\n File \"<pyshell#27>\", line 2, in <module>\n del a[i+1]\nIndexError: list assignment index out of range\n\nIt is not clear why would you want to do this, but there is no additional rules applicable to iterators. They're acting exactly as any other type would.\n"
] | [
6,
1,
0
] | [] | [] | [
"iteration",
"python",
"undocumented_behavior"
] | stackoverflow_0002965351_iteration_python_undocumented_behavior.txt |
Q:
Is there a 'hello world' website for django? OR (I've installed django, now what)?
I'm learning Python and decided to start familiarizing myself with the (de facto?) Python web framework - django.
I have successfully installed the latest release of django. I want a simple 'hello world' website that will get me up and running quickly. I am already familiar with web frameworks (albeit for different languages) - so I just need a simple 'hello world' example website to help me get going.
Ideally, I don't want to mess with my Apache server settings (as I am still experimenting), so I want to use the lightweight web server that django bundles. I can't seem to find how to do this on the django website though - no doubt someone will post a link and shame me ...
So, does anyone know of a link that shows how to get a simple 'hello world' django website up and running with minimal fuss?
A:
Next step? The (free, online and excellent) Django book.
A:
Writing your first Django app, part 1 was a lot of help.
A:
"Hello World" of django is the "Polls and Votes"
A:
I think the official tutorial says it all...
cd /path/to/your/code
django-admin.py startproject mysite  # creates dir 'mysite'
cd mysite
python manage.py runserver
| Is there a 'hello world' website for django? OR (I've installed django, now what)? | I'm learning Python and decided to start familiarizing myself with the (de facto?) Python web framework - django.
I have successfully installed the latest release of django. I want a simple 'hello world' website that will get me up and running quickly. I am already familiar with web frameworks (albeit for different languages) - so I just need a simple 'hello world' example website to help me get going.
Ideally, I don't want to mess with my Apache server settings (as I am still experimenting), so I want to use the lightweight web server that django bundles. I can't seem to find how to do this on the django website though - no doubt someone will post a link and shame me ...
So, does anyone know of a link that shows how to get a simple 'hello world' django website up and running with minimal fuss?
| [
"Next step? The (free, online and excellent) Django book.\n",
"Writing your first Django app, part 1 was a lot of help.\n",
"\"Hello World\" of django is the \"Polls and Votes\"\n",
"I think the official tutorial says it all...\ncd /path/to/your/code\npython django-admin.py startproject mysite #creates dir 'mysite'\npython manage.py runserver\n\n"
] | [
10,
3,
2,
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0002964452_django_python.txt |
Q:
feedparser - various errors
I need feedparser (see http://www.feedparser.org) for a project, and want to keep third-party modules in a separate folder. I did this by adding a folder to my Python path, and putting relevant modules there, among them feedparser.
This first attempt to import feedparser resulted in
>>> import feedparser
Traceback (most recent call last):
File "", line 1, in
File "/home/users/me/modules/feedparser.py", line 1
ed socket timeout; added support for chardet library
^
SyntaxError: invalid syntax
I found the text "socket timeout; added..." in the comments at the bottom of the file, removed these comments, and tried again:
>>> import feedparser
Traceback (most recent call last):
File "", line 1, in
File "/home/users/me/modules/feedparser.py", line 1
= [(key, value) for key, value in attrs if key in self.acceptable_attributes]
^
IndentationError: unexpected indent
Ok, so some indent error. I made sure the indents in the function in question were ok (moved some line breaks down to no-indent). And tried again:
>>> import feedparser
Traceback (most recent call last):
File "", line 1, in
File "/home/users/me/modules/feedparser.py", line 1
, value) for key, value in attrs if key in self.acceptable_attributes]
^
SyntaxError: invalid syntax
As much as I google, I cannot find anything wrong with the syntax:
def unknown_starttag(self, tag, attrs):
if not tag in self.acceptable_elements:
if tag in self.unacceptable_elements_with_end_tag:
self.unacceptablestack += 1
return
attrs = self.normalize_attrs(attrs)
attrs = [(key, value) for key, value in attrs if key in self.acceptable_attributes]
_BaseHTMLProcessor.unknown_starttag(self, tag, attrs)
Now what?
Is my approach all wrong? Why do I keep producing these errors in a module that seems so well tested and trusted?
A:
The first error sounds like your copy of feedparser.py is corrupt. The last line of the file should be entirely a comment:
#4.1 - MAP - removed socket timeout; added support for chardet library
It sounds like a line break has been introduced resulting in an invalid statement at the end of the file:
#4.1 - MAP - remov
ed socket timeout; added support for chardet library
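Rather than patching the symptoms one syntax error at a time, it can be quicker to confirm the whole file is corrupt by compiling the source and reporting the first offending line, then re-downloading a clean copy. A small sketch:

```python
def first_syntax_error(source, filename="<module>"):
    """Return (lineno, text) of the first syntax error in `source`,
    or None if the source compiles cleanly."""
    try:
        compile(source, filename, "exec")
        return None
    except SyntaxError as err:
        return (err.lineno, (err.text or "").strip())

# A healthy trailing comment compiles; a comment whose leading '#' was
# lost to a stray line break (as in the corrupted feedparser.py) does not.
good = "#4.1 - MAP - removed socket timeout; added support for chardet library\n"
bad = "#4.1 - MAP - remov\ned socket timeout; added support for chardet library\n"
print(first_syntax_error(good))  # None
print(first_syntax_error(bad))   # a (lineno, text) tuple; lineno is 2 here
```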
| feedparser - various errors | I need feedparser (see http://www.feedparser.org) for a project, and want to keep third-party modules in a separate folder. I did this by adding a folder to my Python path, and putting relevant modules there, among them feedparser.
This first attempt to import feedparser resulted in
>>> import feedparser
Traceback (most recent call last):
File "", line 1, in
File "/home/users/me/modules/feedparser.py", line 1
ed socket timeout; added support for chardet library
^
SyntaxError: invalid syntax
I found the text "socket timeout; added..." in the comments at the bottom of the file, removed these comments, and tried again:
>>> import feedparser
Traceback (most recent call last):
File "", line 1, in
File "/home/users/me/modules/feedparser.py", line 1
= [(key, value) for key, value in attrs if key in self.acceptable_attributes]
^
IndentationError: unexpected indent
Ok, so some indent error. I made sure the indents in the function in question were ok (moved some line breaks down to no-indent). And tried again:
>>> import feedparser
Traceback (most recent call last):
File "", line 1, in
File "/home/users/me/modules/feedparser.py", line 1
, value) for key, value in attrs if key in self.acceptable_attributes]
^
SyntaxError: invalid syntax
As much as I google, I cannot find anything wrong with the syntax:
def unknown_starttag(self, tag, attrs):
if not tag in self.acceptable_elements:
if tag in self.unacceptable_elements_with_end_tag:
self.unacceptablestack += 1
return
attrs = self.normalize_attrs(attrs)
attrs = [(key, value) for key, value in attrs if key in self.acceptable_attributes]
_BaseHTMLProcessor.unknown_starttag(self, tag, attrs)
Now what?
Is my approach all wrong? Why do I keep producing these errors in a module that seems so well tested and trusted?
| [
"The first error sounds like your copy of feedparser.py is corrupt. The last line of the file should be entirely a comment:\n#4.1 - MAP - removed socket timeout; added support for chardet library\n\nIt sounds like a line break has been introduced resulting in an invalid statement at the end of the file:\n#4.1 - MAP - remov\ned socket timeout; added support for chardet library\n\n"
] | [
1
] | [] | [] | [
"feedparser",
"python"
] | stackoverflow_0002965324_feedparser_python.txt |
Q:
How can I include static text in a StringVar() and still have it update to variable changes?
I would like to create a StringVar() that looks something like this:
someText = "The Spanish Inquisition" # Here's a normal variable whose value I will change
eventually
TkEquivalent = StringVar() # and here's the StringVar()
TkEquivalent.set(string(someText)) #and here I set it equal to the normal variable. When someText changes, this variable will too...
HOWEVER:
TkEquivalent.set("Nobody Expects " + string(someText))
If I do this, the StringVar() will no longer automatically update! How can I include that static text and still have the StringVar() update to reflect changes made to someText?
Thanks for your help.
A:
A StringVar does not bind with a Python name (what you'd call a variable), but with a Tkinter widget, like this:
a_variable= Tkinter.StringVar()
an_entry= Tkinter.Entry(textvariable=a_variable)
From then on, any change of a_variable through its .set method will reflect in the an_entry contents, and any modification of the an_entry contents (e.g. by the user interface) will also update the a_variable contents.
However, if that is not what you want, you can have two (or more) references to the same StringVar in your code:
var1= var2= Tkinter.StringVar()
var1.set("some text")
assert var1.get() == var2.get() # they'll always be equal
| How can I include static text in a StringVar() and still have it update to variable changes? | I would like to create a StringVar() that looks something like this:
someText = "The Spanish Inquisition" # Here's a normal variable whose value I will change
eventually
TkEquivalent = StringVar() # and here's the StringVar()
TkEquivalent.set(string(someText)) #and here I set it equal to the normal variable. When someText changes, this variable will too...
HOWEVER:
TkEquivalent.set("Nobody Expects " + string(someText))
If I do this, the StringVar() will no longer automatically update! How can I include that static text and still have the StringVar() update to reflect changes made to someText?
Thanks for your help.
| [
"A StringVar does not bind with a Python name (what you'd call a variable), but with a Tkinter widget, like this:\na_variable= Tkinter.StringVar()\nan_entry= Tkinter.Entry(textvariable=a_variable)\n\nFrom then on, any change of a_variable through its .set method will reflect in the an_entry contents, and any modification of the an_entry contents (e.g. by the user interface) will also update the a_variable contents.\nHowever, if that is not what you want, you can have two (or more) references to the same StringVar in your code:\nvar1= var2= Tkinter.StringVar()\nvar1.set(\"some text\")\nassert var1.get() == var2.get() # they'll always be equal\n\n"
] | [
4
] | [] | [] | [
"concatenation",
"python",
"string",
"text",
"tkinter"
] | stackoverflow_0002770409_concatenation_python_string_text_tkinter.txt |
Q:
Send files between python+django and C#
I would like to know what is the best way to send files between Python and C#, and vice versa.
I have my own protocol which works at socket level, and I can send strings and numbers both ways. Loops work too. With this I can send pretty much anything, like a package of user IDs, if it is simple data. But soon I will start sending whole files, maybe XML or executables.
A simple file server is not an option because I want to send files from the client too.
I was thinking about serialization, but I don't know if it is the best solution; if it is, I would love some tips from the Stack Overflow community.
EDIT:
I added django to question and chose http.
A:
RPC may be a good idea for you, because it's relatively high level. Instead of defining your own server and protocols, you can simply execute routines remotely over the network, pass in arguments and get back results.
For example, both languages have libraries for XML-RPC.
A:
The easier way in my use case was to send files using HTTP, because on the Python side I additionally have Django running.
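To make the XML-RPC suggestion concrete: both sides can pass raw file bytes as an XML-RPC Binary value (base64 on the wire), which survives XML files and executables unchanged. A self-contained Python sketch; the server runs in-process only for demonstration, the method name store_file is an assumption, and a C# client would call the same endpoint with any XML-RPC library (e.g. XML-RPC.NET):

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

received = {}

def store_file(name, blob):
    """Server side: `blob` arrives as xmlrpc.client.Binary."""
    received[name] = blob.data
    return len(blob.data)

# Demo server on an ephemeral localhost port; a real deployment would
# run this standalone, with the C# side as the remote caller.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(store_file)
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: Binary wraps arbitrary bytes so nothing is mangled in transit.
proxy = xmlrpc.client.ServerProxy("http://%s:%d" % (host, port))
payload = b"\x00\x01 arbitrary file bytes \xff"
size = proxy.store_file("demo.bin", xmlrpc.client.Binary(payload))
server.shutdown()
```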
| Send files between python+django and C# | I would like to know what is the best way to send files between Python and C#, and vice versa.
I have my own protocol which works at socket level, and I can send strings and numbers both ways. Loops work too. With this I can send pretty much anything, like a package of user IDs, if it is simple data. But soon I will start sending whole files, maybe XML or executables.
A simple file server is not an option because I want to send files from the client too.
I was thinking about serialization, but I don't know if it is the best solution; if it is, I would love some tips from the Stack Overflow community.
EDIT:
I added django to question and chose http.
| [
"RPC may be a good idea for you, because it's relatively very high level. Instead of defining your own server and protocols, you can simply remotely execute routines over the network, pass in arguments and get back results. \nFor example, both languages have libraries for XML-RPC.\n",
"The easier way on my use case was send files using HTTP because with python i have additionaly running django.\n"
] | [
0,
0
] | [] | [] | [
"c#",
"python",
"sockets"
] | stackoverflow_0002930211_c#_python_sockets.txt |
Q:
whats the best way to parse and replace the string with its values?
I may have a string like,
"""Hello, %(name)s,
how are you today,
here is amount needed: %(partner_id.account_id.debit_amount)d
"""
What would be the best solution for such a template? Maybe I need to combine regular expressions and eval. The input string may also differ, e.g. $partner_id.account_id.debit_amount$; for the moment I've kept it as Python string format, just for testing.
A:
Python implemented a new .format() method on strings in Python 2.6 and 3.0. Check out this PEP: http://www.python.org/dev/peps/pep-3101/
It is more powerful and flexible than the % operator and built into python:
Here are some examples from the PEP:
"My name is {0}".format('Fred')
"My name is {0.name}".format(open('out.txt', 'w'))
"My name is {0[name]}".format({'name':'Fred'})
It may be enough for your needs, if not, look at a templating engine like Jinja as others mentioned.
A:
If you're looking for something simple, try looking at Python's builtin Template. I use it quite a bit for quick and easy templating without the overhead of installing additional packages.
There's also the new format() method.
A:
If you are going to do sophisticated templating, you may consider using an advanced template engine like Jinja. There are plenty others as well.
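Neither the % operator nor string.Template understands dotted names like partner_id.account_id.debit_amount, so the regex-plus-lookup combination the question hints at is a reasonable fit. A hedged sketch that walks each dotted path through dict keys or object attributes (the placeholder syntax here follows the question's %(...)s / %(...)d form):

```python
import re

_PLACEHOLDER = re.compile(r"%\(([\w.]+)\)[sd]")

def render(template, context):
    """Replace %(dotted.path)s / %(dotted.path)d placeholders by resolving
    each path segment as a dict key or an attribute of `context`."""
    def resolve(match):
        obj = context
        for part in match.group(1).split("."):
            obj = obj[part] if isinstance(obj, dict) else getattr(obj, part)
        return str(obj)
    return _PLACEHOLDER.sub(resolve, template)

text = render(
    "Hello, %(name)s, here is amount needed: %(partner_id.account_id.debit_amount)d",
    {"name": "Fred",
     "partner_id": {"account_id": {"debit_amount": 42}}},
)
print(text)  # Hello, Fred, here is amount needed: 42
```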
| whats the best way to parse and replace the string with its values? | I may have string like,
"""Hello, %(name)s,
how are you today,
here is amount needed: %(partner_id.account_id.debit_amount)d
"""
What would be the best solution for such a template? Maybe I need to combine regular expressions and eval. The input string may also differ, e.g. $partner_id.account_id.debit_amount$; for the moment I've kept it as Python string format, just for testing.
| [
"Python implemented a new .format() method on strings in Python 2.6 and 3.0. Check out this PEP: http://www.python.org/dev/peps/pep-3101/\nIt is more powerful and flexible than the % operator and built into python:\nHere are some examples from the PEP:\n\"My name is {0}\".format('Fred')\n\"My name is {0.name}\".format(open('out.txt', 'w'))\n\"My name is {0[name]}\".format({'name':'Fred'})\n\nIt may be enough for your needs, if not, look at a templating engine like Jinja as others mentioned.\n",
"If you're looking for something simple, try looking at Python's builtin Template. I use it quite a bit for quick and easy templating without the overhead of installing additional packages.\nThere's also the new format() method.\n",
"If you are going to do sophisticated templating, you may consider using an advanced template engine like Jinja. There are plenty others as well.\n"
] | [
2,
1,
0
] | [] | [] | [
"python",
"replace",
"string_formatting",
"templates"
] | stackoverflow_0002965694_python_replace_string_formatting_templates.txt |
Q:
Reading and writing pickles to an encoded stream
A file format commonly used in our system is base64 encoded pickles - at the moment I can translate to and from strings in this trivial format with some simple code like this:
def dumps( objinput ):
"""
Return an encoded cPickle
"""
return cpickle_dumps( objinput ).encode( ENCODING )
def loads( strinput ):
"""
Return an object from an encoded cpickle
"""
return cpickle_loads( strinput.decode( ENCODING ) )
I'd like to implement a new function, called "load" which works much like the standard pickle load function except that it will read just enough bytes from a stream and then return the decoded object. The trick here is to read just enough bytes - the cPickle.load function does this, it pops bytes off the stream until the pickled data is complete. My new function needs to look like this:
def load( stream_input ):
""""
Return just one object popped from the stream.
If the stream has ended, raise an exception
Do not over-read the stream.
How might I do this when the data is base64 encoded pickle? The difficulty here seems to be that it's not obvious that this can be done either by wrapping the cPickle.load function or by wrapping the input stream, an additional difficulty is that what may be one-byte of pickle data may be encoded to more than one byte of base64 so it's not clear how to adapt stream_input.read(n) into something which is guaranteed to return n bytes of decoded data when the encoded input stream is base64.
A:
I think you can in fact do this by implementing a file-like object, which can then be passed directly to cPickle.load
class Base64PickleAdapter:
    def __init__(self, base64_input):
        ...

    def read(self, size=-1):
        ...

    def readline(self):
        ...  # Can be implemented in terms of read.

cPickle only requires read and readline. You don't need to return size bytes from Base64PickleAdapter.read; the read contract allows you to return less, which should simplify the implementation.
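A sketch of that adapter: base64 decodes in 4-character groups (3 bytes each), so the adapter reads the encoded stream in multiples of 4 and keeps the decoded surplus in a buffer. This assumes the pickled bytes were encoded as one continuous base64 stream; it is shown with the Python 3 pickle module, but cPickle on Python 2 uses the same read/readline protocol:

```python
import base64
import io
import pickle

class Base64PickleAdapter:
    """File-like wrapper that base64-decodes lazily, in 4-char groups."""

    def __init__(self, stream):
        self._stream = stream  # file-like object yielding base64 text
        self._buf = b""        # decoded bytes not yet handed out

    def _fill_once(self, nchars):
        # nchars is always a multiple of 4; files/StringIO then return
        # complete groups, since a base64 body's length is a multiple of 4.
        chunk = self._stream.read(nchars)
        if not chunk:
            return False
        self._buf += base64.b64decode(chunk)
        return True

    def read(self, size=-1):
        if size < 0:
            while self._fill_once(4096):
                pass
            out, self._buf = self._buf, b""
            return out
        while len(self._buf) < size:
            need = size - len(self._buf)
            if not self._fill_once(-(-need // 3) * 4):  # ceil(need/3) groups
                break
        out, self._buf = self._buf[:size], self._buf[size:]
        return out

    def readline(self):
        # pickle also calls readline (protocol 0); build it from read.
        line = b""
        while True:
            ch = self.read(1)
            line += ch
            if not ch or ch == b"\n":
                return line

# Two pickles concatenated, then encoded once; each load pops exactly one.
encoded = base64.b64encode(
    pickle.dumps({"debit": 42}) + pickle.dumps([1, 2])).decode("ascii")
adapter = Base64PickleAdapter(io.StringIO(encoded))
first = pickle.load(adapter)    # pops just the first object
second = pickle.load(adapter)   # the stream was not over-read
```

A third load on the exhausted adapter raises EOFError, matching the "raise an exception at end of stream" requirement.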
| Reading and writing pickles to an encoded stream | A file format commonly used in our system is base64 encoded pickles - at the moment I can translate to and from strings in this trivial format with some simple code like this:
def dumps( objinput ):
"""
Return an encoded cPickle
"""
return cpickle_dumps( objinput ).encode( ENCODING )
def loads( strinput ):
"""
Return an object from an encoded cpickle
"""
return cpickle_loads( strinput.decode( ENCODING ) )
I'd like to implement a new function, called "load" which works much like the standard pickle load function except that it will read just enough bytes from a stream and then return the decoded object. The trick here is to read just enough bytes - the cPickle.load function does this, it pops bytes off the stream until the pickled data is complete. My new function needs to look like this:
def load( stream_input ):
""""
Return just one object popped from the stream.
If the stream has ended, raise an exception
Do not over-read the stream.
How might I do this when the data is base64 encoded pickle? The difficulty here seems to be that it's not obvious that this can be done either by wrapping the cPickle.load function or by wrapping the input stream, an additional difficulty is that what may be one-byte of pickle data may be encoded to more than one byte of base64 so it's not clear how to adapt stream_input.read(n) into something which is guaranteed to return n bytes of decoded data when the encoded input stream is base64.
| [
"I think you can in fact do this by implementing a file-like object, which can then be passed directly to cPickle.load\nclass Base64PickleAdapter:\n def __init__(self, base64_input):\n ...\n\n def read(size=-1):\n ... \n\n def readline():\n ... # Can be implemented in terms of read.\n\ncPickle only requires read and readline. You don't need to return size bytes from Base64PickleAdapter.read. The read contract allow you to return less, which should simplify implementation.\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0002966042_python.txt |
Q:
How can I get mounted name and (Drive letter too) on Windows using python
I am using Daemon Tools to mount an ISO image on a Windows XP machine. I do the mount using the Daemon command (daemon.exe -mount 0,iso_path).
The above command will mount the ISO image to a device number. In my case I have 4 partitions (C,D,E,F) and G for the DVD/CD-RW. Now what happens: the ISO gets mounted to drive letter 'H:' with a name (as defined while creating the ISO), say 'testmount'.
My queries:-
1) How can I get mount name of mounted ISO image (i.e. 'testmount').
Just another case: if there are already some mount points on the machine and I create a new one using Daemon Tools, then if I can get the latest one using a script, that will be great.
2) How to get drive letter where it did get mounted.
If anyone know python script or command (or even Win command ) to get these info. do let me know.
Thanks...
A:
The daemon tools exe itself has some command line parameters :
-get_count and -get_letter
But for me these do not work in the latest version (DLite).
Instead you can use the commands :
mountvol - lists all the mounted drives
dir - you can parse the output to get the volume label
What you should do is run mountvol before daemon, and after, so you can detect the new drive letter. After that use "dir" to get the volume label.
I believe you can run these commands using the os.system() call in python
A:
You can list drives using wmi console:
C:\>wmic logicaldisk get Name, DriveType
The numeric values of the drive types will let you distinguish between different types.
The WMI is available is python module as well, though this needs to be installed separately.
A:
Adding to newtover's answer, getting the list of drives from the wmic console output:
[i.strip() for i in os.popen('wmic logicaldisk get Name').readlines() if i.strip() != ''][1:]
| How can I get mounted name and (Drive letter too) on Windows using python | I am using Daemon tool to mount an ISO image on Windows XP machine.I do mount using Daemon command (daemon.exe -mount 0,iso_path).
Above command will mount ISO image to device number. In my case I have 4 partition (C,D,E,F) and G for DVD/CD-RW. Now what happen, ISO gets mounted to drive letter 'H:' with name (as defined while creating ISO) say 'testmount'.
My queries:-
1) How can I get mount name of mounted ISO image (i.e. 'testmount').
Just another case; if there are already some mount points existing on machine and I created a new one using Daemon tool. Then If I can get latest one using script that will be great.
2) How to get drive letter where it did get mounted.
If anyone know python script or command (or even Win command ) to get these info. do let me know.
Thanks...
| [
"The daemon tools exe itself has some command line parameters :\n-get_count and -get_letter\nBut for me these do not work in the latest version (DLite).\nInstead you can use the commands :\nmountvol - lists all the mounted drives\ndir - you can parse the output to get the volume label\nWhat you should do is run mountvol before daemon, and after, so you can detect the new drive letter. After that use \"dir\" to get the volume label.\nI believe you can run these commands using the os.system() call in python\n",
"You can list drives using wmi console:\nC:\\>wmic logicaldisk get Name, DriveType\n\nThe numeric values of the drive types will let you distinguish between different types.\nThe WMI is available is python module as well, though this needs to be installed separately.\n",
"adding newtover, getting list of drives from wmi console output\n[i.strip() for i in os.popen('wmic logicaldisk get Name').readlines() if i.strip()<>''][1:]\n"
] | [
1,
1,
0
] | [] | [] | [
"daemon",
"iso",
"mount",
"python"
] | stackoverflow_0002673236_daemon_iso_mount_python.txt |
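To make the before/after suggestion in the first answer concrete, here is a small sketch of my own (not taken from the answers): it shells out to wmic and diffs the drive set around the daemon.exe call. The parsing helper is plain string handling; the wmic and daemon.exe invocations themselves are of course Windows-only assumptions:

```python
import subprocess

def parse_wmic_names(output):
    """Extract values from single-column `wmic ... get Name` output:
    the first non-blank line is the header, the rest are drive letters."""
    lines = [line.strip() for line in output.splitlines() if line.strip()]
    return lines[1:]  # drop the "Name" header row

def list_drives():
    # Windows-only; assumes wmic.exe is on PATH (it is on XP and later)
    out = subprocess.check_output(['wmic', 'logicaldisk', 'get', 'Name'])
    return parse_wmic_names(out.decode('ascii', 'replace'))

# Typical use around the daemon.exe call (hypothetical paths):
#   before = set(list_drives())
#   subprocess.call(['daemon.exe', '-mount', '0,' + iso_path])
#   after = set(list_drives())
#   new_drive = (after - before).pop()   # e.g. 'H:'
```

The volume label ('testmount') could then be read for the new drive letter, for example by parsing `dir` output as the first answer suggests.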
Q:
multi threading python/ruby vs java?
I wonder if multithreading in Python/Ruby is equivalent to the one in Java?
By that I mean, is it as efficient?
Because if you want to create a chat application that uses Comet technology, I know that you have to use multithreading.
Does this mean that I can use Python or Ruby for that, or is it better with Java?
Thanks
A:
This is not a question about Ruby, Python or Java, but more about a specific implementation of Ruby, Python or Java. There are Java implementations with extremely efficient threading implementations and there are Java implementations with extremely bad threading implementations. And the same is true for Ruby and Python, and really basically any language at all.
Even languages like Erlang, where an inefficient threading implementation doesn't even make sense, sometimes have bad threading implementations.
For example, if you use JRuby or Jython, then your Ruby and Python threads are Java threads. So, they are not only as efficient as Java threads, they are exactly the same as Java threads.
A:
Both Ruby and Python use a global interpreter lock. The issue is discussed in detail here: Does ruby have real multithreading?
A:
philosodad is not wrong to point out the constraint that the GIL presents. I won't speak for Ruby, but I am sure that it's safe to assume that when you refer to Python that you are in fact referring to the canonical cPython implementation.
In the case of cPython, the GIL matters most if you want to parallelize computationally intensive operations implemented in Python (as in not in C extensions where the GIL can be released).
However, when you are writing a non-intensive I/O-bound application such as a chat program, the efficiency of the threading implementation really just doesn't matter all that much.
| multi threading python/ruby vs java? | i wonder if the multi threading in python/ruby is equivalent to the one in java?
by that i mean, is it as efficient?
cause if you want to create a chat application that use comet technology i know that you have to use multi threading.
does this mean that i can use python or ruby for that or is it better with java?
thanks
| [
"This is not a question about Ruby, Python or Java, but more about a specific implementation of Ruby, Python or Java. There are Java implementations with extremely efficient threading implementations and there are Java implementations with extremely bad threading implementations. And the same is true for Ruby and Python, and really basically any language at all.\nEven languages like Erlang, where an inefficient threading implementation doesn't even make sense, sometimes have bad threading implementations.\nFor example, if you use JRuby or Jython, then your Ruby and Python threads are Java threads. So, they are not only as efficient as Java threads, they are exactly the same as Java threads.\n",
"Both Ruby and Python use a global interpreter lock. The issue is discussed in detail here: Does ruby have real multithreading?\n",
"philosodad is not wrong to point out the constraint that the GIL presents. I won't speak for Ruby, but I am sure that it's safe to assume that when you refer to Python that you are in fact referring to the canonical cPython implementation. \nIn the case of cPython, the GIL matters most if you want to parallelize computationally intensive operations implemented in Python (as in not in C extensions where the GIL can be released).\nHowever, when you are writing a non-intensive I/O-bound application such as a chat program, the efficiency of the threading implementation really just doesn't matter all that much. \n"
] | [
10,
3,
1
] | [] | [] | [
"java",
"python",
"ruby"
] | stackoverflow_0002963615_java_python_ruby.txt |
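The last answer's point, that the GIL barely matters for I/O-bound work like chat, is easy to demonstrate. In the sketch below (my own illustration, not from the answers), time.sleep stands in for a blocking socket read; like real I/O, it releases the GIL while blocked, so five waits that would take 1 second serially finish in roughly 0.2 seconds:

```python
import threading
import time

results = []

def fake_client(name, delay=0.2):
    # time.sleep, like socket I/O, releases the GIL while blocked
    time.sleep(delay)
    results.append(name)  # list.append is atomic under the GIL

threads = [threading.Thread(target=fake_client, args=(n,)) for n in range(5)]
start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
# elapsed is ~0.2s rather than 5 * 0.2s: the waits overlapped
```

The GIL only serializes CPU-bound Python bytecode; threads that spend their time waiting on the network, as a Comet chat server's do, run concurrently in any of the implementations discussed above.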
Q:
Python and IronPython on same machine?
I am a total newbie in the Python world.
I want to start to experiment with Python and IronPython and compare the results.
Is it possible to install Python and IronPython on the same machine without them interfering with each other, or is it better to do this in a virtual machine?
Thx in advance.
A:
Yes, Python and IronPython are completely different applications that happen to implement (almost) the same language.
A:
Should be no problem, they have different executable filenames also.
A:
Sure, you could even install different versions of cPython interpreter (2.5, 2.6, 3.0, etc).
| Python and IronPython on same machine? | I am a total newbie in the Python world.
I want to start to experiment with Python and IronPython and compare the results.
Is it possible to install Python and IronPython on the same machine without them interfering with each other, or is it better to do this in a virtual machine?
Thx in advance.
| [
"Yes, Python and IronPython are completely different applications that happen to implement (almost) the same language.\n",
"Should be no problem, they have different executable filenames also.\n",
"Sure, you could even install different versions of cPython interpreter (2.5, 2.6, 3.0, etc). \n"
] | [
6,
1,
0
] | [] | [] | [
"ironpython",
"python"
] | stackoverflow_0002964910_ironpython_python.txt |
Q:
Modify Django admin app index
I want to change the app index page so I can add help text to the models themselves, i.e. under each model I want to add help text. I know that I should override AdminSite.app_index. What is the best way to do this?
A:
I can create a new AdminSite subclass, and override app_index method to send the help text to the template. In urls.py I can use an instance of MyAdminSite instead of django's vanilla AdminSite.
# urls.py
from mysite.admin import MyAdminSite
site = MyAdminSite()
urlpatterns = patterns('',
    (r'^admin/', include(site.urls)),
)
# app/admin.py
site.register(MyModel)
| Modify Django admin app index | I want to change the app index page so I add help text to the models themselves, e.g. under each model I want to add help text. I know that I should override AdminSite.app_index. What is the best way to do this?
| [
"I can create a new AdminSite subclass, and override app_index method to send the help text to the template. In urls.py I can use an instance of MyAdminSite instead of django's vanilla AdminSite.\n# urls.py\nfrom mysite.admin import MyAdminSite\nsite = MyAdminSite()\n\nurlpatterns = patterns('', \n (r'^admin/', include(site.urls)),\n)\n\n# app/admin.py\nsite.register(MyModel)\n\n"
] | [
1
] | [] | [] | [
"django",
"django_admin",
"python"
] | stackoverflow_0002966300_django_django_admin_python.txt |
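The MyAdminSite subclass referenced in the answer isn't actually shown; a minimal sketch might look like the following. This is an assumption on my part: AdminSite.app_index accepts an extra_context dict, so the help text can be injected through it and rendered by an overridden admin/app_index.html template (the MODEL_HELP dict and logger-free structure here are placeholders, not a tested implementation):

```python
# mysite/admin.py -- sketch, not tested against a particular Django version
from django.contrib.admin import AdminSite

# Hypothetical mapping of model names to their help text
MODEL_HELP = {
    'mymodel': 'Help text shown under MyModel on the app index page.',
}

class MyAdminSite(AdminSite):
    def app_index(self, request, app_label, extra_context=None):
        extra_context = extra_context or {}
        # An overridden admin/app_index.html template would look up
        # each model's help text in this dict when rendering the list
        extra_context['model_help'] = MODEL_HELP
        return super(MyAdminSite, self).app_index(
            request, app_label, extra_context=extra_context)
```

The urls.py wiring shown in the answer then picks this subclass up unchanged.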
Q:
More efficient web framework than Web.py? Extremely Pythonic please!
I love webpy; it's really quite Pythonic, but I don't like having to add the URL mappings and create a class, typically with just one function inside it.
I'm interested in minimising code typing and prototyping fast.
Does anyone have any up and coming suggestions such as Bobo, Nagare, Bottle, Flask, Denied, cherrypy for a lover of webpy's good things?
What makes it a good reason?
Also I don't mind missing out (strongly) text based templating systems, I use object oriented HTML generation. Code should be able to look something like this:
def addTask(task):
    db.tasks.append({'task':task,'done':False})
    return 'Task Added'

def listTasks():
    d = doc()
    d.body.Add(Ol(id='tasks'))
    for task in db.tasks:
        taskStatus = 'notDoneTask'
        if task.done: taskStatus = 'doneTask'
        d.body.tasks.Add(Li(task.task,Class=taskStatus))
    return d
Minimalistic CherryPy is looking like a strong contender at the moment. Will there be a last minute save by another?
A:
Flask, Armin Ronacher's microframework built on top of Werkzeug, Jinja2 and good intentions (though you can use whichever templating engine you like, or none at all), does URL mapping very concisely.
@app.route("/")
def index():
    return """Hello, world. <a href="/thing/spam_eggs">Here's a thing.</a>"""

@app.route("/thing/<id>")
def show_thing(id):
    return "Now showing you thing %s."%id
    # (or:) return render_template('thing.html', id = id)
Maybe that's what you're looking for?
A:
CherryPy allows you to hook up handlers in a tree instead of regexes. Where web.py might write:
urls = (
    '/', 'Index',
    '/del/(\d+)', 'Delete'
)

class Index:
    def GET(self): ...

class Delete:
    def POST(self, id): ...

The equivalent CherryPy would be (note that del is a reserved word in Python, so the handler attribute is named delete here, which maps it to /delete/ instead of /del/):
class Delete:
    def POST(self, id): ...

class Index:
    delete = Delete()
    def GET(self): ...

You can even dispense with classes entirely in CherryPy:
def delete(id): ...
def index(): ...
index.delete = delete
A:
I was a user of webpy. And lately, I have found django, and I think that it is great. You can just focus on your business logic and the framework will do most things for you.
| More efficient web framework than Web.py? Extremely Pythonic please! | I love webpy, it's really quite Pythonic but I don't like having to add the url mappings and create a class, typically with just 1 function inside it.
I'm interested in minimising code typing and prototyping fast.
Does anyone have any up and coming suggestions such as Bobo, Nagare, Bottle, Flask, Denied, cherrypy for a lover of webpy's good things?
What makes it a good reason?
Also I don't mind missing out (strongly) text based templating systems, I use object oriented HTML generation. Code should be able to look something like this:
def addTask(task):
    db.tasks.append({'task':task,'done':False})
    return 'Task Added'

def listTasks():
    d = doc()
    d.body.Add(Ol(id='tasks'))
    for task in db.tasks:
        taskStatus = 'notDoneTask'
        if task.done: taskStatus = 'doneTask'
        d.body.tasks.Add(Li(task.task,Class=taskStatus))
    return d
Minimalistic CherryPy is looking like a strong contender at the moment. Will there be a last minute save by another?
| [
"Flask, Armin Ronacher's microframework built on top of Werkzeug, Jinja2 and good intentions (though you can use whichever templating engine you like, or none at all), does URL mapping very concisely.\n@app.route(\"/\")\ndef index():\n return \"\"\"Hello, world. <a href=\"/thing/spam_eggs\">Here's a thing.</a>\"\"\"\n\n@app.route(\"/thing/<id>\")\ndef show_thing(id):\n return \"Now showing you thing %s.\"%id\n # (or:) return render_template('thing.html', id = id)\n\nMaybe that's what you're looking for?\n",
"CherryPy allows you to hook up handlers in a tree instead of regexes. Where web.py might write:\nurls = (\n '/', 'Index',\n '/del/(\\d+)', 'Delete'\n)\n\nclass Index:\n def GET(self): ...\n\nclass Delete:\n def POST(self, id): ...\n\nThe equivalent CherryPy would be:\nclass Delete:\n def POST(self, id): ....\n\nclass Index:\n del = Delete()\n def GET(self): ...\n\nYou can even dispense with classes entirely in CherryPy:\ndef delete(id): ...\ndef index(): ...\nindex.del = delete\n\n",
"I was a user of webpy. And lately, I have found django, and I think that it is great. You can just focus on your business logic and the framework will do most things for you. \n"
] | [
9,
8,
1
] | [] | [] | [
"cherrypy",
"python",
"web.py",
"web_applications"
] | stackoverflow_0002964281_cherrypy_python_web.py_web_applications.txt |
Q:
How to transition from PHP to Python Django?
Here's my background:
Decent experience with PHP/MySql.
Beginner's experience with OOP
Why I want to learn Python Django?
I gave in; based on many searches on SO and reading over some of the answers, Python is a great, clean, and structured language to learn. And with the Django framework, it's easier to write code that is shorter than with PHP.
Questions
Can i do everything in Django as in PHP?
Is Django a "big" hit in web development as PHP? I know Python is a
great general-purpose language but I'm
focused on web development and would
like to know how Django ranks in terms
of web development.
With PHP, PHP and Mysql are VERY closely related, is there a close relation between Django and Mysql?
In PHP, you can easily switch between HTML, CSS, PHP all in one script. Does Python offer this type of ease between other languages? Or how do I incorporate HTML, CSS, javascript along with Python?
A:
Can i do everything in Django as in PHP?
Always
Is Django a "big" hit in web development as PHP?
Only time will tell.
With PHP, PHP and Mysql are VERY closely related, is there a close relation between Django and Mysql?
Django supports several RDBMS interfaces. MySQL is popular, so is SQLite and Postgres.
In PHP, you can easily switch between HTML, CSS, PHP all in one script.
That doesn't really apply at all to Django.
Or how do I incorporate HTML, CSS, javascript along with Python?
Actually do the Django tutorial. You'll see how the presentation (via HTML created by templates) and the processing (via Python view functions) fit together. It's not like PHP.
A:
Yes.
It's very hard to tell exactly how popular it is.
MySQL is officially supported.
Yes, but probably not in the way you think. Please read this and also follow the tutorial that S.Lott mentions.
A:
No. You can only do a LOT better.
Awesome, popular. Supported by best hosters like Mediatemple.
No. You can just change 'mysql' to 'postgresql' or 'sqlite' in your settings.py.
NO! Python would never give you the right to mix up everything in one file and make the shittest shit in the world. Templates, static server.
Django is a Model-Template-View framework, great for any applications, from small to huge. PHP works fine only with small apps. Yeah, PHP == Personal Home Page, lol.
P.S. Also you can minify your CSS and JS. And compile to one single file (one js, one css). All with django-assets. And yeah, there's a lot more reusable Django apps (for registration, twi/facebook/openid auth, oembed, other stuff). Just search Bitbucket and Github for "django". No need to reinvent a bicycle, like you do with PHP.
A:
In PHP, you can easily switch between
HTML, CSS, PHP all in one script. Does
Python offer this type of ease between
other languages? Or how do I
incorporate HTML, CSS, javascript
along with Python?
That's one of the reasons why PHP is so easy to learn. And it's also exactly why so many (if not most) PHP projects are such a complete mess. It's what leads to the "spaghetti code" syndrome.
Django is all about complete separation of page design from view logic from URL routing (in fact this is true of most modern MVC or MTV frameworks). So templates are in one place, data structure definitions are in another, and the logic that defines their interaction is in another. It takes a bit of getting used to, but has a huge payoff.
Another thing that takes getting used to for people coming from PHP is that fact that file and foldernames no longer have a direct bearing on the URL. For example in PHP, you might have foldername/filename.php and the URL would be http://example.com/foldername/filename.php. It doesn't work like that in Django. Instead, you define a URL structure in a file (urls.py). In that "map" you define which piece of logic ("view code") will be called when a matching URL is intercepted. Everything is abstracted like that. The result is a much cleaner, more logical site layout and logic.
| How to transition from PHP to Python Django? | Here's my background:
Decent experience with PHP/MySql.
Beginner's experience with OOP
Why I want to learn Python Django?
I gave in, based on many searches on SO and reading over some of the answers, Python is a great, clean, and structured language to learn. And with the framework Django, it's easier to write codes that are shorter than with PHP
Questions
Can i do everything in Django as in PHP?
Is Django a "big" hit in web development as PHP? I know Python is a
great general-purpose language but I'm
focused on web development and would
like to know how Django ranks in terms
of web development.
With PHP, PHP and Mysql are VERY closely related, is there a close relation between Django and Mysql?
In PHP, you can easily switch between HTML, CSS, PHP all in one script. Does Python offer this type of ease between other languages? Or how do I incorporate HTML, CSS, javascript along with Python?
| [
"\nCan i do everything in Django as in PHP?\n\nAlways\n\nIs Django a \"big\" hit in web development as PHP?\n\nOnly time will tell.\n\nWith PHP, PHP and Mysql are VERY closely related, is there a close relation between Django and Mysql?\n\nDjango supports several RDBMS interfaces. MySQL is popular, so is SQLite and Postgres.\n\nIn PHP, you can easily switch between HTML, CSS, PHP all in one script.\n\nThat doesn't really apply at all to Django.\n\nOr how do I incorporate HTML, CSS, javascript along with Python?\n\nActually do the Django tutorial. You'll see how the presentation (via HTML created by templates) and the processing (via Python view functions) fit together. It's not like PHP.\n",
"\nYes.\nIt's very hard to tell exactly how popular it is. \nMySQL is officially supported.\nYes, but probably not in the way you think. Please read this and also follow the tutorial that S.Lott mentions.\n\n",
"\nNo. You can only do a LOT better.\nAwesome, popular. Supported by best hosters like Mediatemple.\nNo. You can just change 'mysql' to 'postgresql' or 'sqlite' in your settings.py.\nNO! Python would never give you the right to mix up everything in one file and make the shittest shit in the world. Templates, static server.\n\nDjango is a Model-Template-View framework, great for any applications, from small to huge. PHP works fine only with small apps. Yeah, PHP == Personal Home Page, lol.\nP.S. Also you can minify your CSS and JS. And compile to one single file (one js, one css). All with django-assets. And yeah, there's a lot more reusable Django apps (for registration, twi/facebook/openid auth, oembed, other stuff). Just search Bitbucket and Github for \"django\". No need to reinvent a bicycle, like you do with PHP.\n",
"\nIn PHP, you can easily switch between\n HTML, CSS, PHP all in one script. Does\n Python offer this type of ease between\n other languages? Or how do I\n incorporate HTML, CSS, javascript\n along with Python?\n\nThat's one of the reasons why PHP is so easy to learn. And it's also exactly why so many (if not most) PHP projects are such a complete mess. It's what leads to the \"spaghetti code\" syndrome.\nDjango is all about complete separation of page design from view logic from URL routing (in fact this is true of most modern MVC or MTV frameworks). So templates are in one place, data structure definitions are in another, and the logic that defines their interaction is in another. It takes a bit of getting used to, but has a huge payoff. \nAnother thing that takes getting used to for people coming from PHP is that fact that file and foldernames no longer have a direct bearing on the URL. For example in PHP, you might have foldername/filename.php and the URL would be http://example.com/foldername/filename.php. It doesn't work like that in Django. Instead, you define a URL structure in a file (urls.py). In that \"map\" you define which piece of logic (\"view code\") will be called when a matching URL is intercepted. Everything is abstracted like that. The result is a much cleaner, more logical site layout and logic.\n"
] | [
6,
1,
1,
1
] | [] | [] | [
"django",
"php",
"python"
] | stackoverflow_0002961402_django_php_python.txt |
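The urls.py dispatch described in the last answer can be illustrated with a framework-free toy (this is my own simplification of the idea, not Django's actual code): a list of (regex, view) pairs is tried in order, and named groups in the pattern become keyword arguments of the view. This is why, unlike in PHP, file layout no longer dictates URLs:

```python
import re

# Views: plain functions, living wherever you like
def index(request):
    return 'index page'

def show_page(request, name):
    return 'page %s' % name

# The "urls.py": URL structure is declared in one place
url_patterns = [
    (r'^$', index),
    (r'^folder/(?P<name>\w+)/$', show_page),
]

def resolve(path, request=None):
    # Toy version of Django's dispatcher: first matching regex wins,
    # named groups are passed to the view as keyword arguments
    for pattern, view in url_patterns:
        match = re.match(pattern, path)
        if match:
            return view(request, **match.groupdict())
    raise LookupError('no URL pattern matched %r' % path)
```

For example, resolve('folder/filename/') calls show_page with name='filename', even though no file named "filename" exists anywhere on disk.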
Q:
How to force PyYAML to load strings as unicode objects?
The PyYAML package loads unmarked strings as either unicode or str objects, depending on their content.
I would like to use unicode objects throughout my program (and, unfortunately, can't switch to Python 3 just yet).
Is there an easy way to force PyYAML to always load strings as unicode objects? I do not want to clutter my YAML with !!python/unicode tags.
# Encoding: UTF-8
import yaml
menu= u"""---
- spam
- eggs
- bacon
- crème brûlée
- spam
"""
print yaml.load(menu)
Output: ['spam', 'eggs', 'bacon', u'cr\xe8me br\xfbl\xe9e', 'spam']
I would like: [u'spam', u'eggs', u'bacon', u'cr\xe8me br\xfbl\xe9e', u'spam']
A:
Here's a version which overrides the PyYAML handling of strings by always outputting unicode. In reality, this is probably the identical result of the other response I posted except shorter (i.e. you still need to make sure that strings in custom classes are converted to unicode or passed unicode strings yourself if you use custom handlers):
# -*- coding: utf-8 -*-
import yaml
from yaml import Loader, SafeLoader

def construct_yaml_str(self, node):
    # Override the default string handling function
    # to always return unicode objects
    return self.construct_scalar(node)
Loader.add_constructor(u'tag:yaml.org,2002:str', construct_yaml_str)
SafeLoader.add_constructor(u'tag:yaml.org,2002:str', construct_yaml_str)

print yaml.load(u"""---
- spam
- eggs
- bacon
- crème brûlée
- spam
""")
(The above gives [u'spam', u'eggs', u'bacon', u'cr\xe8me br\xfbl\xe9e', u'spam'])
I haven't tested it on LibYAML (the c-based parser) as I couldn't compile it though, so I'll leave the other answer as it was.
A:
Here's a function you could use to use to replace str with unicode types from the decoded output of PyYAML:
def make_str_unicode(obj):
    t = type(obj)

    if t in (list, tuple):
        if t == tuple:
            # Convert to a list if a tuple to
            # allow assigning to when copying
            is_tuple = True
            obj = list(obj)
        else:
            # Otherwise just do a quick slice copy
            obj = obj[:]
            is_tuple = False

        # Copy each item recursively
        for x in xrange(len(obj)):
            obj[x] = make_str_unicode(obj[x])

        if is_tuple:
            # Convert back into a tuple again
            obj = tuple(obj)

    elif t == dict:
        # Iterate over a snapshot of the keys and pop/re-insert each
        # entry: because 'a' and u'a' compare and hash equal in Python 2,
        # plain assignment would keep the original str key object and
        # the keys would never actually become unicode
        for k in obj.keys():
            v = make_str_unicode(obj.pop(k))
            if type(k) == str:
                # Make dict keys unicode
                k = unicode(k)
            obj[k] = v

    elif t == str:
        # Convert strings to unicode objects
        obj = unicode(obj)
    return obj

print make_str_unicode({'blah': ['the', 'quick', u'brown', 124]})
| How to force PyYAML to load strings as unicode objects? | The PyYAML package loads unmarked strings as either unicode or str objects, depending on their content.
I would like to use unicode objects throughout my program (and, unfortunately, can't switch to Python 3 just yet).
Is there an easy way to force PyYAML to always load strings as unicode objects? I do not want to clutter my YAML with !!python/unicode tags.
# Encoding: UTF-8
import yaml
menu= u"""---
- spam
- eggs
- bacon
- crème brûlée
- spam
"""
print yaml.load(menu)
Output: ['spam', 'eggs', 'bacon', u'cr\xe8me br\xfbl\xe9e', 'spam']
I would like: [u'spam', u'eggs', u'bacon', u'cr\xe8me br\xfbl\xe9e', u'spam']
| [
"Here's a version which overrides the PyYAML handling of strings by always outputting unicode. In reality, this is probably the identical result of the other response I posted except shorter (i.e. you still need to make sure that strings in custom classes are converted to unicode or passed unicode strings yourself if you use custom handlers):\n# -*- coding: utf-8 -*-\nimport yaml\nfrom yaml import Loader, SafeLoader\n\ndef construct_yaml_str(self, node):\n # Override the default string handling function \n # to always return unicode objects\n return self.construct_scalar(node)\nLoader.add_constructor(u'tag:yaml.org,2002:str', construct_yaml_str)\nSafeLoader.add_constructor(u'tag:yaml.org,2002:str', construct_yaml_str)\n\nprint yaml.load(u\"\"\"---\n- spam\n- eggs\n- bacon\n- crème brûlée\n- spam\n\"\"\")\n\n(The above gives [u'spam', u'eggs', u'bacon', u'cr\\xe8me br\\xfbl\\xe9e', u'spam'])\nI haven't tested it on LibYAML (the c-based parser) as I couldn't compile it though, so I'll leave the other answer as it was.\n",
"Here's a function you could use to use to replace str with unicode types from the decoded output of PyYAML:\ndef make_str_unicode(obj):\n t = type(obj)\n\n if t in (list, tuple):\n if t == tuple:\n # Convert to a list if a tuple to \n # allow assigning to when copying\n is_tuple = True\n obj = list(obj)\n else: \n # Otherwise just do a quick slice copy\n obj = obj[:]\n is_tuple = False\n\n # Copy each item recursively\n for x in xrange(len(obj)):\n obj[x] = make_str_unicode(obj[x])\n\n if is_tuple: \n # Convert back into a tuple again\n obj = tuple(obj)\n\n elif t == dict: \n for k in obj:\n if type(k) == str:\n # Make dict keys unicode\n k = unicode(k)\n obj[k] = make_str_unicode(obj[k])\n\n elif t == str:\n # Convert strings to unicode objects\n obj = unicode(obj)\n return obj\n\nprint make_str_unicode({'blah': ['the', 'quick', u'brown', 124]})\n\n"
] | [
28,
3
] | [] | [] | [
"python",
"python_2.x",
"pyyaml"
] | stackoverflow_0002890146_python_python_2.x_pyyaml.txt |
Q:
Send a .png file using python cgi
How can I send a .png file using python cgi to a flex application?
Thanks in advance...
A:
The Python/CGI side of your question can be as simple as something like this, if you just need to send an existing image:
import sys
# Send the Content-Type header to let the client know what you're sending
sys.stdout.write('Content-Type: image/png\r\n\r\n')
# Send the actual image data
with open('path/to/image.png', 'rb') as f:
sys.stdout.write(f.read())
If, on the other hand, you're dynamically creating images with, say, PIL, you can do something along these lines:
import sys
import Image, ImageDraw # or whatever
sys.stdout.write('Content-Type: image/png\r\n\r\n')
# Dynamically create an image
image = Image.new('RGB', (100, 100))
# ... etc ...
# Send the image to the client
image.save(sys.stdout, 'PNG')
| Send a .png file using python cgi | How can I send a .png file using python cgi to a flex application?
Thanks in advance...
| [
"The Python/CGI side of your question can be as simple as something like this, if you just need to send an existing image:\nimport sys\n\n# Send the Content-Type header to let the client know what you're sending\nsys.stdout.write('Content-Type: image/png\\r\\n\\r\\n')\n\n# Send the actual image data\nwith open('path/to/image.png', 'rb') as f:\n sys.stdout.write(f.read())\n\nIf, on the other hand, you're dynamically creating images with, say, PIL, you can do something along these lines:\nimport sys\nimport Image, ImageDraw # or whatever\n\nsys.stdout.write('Content-Type: image/png\\r\\n\\r\\n')\n\n# Dynamically create an image\nimage = Image.new('RGB', (100, 100))\n# ... etc ...\n\n# Send the image to the client\nimage.save(sys.stdout, 'PNG')\n\n"
] | [
3
] | [] | [] | [
"apache_flex",
"cgi",
"python"
] | stackoverflow_0002965726_apache_flex_cgi_python.txt |
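If the script ever has to serve more than PNGs, the Content-Type line can be guessed from the filename with the stdlib mimetypes module rather than hard-coded. This helper is a small addition of mine, not part of the answer above:

```python
import mimetypes

def cgi_header_for(filename):
    # Guess the Content-Type from the file extension; fall back to a
    # generic binary type when the extension is unknown
    ctype = mimetypes.guess_type(filename)[0] or 'application/octet-stream'
    return 'Content-Type: %s\r\n\r\n' % ctype
```

sys.stdout.write(cgi_header_for('photo.png')) would then emit exactly the image/png header used in the answer, and the same script could serve JPEGs or GIFs unchanged.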
Q:
How do I configure the Python logging module in Django?
I'm trying to configure logging for a Django app using the Python logging module. I have placed the following bit of configuration code in my Django project's settings.py file:
import logging
import logging.handlers
import os
date_fmt = '%m/%d/%Y %H:%M:%S'
log_formatter = logging.Formatter(u'[%(asctime)s] %(levelname)-7s: %(message)s (%(filename)s:%(lineno)d)', datefmt=date_fmt)
log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
log_name = os.path.join(log_dir, "nyrb.log")
bytes = 1024 * 1024 # 1 MB
if not os.path.exists(log_dir):
    os.makedirs(log_dir)
handler = logging.handlers.RotatingFileHandler(log_name, maxBytes=bytes, backupCount=7)
handler.setFormatter(log_formatter)
handler.setLevel(logging.DEBUG)
logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger().addHandler(handler)
logging.getLogger(__name__).info("Initialized logging subsystem")
At startup, I get a couple Django-related messages, as well as the "Initialized logging subsystem", in the log files, but then all the log messages end up going to the web server logs (/var/log/apache2/error.log, since I'm using Apache), and use the standard log format (not the formatter I designated). Am I configuring logging incorrectly?
A:
Kind of anti-climactic, but it turns out there was a third-party app installed in the project that had its own logging configuration that was overriding the one I set up (it modified the root logger, for some reason -- not very kosher for a Django app!). Removed that code and everything works as expected.
A:
I used this with success (although it does not rotate):
# in settings.py
import logging
logging.basicConfig(
    level = logging.DEBUG,
    format = '%(asctime)s %(levelname)s %(funcName)s %(lineno)d \
              \033[35m%(message)s\033[0m',
    datefmt = '[%d/%b/%Y %H:%M:%S]',
    filename = '/tmp/my_django_app.log',
    filemode = 'a'
)
I'd suggest to try an absolute path, too.
A:
See this other answer. Note that settings.py is usually imported twice, so you should avoid creating multiple handlers. Better logging support is coming to Django in 1.3 (hopefully), but for now you should ensure that if your setup code is called more than once, there are no adverse effects.
I'm not sure why your logged messages are going to the Apache logs, unless you've (somewhere else in your code) added a StreamHandler to your root logger with sys.stdout or sys.stderr as the stream. You might want to print out logging.getLogger().handlers just to see it's what you'd expect to see.
A:
I guess logging stops when Apache forks the process. After that happens, because all file descriptors were closed during daemonization, the logging system tries to reopen the log file and, as far as I understand, uses a relative file path:
log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
log_name = os.path.join(log_dir, "nyrb.log")
But there is no “current directory” when the process has been daemonized. Try using an absolute log_dir path. Hope that helps.
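Putting the answers' advice together, namely using an absolute path and guarding against settings.py being imported twice, a safer setup helper might look like this (a sketch of mine; the 'my_app' logger name and the format string are placeholders, not part of the question's configuration):

```python
import logging
import logging.handlers

def setup_rotating_log(log_path):
    """Configure a named logger exactly once; log_path should be absolute."""
    logger = logging.getLogger('my_app')
    if logger.handlers:
        # settings.py is often imported twice; don't stack duplicate handlers
        return logger
    handler = logging.handlers.RotatingFileHandler(
        log_path, maxBytes=1024 * 1024, backupCount=7)
    handler.setFormatter(logging.Formatter(
        '[%(asctime)s] %(levelname)-7s: %(message)s (%(filename)s:%(lineno)d)'))
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)
    return logger
```

Calling it a second time returns the already-configured logger untouched, so repeated imports of settings.py no longer multiply handlers.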
| How do I configure the Python logging module in Django? | I'm trying to configure logging for a Django app using the Python logging module. I have placed the following bit of configuration code in my Django project's settings.py file:
import logging
import logging.handlers
import os
date_fmt = '%m/%d/%Y %H:%M:%S'
log_formatter = logging.Formatter(u'[%(asctime)s] %(levelname)-7s: %(message)s (%(filename)s:%(lineno)d)', datefmt=date_fmt)
log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
log_name = os.path.join(log_dir, "nyrb.log")
bytes = 1024 * 1024 # 1 MB
if not os.path.exists(log_dir):
os.makedirs(log_dir)
handler = logging.handlers.RotatingFileHandler(log_name, maxBytes=bytes, backupCount=7)
handler.setFormatter(log_formatter)
handler.setLevel(logging.DEBUG)
logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger().addHandler(handler)
logging.getLogger(__name__).info("Initialized logging subsystem")
At startup, I get a couple Django-related messages, as well as the "Initialized logging subsystem", in the log files, but then all the log messages end up going to the web server logs (/var/log/apache2/error.log, since I'm using Apache), and use the standard log format (not the formatter I designated). Am I configuring logging incorrectly?
| [
"Kind of anti-climactic, but it turns out there was a third-party app installed in the project that had its own logging configuration that was overriding the one I set up (it modified the root logger, for some reason -- not very kosher for a Django app!). Removed that code and everything works as expected.\n",
"I used this with success (although it does not rotate):\n# in settings.py\nimport logging\nlogging.basicConfig(\n level = logging.DEBUG,\n format = '%(asctime)s %(levelname)s %(funcName)s %(lineno)d \\\n \\033[35m%(message)s\\033[0m', \n datefmt = '[%d/%b/%Y %H:%M:%S]',\n filename = '/tmp/my_django_app.log',\n filemode = 'a'\n)\n\nI'd suggest to try an absolute path, too.\n",
"See this other answer. Note that settings.py is usually imported twice, so you should avoid creating multiple handlers. Better logging support is coming to Django in 1.3 (hopefully), but for now you should ensure that if your setup code is called more than once, there are no adverse effects.\nI'm not sure why your logged messages are going to the Apache logs, unless you've (somewhere else in your code) added a StreamHandler to your root logger with sys.stdout or sys.stderr as the stream. You might want to print out logging.getLogger().handlers just to see it's what you'd expect to see.\n",
"I guess logging stops when Apache forks the process. After that happened, because all file descriptors were closed during daemonization, logging system tries to reopen log file and as far as I understand uses relative file path:\nlog_dir = os.path.join(PROJECT_DIR, \"var\", \"log\", \"my_app\")\nlog_name = os.path.join(log_dir, \"nyrb.log\")\n\nBut there is no “current directory” when process has been daemonized. Try to use absolute log_dir path. Hope that helps.\n"
] | [
3,
2,
2,
0
] | [] | [] | [
"django",
"logging",
"python"
] | stackoverflow_0002961001_django_logging_python.txt |
Q:
autocomplete-like feature with a python dict
In PHP, I had this line matches = preg_grep('/^for/', array_keys($hash)); What it would do is it would grab the words: fork, form etc. that are in $hash.
In Python, I have a dict with 400,000 words. Its keys are words I'd like to present in an auto-complete-like feature (the values in this case are meaningless). How would I be able to return the keys from my dictionary that match the input?
For example (as used earlier), if I have
my_dict = {"fork" : True, "form" : True, "fold" : True, "fame" : True}
and I get some input "for", it'll return a list of "fork", "form".
A:
>>> mydict={"fork" : True, "form" : True, "fold" : True, "fame" : True}
>>> [k for k in mydict if k.startswith("for")]
['fork', 'form']
This should be faster than using a regular expression (and sufficient if you're just looking for word beginnings).
A:
So this isn't a direct answer to what you ask, but..
It seems like you don't really want a dict for this sort of thing, you're looking for a tree-like structure, right?
Then you can walk the tree for each letter that is typed (constant time), and return leaves from that subsection of the tree as the words that match that prefix.
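A rough illustration of that tree idea using nested dicts (a sketch, not a tuned implementation; each typed letter descends one node):

```python
# Minimal prefix-tree (trie) sketch: each node maps a letter to a child
# dict, and the special key '$' marks the end of a stored word.
def build_trie(words):
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = True
    return root

def complete(root, prefix):
    node = root
    for ch in prefix:          # walk one node per typed letter
        if ch not in node:
            return []
        node = node[ch]
    # collect every word stored below this node
    results, stack = [], [(node, prefix)]
    while stack:
        node, sofar = stack.pop()
        for key, child in node.items():
            if key == '$':
                results.append(sofar)
            else:
                stack.append((child, sofar + key))
    return results

trie = build_trie(["fork", "form", "fold", "fame"])
print(sorted(complete(trie, "for")))   # ['fork', 'form']
```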
A:
>>> my_dict = {"fork" : True, "form" : True, "fold" : True, "fame" : True}
>>> import re
>>> [s for s in my_dict if re.search('^for', s) is not None]
['fork', 'form']
Use of regex is more universal, as you could provide more complex search patterns; if it's only about prefixes, you could use string methods: str.startswith, for example:
>>> [s for s in my_dict if s.startswith('for')]
['fork', 'form']
A:
If you want a specific lookup strategy (such as the "startswith 3 chars" outlined above), you could probably get a quick win by creating a specific lookup dictionary based around that idea.
q = {"fork":1, "form":2, "fold":3, "fame":4}
from collections import defaultdict
q1 = defaultdict(dict)
for k,v in q.items():
q1[k[:3]][k]=v
This would let you do a .startswith type lookup over a much smaller set
def getChoices(frag):
d = q1.get(frag[:3])
if d is None:
return []
return [ k for k in d.keys() if k.startswith(frag) ]
Hopefully that should be a lot quicker than processing the whole 400,000 keys.
A:
You can get the keys from my_dict with my_dict.keys(). Then, you can search through each key to see if it matches your regular expression.
import re

m = re.compile('^for')
keys = []
for key in my_dict.keys():
    if m.match(key) is not None:
        keys.append(key)
| autocomplete-like feature with a python dict | In PHP, I had this line matches = preg_grep('/^for/', array_keys($hash)); What it would do is it would grab the words: fork, form etc. that are in $hash.
In Python, I have a dict with 400,000 words. Its keys are words I'd like to present in an auto-complete-like feature (the values in this case are meaningless). How would I be able to return the keys from my dictionary that match the input?
For example (as used earlier), if I have
my_dict = {"fork" : True, "form" : True, "fold" : True, "fame" : True}
and I get some input "for", It'll return a list of "fork", "form".
| [
">>> mydict={\"fork\" : True, \"form\" : True, \"fold\" : True, \"fame\" : True}\n>>> [k for k in mydict if k.startswith(\"for\")]\n['fork', 'form']\n\nThis should be faster than using a regular expression (and sufficient if you're just looking for word beginnings).\n",
"So this isn't a direct answer to what you ask, but..\nIt seems like you don't really want a dict for this sort of thing, you're looking for a tree-like structure, right?\nThen you can walk the tree for each letter that is typed (constant time), and return leaves from that subsection of the tree as the words that match that prefix.\n",
">>> my_dict = {\"fork\" : True, \"form\" : True, \"fold\" : True, \"fame\" : True}\n>>> import re\n>>> [s for s in my_dict if re.search('^for', s) is not None]\n['fork', 'form']\n\nUse of regex is more universal as you could provide more complex search patterns, if it's only about prefixes, you could use string methods: str.startswith, for example:\n>>> [s for s in my_dict if s.startswith('for')]\n['fork', 'form']\n\n",
"If you want a specific lookup strategy (such as the \"startswith 3 chars\" outlined above), you could probably get a quick win by creating a specific lookup dictionary based around that idea.\nq = {\"fork\":1, \"form\":2, \"fold\":3, \"fame\":4}\nfrom collections import defaultdict\nq1 = defaultdict(dict)\nfor k,v in q.items():\n q1[k[:3]][k]=v\n\nThis would let you do a .startswith type lookup over a much smaller set\ndef getChoices(frag):\n d = q1.get(frag[:3])\n if d is None:\n return []\n return [ k for k in d.keys() if k.startswith(frag) ]\n\nHopefully that should be a lot quicker than processing the whole 400,000 keys.\n",
"You can get the keys from my_dict with my_dict.keys(). Then, you can search through each key to see if it matches your regular expression.\nm = re.compile('^for')\nkeys = []\nfor key in my_dict.keys():\n if m.match(key) != None:\n keys.append(key)\n\n"
] | [
6,
3,
1,
1,
0
] | [] | [] | [
"autocomplete",
"python"
] | stackoverflow_0002967799_autocomplete_python.txt |
Q:
How to minimize one application using c# or python?
How can I minimize Microsoft Speech Recognition:
(source: microsoft.com)
using C# or python?
A:
For C#:
Using System.Diagnostics.Process you can select the process when it's running. From there you can get the MainWindow Handle at .MainWindowHandle and then call the windows API to minimize the application.
Unfortunately I do not know the specifics for that call, you'd have to google it.
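For completeness, the Win32 call in question is user32.ShowWindow. A hedged Python sketch via ctypes (the exact window title is a guess, and this is a no-op off Windows or when no matching window exists):

```python
import ctypes
import sys

SW_MINIMIZE = 6  # nCmdShow constant from the Win32 ShowWindow API

def minimize_window(title):
    """Minimize the top-level window with this exact title (Windows only)."""
    if not sys.platform.startswith("win"):
        return False  # ctypes.windll only exists on Windows
    user32 = ctypes.windll.user32
    hwnd = user32.FindWindowW(None, title)  # None matches any window class
    if not hwnd:
        return False
    user32.ShowWindow(hwnd, SW_MINIMIZE)
    return True

minimize_window("Windows Speech Recognition")  # title is a guess
```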
| How to minimize one application using c# or python? | How can I minimize Microsoft Speech Recognition:
(source: microsoft.com)
using C# or python?
| [
"For C#:\nUsing System.Diagnostics.Process you can select the process when it's running. From there you can get the MainWindow Handle at .MainWindowHandle and then call the windows API to minimize the application.\nUnfortunately I do not know the specifics for that call, you'd have to google it.\n"
] | [
1
] | [] | [] | [
"c#",
"minimize",
"process",
"python",
"window"
] | stackoverflow_0002967965_c#_minimize_process_python_window.txt |
Q:
Problems installing a package from PyPI: root files not installed
After installing the BitTorrent-bencode package, either via easy_install BitTorrent-bencode or pip install BitTorrent-bencode, or by downloading the tarball and installing that via easy_install $tarball, I discover that /usr/local/lib/python2.6/dist-packages/BitTorrent_bencode-5.0.8-py2.6.egg/ contains EGG-INFO/ and test/ directories. Although both of these subdirectories contain files, there are no files in the BitTorr* directory itself. The tarball does contain bencode.py, which is meant to be the actual source for this package, but it's not installed by either of those utils.
I'm pretty new to all of this so I'm not sure if this is a problem with the package or with what I'm doing. The package was packaged a while ago (2007), so perhaps it's using some deprecated configuration aspect that I need to supply a command-line flag for.
I'm more interested in learning what's wrong with either the package or my procedures than in getting this particular package installed; there is another package called hunnyb that seems to do a decent enough job of decoding bencoded data. Mostly I'd like to know how to deal with such problems in other packages. I'd also like to let the package maintainer know if the package needs updating.
edit
@Andrey Popp explains that the problem is likely with the setup.py file. I guess the only way I can really get an answer to my question is by actually R-ing TFM. However since I likely won't have time to do that thoroughly for a while yet, I've posted the setup.py file here.
A quick browse through the easy_install manual reveals that the function find_modules(), which this module's setup.py makes use of, searches for files named __init__.py within the package. The source code file in question is named bencode.py, so perhaps this is the problem: it should be named __init__.py?
edit 2
Having now learned Python packaging, I gather that the problem is that this module is using setuptools.find_packages, and has its source at the root of its directory structure, but hasn't passed anything in package_dir. It would seem to be fairly trivial to fix. However, the author is not reachable by his PyPI contact info. The module's PyPI page lists a "Package Index Owner" as well. I'm not sure what that's supposed to mean, but I did manage to get in touch with that person, who I think is maybe not in a position to maintain the module. In any case, it's still in the same state as when I posted this question back in June.
Given that the module seems to be more or less abandoned, and that there's a suitable replacement for it in hunnyb, I've accepted that @andreypopp's answer is about as good of one as I'm going to get.
A:
It seems this package's setup.py is broken — it does not define the right package for distribution. I think you need to check setup.py in the source release and, if it is indeed broken, report a bug to the author of this package.
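A guess at what the trivial fix might look like (this is a sketch, not the maintainer's actual file): since bencode.py sits at the root of the source tree, it could be declared as a plain top-level module instead of relying on find_packages():

```python
from distutils.core import setup

setup(name='BitTorrent-bencode',
      version='5.0.8',
      # bencode.py lives at the root of the source tree, so declare it
      # as a plain module rather than relying on find_packages().
      py_modules=['bencode'],
      )
```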
| Problems installing a package from PyPI: root files not installed | After installing the BitTorrent-bencode package, either via easy_install BitTorrent-bencode or pip install BitTorrent-bencode, or by downloading the tarball and installing that via easy_install $tarball, I discover that /usr/local/lib/python2.6/dist-packages/BitTorrent_bencode-5.0.8-py2.6.egg/ contains EGG-INFO/ and test/ directories. Although both of these subdirectories contain files, there are no files in the BitTorr* directory itself. The tarball does contain bencode.py, which is meant to be the actual source for this package, but it's not installed by either of those utils.
I'm pretty new to all of this so I'm not sure if this is a problem with the package or with what I'm doing. The package was packaged a while ago (2007), so perhaps it's using some deprecated configuration aspect that I need to supply a command-line flag for.
I'm more interested in learning what's wrong with either the package or my procedures than in getting this particular package installed; there is another package called hunnyb that seems to do a decent enough job of decoding bencoded data. Mostly I'd like to know how to deal with such problems in other packages. I'd also like to let the package maintainer know if the package needs updating.
edit
@Andrey Popp explains that the problem is likely with the setup.py file. I guess the only way I can really get an answer to my question is by actually R-ing TFM. However since I likely won't have time to do that thoroughly for a while yet, I've posted the setup.py file here.
A quick browse through the easy_install manual reveals that the function find_modules(), which this module's setup.py makes use of, searches for files named __init__.py within the package. The source code file in question is named bencode.py, so perhaps this is the problem: it should be named __init__.py?
edit 2
Having now learned Python packaging, I gather that the problem is that this module is using setuptools.find_packages, and has its source at the root of its directory structure, but hasn't passed anything in package_dir. It would seem to be fairly trivial to fix. However, the author is not reachable by his PyPI contact info. The module's PyPI page lists a "Package Index Owner" as well. I'm not sure what that's supposed to mean, but I did manage to get in touch with that person, who I think is maybe not in a position to maintain the module. In any case, it's still in the same state as when I posted this question back in June.
Given that the module seems to be more or less abandoned, and that there's a suitable replacement for it in hunnyb, I've accepted that @andreypopp's answer is about as good of one as I'm going to get.
| [
"It seems this package's setup.py is broken — it does not define right package for distribution. I think, you need to check setup.py in source release and if it is true — report a bug to author of this package.\n"
] | [
1
] | [] | [] | [
"easy_install",
"pip",
"pypi",
"python"
] | stackoverflow_0002963302_easy_install_pip_pypi_python.txt |
Q:
How to share an array in Python with a C++ Program?
I have two programs running, one in Python and one in C++, and I need to share a two-dimensional array (just decimal numbers) between them. I am currently looking into serialization, but pickle is Python-specific, unfortunately. What is the best way to do this?
Thanks
Edit: It is likely that the array will only have 50 elements or so, but the transfer of data will need to occur very frequently: 60x per second or more.
A:
I suggest Google's protobuf
A:
You could try using boost::python to make your applications interoperable.
Some information about pickle support and plain boost::python documentation.
A:
You could try hosting the array in a Memory-mapped file, although you will need to synchronize access to the file to avoid race conditions.
Alternatively you could establish a socket (or pipe) connection between both processes and pass values by exchanging messages.
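A single-process sketch of the memory-mapped idea (the path and array shape are made up; a real setup still needs the synchronization mentioned above, and the C++ process would map the same file):

```python
import mmap
import os
import struct
import tempfile

ROWS, COLS = 5, 10
SIZE = ROWS * COLS * 8  # one C double is 8 bytes

# Create a zero-filled backing file that both processes would open.
path = os.path.join(tempfile.mkdtemp(), 'shared.dat')
with open(path, 'wb') as f:
    f.write(b'\x00' * SIZE)

with open(path, 'r+b') as f:
    mm = mmap.mmap(f.fileno(), SIZE)
    offset = (2 * COLS + 4) * 8                          # row 2, column 4
    mm[offset:offset + 8] = struct.pack('<d', 3.14)      # writer side
    value, = struct.unpack('<d', mm[offset:offset + 8])  # reader side
    mm.close()
```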
A:
Your case is handled very well by PyUblas, a bridge between Numpy and Boost.Ublas using Boost.Python. This bridge supports copy-free transfer of vectors and matrices and is very easy to use.
A:
How large is this array? If it isn't very large, then JSON serialization is a good fit. There are libraries readily available for C++, and Python has JSON serialization in its standard library as of version 2.6. See http://www.json.org/ for more info.
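For a small 2-D array, the serialization really is that direct (values here are illustrative; any C++ JSON library parses the same text):

```python
import json

array = [[1.5, 2.0], [3.25, 4.0]]   # the 2-D array of decimals
text = json.dumps(array)            # send this over a pipe or socket
assert json.loads(text) == array    # round-trips losslessly
```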
A:
I would propose simply to use c arrays(via ctypes on the python side) and simply pull/push the raw data through an socket
A:
Serialization is one problem while IPC is another. Do you have the IPC portion figured out? (pipes, sockets, mmap, etc?)
On to serialization - if you're concerned about performance more than robustness (being able to plug more modules into this architecture) and security, then you should take a look at the struct module. This will let you pack data into C structures using format strings to define the structure (takes care of padding, alignment, and byte ordering for you!) In the C++ program, cast a pointer to the buffer to the corresponding structure type.
This works well with a tightly-coupled Python script and C++ program that is only run internally.
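Concretely, for the roughly 50 numbers in the question, a fixed little-endian layout could look like this (the format string is illustrative; the C++ side would cast the received 400-byte buffer to double[50]):

```python
import struct

FMT = '<50d'                    # 50 little-endian C doubles
values = [float(i) for i in range(50)]

payload = struct.pack(FMT, *values)    # 400-byte wire format
decoded = struct.unpack(FMT, payload)  # what the receiving side does
```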
| How to share an array in Python with a C++ Program? | I two programs running, one in Python and one in C++, and I need to share a two-dimensional array (just of decimal numbers) between them. I am currently looking into serialization, but pickle is python-specific, unfortunately. What is the best way to do this?
Thanks
Edit: It is likely that the array will only have 50 elements or so, but the transfer of data will need to occur very frequently: 60x per second or more.
| [
"I suggest Google's protobuf\n",
"You could try using boost::python to make your applications interoperable.\nSome information about pickle support and plain boost::python documentation.\n",
"You could try hosting the array in a Memory-mapped file, although you will need to synchronize access to the file to avoid race conditions.\nAlternatively you could establish a socket (or pipe) connection between both processes and pass values by exchanging messages.\n",
"Your case is handled very well by PyUblas, a bridge between Numpy and Boost.Ublas using Boost.Python. This bridge supports copy-free transfer of vectors and matrices and is very easy to use.\n",
"How large is this array? If it isn't very large, then JSON serialization is a good fit. There are libraries readily available for C++, and Python has JSON serialization in its standard library as of version 2.6. See http://www.json.org/ for more info.\n",
"I would propose simply to use c arrays(via ctypes on the python side) and simply pull/push the raw data through an socket\n",
"Serialization is one problem while IPC is another. Do you have the IPC portion figured out? (pipes, sockets, mmap, etc?)\nOn to serialization - if you're concerned about performance more than robustness (being able to plug more modules into this architecture) and security, then you should take a look at the struct module. This will let you pack data into C structures using format strings to define the structure (takes care of padding, alignment, and byte ordering for you!) In the C++ program, cast a pointer to the buffer to the corresponding structure type. \nThis works well with a tightly-coupled Python script and C++ program that is only run internally.\n"
] | [
5,
4,
4,
3,
2,
1,
1
] | [] | [] | [
"c++",
"python",
"serialization"
] | stackoverflow_0002968172_c++_python_serialization.txt |
Q:
extend php with java/c++?
I only know PHP, and I wonder if you can extend a PHP web application with C++ or Java when needed. I don't want to convert my code with Quercus, because that is very error-prone. Is there another way to extend it?
I ask because, from what I have read, Python can be extended with C++ without converting the Python code, and can use Java via Jython.
A:
Most of PHP is written in modular C code. You can create your own PHP extensions in C. See http://php.net/internals, the PHP wiki and the book "Extending and Embedding PHP" by Sara Golemon.
| extend php with java/c++? | i only know php and i wonder if you can extend a php web application with c++ or java when needed? i dont want to convert my code with quercus, cause that is very error prone. is there another way to extend it?
cause from what i have read python can extend it with c++ without converting the python code and use java with jython?
| [
"Most of PHP is written in modular C code. You can create your own PHP extensions in C. See http://php.net/internals, the PHP wiki and the book \"Extending and Embedding PHP\" by Sara Golemon.\n"
] | [
3
] | [] | [] | [
"c++",
"java",
"php",
"php_extension",
"python"
] | stackoverflow_0002968814_c++_java_php_php_extension_python.txt |
Q:
parsing xml file with similar tags and different attributes!
I am sorry if this is a repeated question or a basic one as I am new to Python. I am trying to parse the following XML commands so that I can "extract" the tag value for Daniel and George. I want the answer to look like Daniel = 78, George = 90.
<epas:property name="Tom">12</epas:property>
<epas:property name="Alice">34</epas:property>
<epas:property name="John">56</epas:property>
<epas:property name="Danial">78</epas:property>
<epas:property name="George">90</epas:property>
<epas:property name="Luise">11</epas:property>
The XML commands are stored in one string, i.e. myString, so here is the first part of the code that I tried to use to parse this string (myString):
element = xml.dom.minidom.parseString(myString).getElementByTagName ("epas:property")
if not element:
print "error message"
else:
for el in element:
value [el.getAttribute("name")] = el.firstChild.data
I tried to reference Daniel and George to the array index to get the value but looks I am not doing it correctly. I would appreciate your ideas/comments on this.
Cheers, Bill
A:
Don't use xml.dom.minidom, it's a terrible library! Use ElementTree or lxml (ElementTree is in the standard library and will probably work fine for you).
You should have an XML namespace, i.e., something like xmlns:epas="http://something". Also you can't have bare elements, they need to be enclosed. If you have "fake" namespaces (i.e., no declaration) you could punt and do:
myString = '<doc xmlns:epas="dummy">%s</doc>' % myString
With ElementTree it's something like this:
import xml.etree.ElementTree as ET
doc = ET.fromstring(myString)
result = {}
for el in doc.findall('{http://something}property'):
result[el.get('name')] = int(el.text)
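Putting both steps together as a runnable sketch, using the dummy namespace URI from the wrapper element (the fragment itself declares no namespace):

```python
import xml.etree.ElementTree as ET

myString = '''<epas:property name="Danial">78</epas:property>
<epas:property name="George">90</epas:property>'''

# Wrap the bare elements in a root that declares the epas prefix so
# the fragment parses; the prefix then maps to the URI "dummy".
doc = ET.fromstring('<doc xmlns:epas="dummy">%s</doc>' % myString)

result = {}
for el in doc.findall('{dummy}property'):
    result[el.get('name')] = int(el.text)
# result: {'Danial': 78, 'George': 90}
```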
| parsing xml file with similar tags and different attributes! | I am sorry if this is a repeated question or a basic one as I am new to Python. I am trying to parse the following XML commands so that I can "extract" the tag value for Daniel and George. I want the answer to look like Daniel = 78, George = 90.
<epas:property name="Tom">12</epas:property>
<epas:property name="Alice">34</epas:property>
<epas:property name="John">56</epas:property>
<epas:property name="Danial">78</epas:property>
<epas:property name="George">90</epas:property>
<epas:property name="Luise">11</epas:property>
The xml commands are stored in one string. i.e. myString so here is the first part of code that I tried to parse this string (myString):
element = xml.dom.minidom.parseString(myString).getElementByTagName ("epas:property")
if not element:
print "error message"
else:
for el in element:
value [el.getAttribute("name")] = el.firstChild.data
I tried to reference Daniel and George to the array index to get the value but looks I am not doing it correctly. I would appreciate your ideas/comments on this.
Cheers, Bill
| [
"Don't use xml.dom.minidom, it's a terrible library! Use ElementTree or lxml (ElementTree is in the standard library and will probably work fine for you).\nYou should have an XML namespace, i.e., something like xmlns:epas=\"http://something\". Also you can't have bare elements, they need to be enclosed. If you have \"fake\" namespaces (i.e., no declaration) you could punt and do:\nmyString = '<doc xmlns:epas=\"dummy\">%s</doc>' % myString\n\nWith ElementTree it's something like this:\nimport xml.etree.ElementTree as ET\ndoc = ET.fromstring(myString)\nresult = {}\nfor el in doc.findall('{http://something}property'):\n result[el.get('name')] = int(el.text)\n\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0002968824_python.txt |
Q:
2 techniques for including files in a Python distribution: which is better?
I'm working on packaging a small Python project as a zip or egg file so that it can be distributed. I've come across 2 ways to include the project's config files, both of which seem to produce identical results.
Method 1:
Include this code in setup.py:
from distutils.core import setup
setup(name='ProjectName',
version='1.0',
packages=['somePackage'],
data_files = [('config', ['config\propFiles1.ini',
'config\propFiles2.ini',
'config\propFiles3.ini'])]
)
Method 2:
Include this code in setup.py:
from distutils.core import setup
setup(name='ProjectName',
version='1.0',
packages=['somePackage']
)
Then, create a MANIFEST.in file with this line in it:
include config\*
Is there any difference between the methods? Which one is preferred? I tend to lean towards the first because then no MANIFEST.in file is necessary at all. However, in the first method you have to specify each file individually while in the second you can just include the whole folder. Is there anything else I should be taking into consideration? What's the standard practice?
A:
MANIFEST.in controls what files are put into the distribution zip file when you call python setup.py sdist. It does not control what is installed. data_files (or better package_data) controls what files are installed (and I think also makes sure files are included in the zip file). Use MANIFEST.in for files you won't install, like documentation, and package_data for files you use that aren't Python code (like an image or template).
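A sketch of the split, with illustrative file patterns: files the installed code reads at runtime go in package_data, keyed by package name, while MANIFEST.in only affects what lands in the sdist:

```python
from distutils.core import setup

setup(name='ProjectName',
      version='1.0',
      packages=['somePackage'],
      # Installed next to the code so it can be opened at runtime:
      package_data={'somePackage': ['config/*.ini']},
      )
```

A MANIFEST.in line such as include docs/* would then only ship documentation inside the source zip without installing it.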
| 2 techniques for including files in a Python distribution: which is better? | I'm working on packaging a small Python project as a zip or egg file so that it can be distributed. I've come across 2 ways to include the project's config files, both of which seem to produce identical results.
Method 1:
Include this code in setup.py:
from distutils.core import setup
setup(name='ProjectName',
version='1.0',
packages=['somePackage'],
data_files = [('config', ['config\propFiles1.ini',
'config\propFiles2.ini',
'config\propFiles3.ini'])]
)
Method 2:
Include this code in setup.py:
from distutils.core import setup
setup(name='ProjectName',
version='1.0',
packages=['somePackage']
)
Then, create a MANIFEST.in file with this line in it:
include config\*
Is there any difference between the methods? Which one is preferred? I tend to lean towards the first because then no MANIFEST.in file is necessary at all. However, in the first method you have to specify each file individually while in the second you can just include the whole folder. Is there anything else I should be taking into consideration? What's the standard practice?
| [
"MANIFEST.in controls what files are put into the distribution zip file when you call python setup.py sdist. It does not control what is installed. data_files (or better package_data) controls what files are installed (and I think also makes sure files are included in the zip file). Use MANIFEST.in for files you won't install, like documentation, and package_data for files you use that aren't Python code (like an image or template).\n"
] | [
29
] | [] | [] | [
"distribution",
"distutils",
"python"
] | stackoverflow_0002968701_distribution_distutils_python.txt |
Q:
Output MySQL query results in Django shell
I have the following Django Model which retrieves 3 records from a database. The class below represents a Model within a Django application I'm building. I realize that the parameters taken in by the create_hotspots function are not being used. I just simplified what the code looked like previously for the purposes of explaining my problem.
from django.db import models
from django.db import connection, transaction
import math
import MySQLdb
class Victims(models.Model):
def __init__(self):
self.results=[]
def create_hotspots(self,radius,latitude,longitude):
self.radius=str(radius)
self.latitude=str(latitude)
self.longitude=str(longitude)
db=MySQLdb.connect (host = "localhost", user = "root",passwd = "pass",db = "test")
cursor = db.cursor ()
cursor.execute("""SELECT * FROM poi_table""")
self.results=cursor.fetchall()
cursor.close ()
db.close ()
def __unicode__(self):
return self.results
Now, assume that I open the Django shell and I execute the following instructions:
>>> from PyLayar.layer.models import Victims
C:\Python26\lib\site-packages\MySQLdb\__init__.py:34: DeprecationWarning: the sets module is deprecated
from sets import ImmutableSet
>>> v=Victims()
>>> v.create_hotspots(1000, 42.3931955679, -72.5289916992)
>>> Victims.objects.all()
[]
My question is why does the Victims.objects.all() instruction return no results. The SQL query returns 3 results, and the results variable stores the resulting tuple.
A:
The Victims.create_hotspots method has no return statement. What did you expect it to return?
Also, Victims.create_hotspots does not do a save() to save the Victims instance.
BTW, the use of raw SQL inside a models object is often a really poor idea. You should consider making your "poi_table" a proper part of the Django model and using proper relational database design to avoid querying one class while trying to create an instance of another.
If you are trying to create "persistent aggregate" objects, you should consider doing something different.
v= Victim.objects.create( radius, lat, lon, Poi.objects.all())
v.save()
This will disentangle your two models allowing you to write simpler, less heavily-entangled models using simple Django processing.
| Output MySQL query results in Django shell | I have the following Django Model which retrieves 3 records from a database. The class below represents a Model within a Django application I'm building. I realize that the parameters taken in by the create_hotspots function are not being used. I just simplified what the code looked like previously for the purposes of explaining my problem.
from django.db import models
from django.db import connection, transaction
import math
import MySQLdb
class Victims(models.Model):
def __init__(self):
self.results=[]
def create_hotspots(self,radius,latitude,longitude):
self.radius=str(radius)
self.latitude=str(latitude)
self.longitude=str(longitude)
db=MySQLdb.connect (host = "localhost", user = "root",passwd = "pass",db = "test")
cursor = db.cursor ()
cursor.execute("""SELECT * FROM poi_table""")
self.results=cursor.fetchall()
cursor.close ()
db.close ()
def __unicode__(self):
return self.results
Now, assume that I open the Django shell and I execute the following instructions:
>>> from PyLayar.layer.models import Victims
C:\Python26\lib\site-packages\MySQLdb\__init__.py:34: DeprecationWarning: the sets module is deprecated
from sets import ImmutableSet
>>> v=Victims()
>>> v.create_hotspots(1000, 42.3931955679, -72.5289916992)
>>> Victims.objects.all()
[]
My question is why does the Victims.objects.all() instruction return no results. The SQL query returns 3 results, and the results variable stores the resulting tuple.
| [
"The Victims.create_hotspots method has no return statement. What did you expect it to return?\nAlso, Victims.create_hotspots does not do a save() to save the Victims instance.\nBTW, the use of raw SQL inside a models object is often a really poor idea. You should consider making your \"poi_table\" a proper part of the Django model and using proper relational database design to avoid querying one class while trying to create an instance of another.\nIf you are trying to create \"persistent aggregate\" objects, you should consider doing something different.\nv= Victim.objects.create( radius, lat, lon, Poi.objects.all())\nv.save()\n\nThis will disentangle your two models allowing you to write simpler, less heavily-entangled models using simple Django processing.\n"
] | [
3
] | [] | [] | [
"django",
"mysql",
"python"
] | stackoverflow_0002969086_django_mysql_python.txt |
Q:
How to do a back-reference on Google AppEngine?
I'm trying to access an object that is linked to by a db.ReferenceProperty in Google app engine. Here's the model's code:
class InquiryQuestion(db.Model):
    inquiry_ref = db.ReferenceProperty(reference_class=GiftInquiry, required=True, collection_name="inquiry_ref")
And I am trying to access it in the following way:
linkedObject = question.inquiry_ref
and then
linkedKey = linkedObject.key
but it's not working. Can anyone please help?
A:
Your naming conventions are a bit confusing. inquiry_ref is both your ReferenceProperty name and your back-reference collection name, so question.inquiry_ref gives you a GiftInquiry Key object, but question.inquiry_ref.inquiry_ref gives you a Query object filtered to InquiryQuestion entities.
Let's say we have the following domain model, with a one-to-many relationship between articles and comments.
class Article(db.Model):
    body = db.TextProperty()

class Comment(db.Model):
    article = db.ReferenceProperty(Article)
    body = db.TextProperty()
comment = Comment.all().get()
# The explicit reference from one comment to one article
# is represented by a Key object
article_key = comment.article
# which gets lazy-loaded to a Model instance by accessing a property
article_body = comment.article.body
# The implicit back-reference from one article to many comments
# is represented by a Query object
article_comments = comment.article.comment_set
# If the article only has one comment, this gives us a round trip
comment = comment.article.comment_set.all().get()
A:
The back reference is just a query. You need to use fetch() or get() to actually retrieve the entity or entities from the datastore:
linkedObject = question.inquiry_ref.get()
should do the trick. Or, you would use fetch() if you were expecting the back ref to refer to more than one entity.
Actually, the way that your class is constructed makes it ambiguous as to what exactly is happening here.
If you have a GiftInquiry entity, it will get an automatic property called inquiry_ref that will be a query (as I described above) that will return all InquiryQuestion entities that have their inquiry_ref property set to that GiftInquiry's Key.
On the other hand, if you have an InquiryQuestion entity, and you want to get the GiftInquiry entity to which its inquiry_ref property is set, you would do this:
linkedObject = db.get(question.inquiry_ref)
as the inquiry_ref is just the Key of the referred-to GiftInquiry, but that is not technically a BackReference.
Check out the explanation of ReferenceProperty and back references from the docs.
| How to do a back-reference on Google AppEngine? | I'm trying to access an object that is linked to by a db.ReferenceProperty in Google app engine. Here's the model's code:
class InquiryQuestion(db.Model):
inquiry_ref = db.ReferenceProperty(reference_class=GiftInquiry, required=True, collection_name="inquiry_ref")
And I am trying to access it in the following way:
linkedObject = question.inquiry_ref
and then
linkedKey = linkedObject.key
but it's not working. Can anyone please help?
| [
"Your naming conventions are a bit confusing. inquiry_ref is both your ReferenceProperty name and your back-reference collection name, so question.inquiry_ref gives you a GiftInquiry Key object, but question.inquiry_ref.inquiry_ref gives you a Query object filtered to InquiryQuestion entities.\nLet's say we have the following domain model, with a one-to-many relationship between articles and comments.\nclass Article(db.Model):\n body = db.TextProperty()\n\nclass Comment(db.Model):\n article = db.ReferenceProperty(Article)\n body = db.TextProperty()\n\ncomment = Comment.all().get()\n\n# The explicit reference from one comment to one article\n# is represented by a Key object\narticle_key = comment.article\n\n# which gets lazy-loaded to a Model instance by accessing a property\narticle_body = comment.article.body\n\n# The implicit back-reference from one article to many comments\n# is represented by a Query object\narticle_comments = comment.article.comment_set\n\n# If the article only has one comment, this gives us a round trip\ncomment = comment.article.comment_set.all().get()\n\n",
"The back reference is just a query. You need to use fetch() or get() to actually retrieve the entity or entities from the datastore:\nlinkedObject = question.inquiry_ref.get()\n\nshould do the trick. Or, you would use fetch() if you were expecting the back ref to refer to more than one entity.\nActually, the way that your class is constructed makes it ambiguous as to what exactly is happening here.\nIf you have a GiftInquiry entity, it will get an automatic property called inquiry_ref that will be a query (as I described above) that will return all InquiryQuestion entities that have their inquiry_ref property set to that GiftInquiry's Key.\nOn the other hand, if you have an InquiryQuestion entity, and you want to get the GiftInquiry entity to which its inquiry_ref property is set, you would do this:\nlinkedObject = db.get(question.inquiry_ref)\n\nas the inquiry_ref is just the Key of the referred-to GiftInquiry, but that is not technically a BackReference.\nCheck out the explanation of ReferenceProperty and back references from the docs.\n"
] | [
5,
3
] | [] | [] | [
"google_app_engine",
"python",
"web_applications"
] | stackoverflow_0002968231_google_app_engine_python_web_applications.txt |
Q:
Python - a clean approach to this problem?
I am having trouble picking the best data structure for solving a problem.
The problem is as below:
I have a nested list of identity codes where the sublists are of varying length.
li = [['abc', 'ghi', 'lmn'], ['kop'], ['hgi', 'ghy']]
I have a file with two entries on each line; an identity code and a number.
abc 2.93
ghi 3.87
lmn 5.96
Each sublist represents a cluster. I wish to select the i.d. from each sublist with the highest number associated with it, append that i.d. to a new list and ultimately write it to a new file.
What data structure should the file with numbers be read in as?
Also, how would you iterate over said data structure to return the i.d. with the highest number that matches the i.d. within a sublist?
Thanks,
S :-)
A:
You can read the file into a dictionary (string=>int), then use a list comprehension to get the highest identity code from each sublist.
d = {}
with open("data", 'rb') as data:
    for line in data:
        key, val = line.split(' ')
        d[key] = float(val)
ids = [max(sublist, key=lambda k: d[k]) for sublist in li]
For Python 2.4, use:
ids = []
for sublist in li:
    subnums = map(lambda x: d[x], sublist)
    ids.append(sublist[subnums.index(max(subnums))])
As noted, this is O(n).
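To make the whole flow concrete, here is a self-contained sketch of the dictionary-plus-max approach; note that the numbers for 'kop', 'hgi' and 'ghy' are invented, since the sample file in the question only lists three ids:

```python
# Hypothetical id -> number mapping; the values for 'kop', 'hgi'
# and 'ghy' are made up to complete the example.
d = {'abc': 2.93, 'ghi': 3.87, 'lmn': 5.96,
     'kop': 4.35, 'hgi': 0.4, 'ghy': 1.2}
li = [['abc', 'ghi', 'lmn'], ['kop'], ['hgi', 'ghy']]

# max() with key= compares the ids by their associated number,
# but returns the id itself rather than the number.
ids = [max(sublist, key=d.get) for sublist in li]
print(ids)  # ['lmn', 'kop', 'ghy']
```

Using `d.get` as the key function is equivalent to `lambda k: d[k]` here, just slightly shorter.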
A:
My solution assumes that you only want the highest number and not the id that is associated with it.
I'd read the identity codes and the numbers in a dictionary as suggested by Matthew
NEW_LIST = []
ID2NUM = {}
with file('codes') as codes:
    for line in codes:
        id, num = line.rstrip().split()
        ID2NUM[id] = float(num)
I added some numbers so every id has a value. My ID2NUM looks like this:
{'abc': 2.9300000000000002,
'ghi': 3.8700000000000001,
'ghy': 1.2,
'hgi': 0.40000000000000002,
'kop': 4.3499999999999996,
'lmn': 5.96}
Then process the list li:
for l in li:
    NEW_LIST.append(max([ID2NUM[x] for x in l]))
>>> NEW_LIST
[5.96, 4.3499999999999996, 1.2]
To write the new list to a file, one number per line:
with file('new_list', 'w') as new_list:
    new_list.write('\n'.join(map(str, NEW_LIST)))
A:
How about storing each sublist as a binary search tree? You'd get O(log n) search performance on average.
Another option would be to use max-heaps and you'd get O(1) to get the maximum value.
| Python - a clean approach to this problem? | I am having trouble picking the best data structure for solving a problem.
The problem is as below:
I have a nested list of identity codes where the sublists are of varying length.
li = [['abc', 'ghi', 'lmn'], ['kop'], ['hgi', 'ghy']]
I have a file with two entries on each line; an identity code and a number.
abc 2.93
ghi 3.87
lmn 5.96
Each sublist represents a cluster. I wish to select the i.d. from each sublist with the highest number associated with it, append that i.d. to a new list and ultimately write it to a new file.
What data structure should the file with numbers be read in as?
Also, how would you iterate over said data structure to return the i.d. with the highest number that matches the i.d. within a sublist?
Thanks,
S :-)
| [
"You can read the file into a dictionary (string=>int), then use a list comprehension to get the highest identity code from each sublist.\nd = {}\nwith open(\"data\", 'rb') as data:\n for line in data:\n key, val = line.split(' ')\n d[key] = float(val)\n\nids = [max(sublist, key=lambda k: d[k]) for sublist in li]\n\nFor Python 2.4, use:\nids = []\nfor sublist in li:\n subnums = map(lambda x: d[x], sublist)\n ids.append(sublist[subnums.index(max(subnums))])\n\nAs noted, this is O(n).\n",
"My solution assumes that you only want the highest number and not the id that is associated with it.\nI'd read the identity codes and the numbers in a dictionary as suggested by Matthew\nNEW_LIST = []\nID2NUM = {}\nwith file('codes') as codes:\n for line in codes:\n id, num = line.rstrip().split()\n ID2NUM[id] = num\n\nI added some numbers so every id has a value. My ID2NUM looks like this:\n{'abc': 2.9300000000000002,\n 'ghi': 3.8700000000000001,\n 'ghy': 1.2,\n 'hgi': 0.40000000000000002,\n 'kop': 4.3499999999999996,\n 'lmn': 5.96}\n\nThen process the list li:\nfor l in li:\n NEW_LIST.append(max([d[x] for x in l]))\n\n>>> NEW_LIST\n[5.96, 4.3499999999999996, 1.2]\n\nTo write the new list to a file, one number per line:\nwith file('new_list', 'w') as new_list:\n new_list.write('\\n'.join(NEW_LIST))\n\n",
"How about storing each sublist as a binary search tree? You'd get O(log n) search performance on average.\nAnother option would be to use max-heaps and you'd get O(1) to get the maximum value.\n"
] | [
4,
2,
0
] | [] | [] | [
"data_structures",
"file",
"python"
] | stackoverflow_0002958799_data_structures_file_python.txt |
Q:
What c# equivalent encoding does Python's hash.digest() use?
I am trying to port a python program to c#. Here is the line that's supposed to be a walkthrough but is currently tormenting me:
hash = hashlib.md5(inputstring).digest()
After generating a similar MD5 hash in c# It is absolutely vital that I create a similar hash string as the original python program or my whole application will fail.
My confusion lies in which encoding to use when converting to string in c# i.e
?Encoding enc = new ?Encoding();
string Hash =enc.GetString(HashBytes); //HashBytes is my generated hash
Because I am unable to create two similar hashes when using Encoding.Default i.e
string Hash = Encoding.Default.GetString(HashBytes);
So I'm thinking knowing the default hash.digest() encoding for python would help
EDIT
Ok maybe some more code will articulate my problem more. After the hash is calculated in the python program some calculations are carried out i.e
hash = hashlib.md5(inputstring).digest()
for i in range(0,6):
    value += ord(hash[i])
return value
Now can you see why two different Hash strings will be problematic? Some of the characters that appear when the python program is run are replaced by a '?' in C#.
A:
I presume you're using an earlier version of Python than 3, and your string is a normal str.
If you're talking about the output, the digest method returns a string consisting on raw bytes . The equivalent type in C# is byte[], which you already seem to have. It's not text, so using the Encoding class makes no sense.
If you're talking about the input, the md5 function takes in a normal str, which is a string of bytes. You'll have to look at the code before that to figure out what encoding the data is in.
Edit:
Regarding the code you posted, all it's doing is it's taking the values of the six first bytes in the hash and adding them together. You should be able to figure out how to do that in C#.
And make sure you learn the difference between a string of bytes and a string of characters.
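To make the byte-summing concrete, here is a small Python 3 sketch (the input string is just a placeholder, not from the question). In Python 3, indexing a bytes object yields ints directly, so the ord() call from the Python 2 loop disappears; the C# port would sum the first six elements of its byte[] the same way:

```python
import hashlib

inputstring = b"some example input"          # placeholder input
digest = hashlib.md5(inputstring).digest()   # 16 raw bytes, not text

# Equivalent of the Python 2 loop: value += ord(hash[i]) for i in 0..5
value = sum(digest[:6])
```

The point is that value is derived from raw byte values, so no text encoding is involved anywhere in this step.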
A:
It is not encoded at all, it is just an array of bytes in both languages.
A:
According to the documentation, Python strings are ASCII by default. Alternate encodings must be explicitly specified. Therefore, you should be able to pass an ASCII string to the C# MD5 library and get the same hash bytes as if you passed the string to the Python MD5 library.
| What c# equivalent encoding does Python's hash.digest() use? | I am trying to port a python program to c#. Here is the line that's supposed to be a walkthrough but is currently tormenting me:
hash = hashlib.md5(inputstring).digest()
After generating a similar MD5 hash in c# It is absolutely vital that I create a similar hash string as the original python program or my whole application will fail.
My confusion lies in which encoding to use when converting to string in c# i.e
?Encoding enc = new ?Encoding();
string Hash =enc.GetString(HashBytes); //HashBytes is my generated hash
Because I am unable to create two similar hashes when using Encoding.Default i.e
string Hash = Encoding.Default.GetString(HashBytes);
So I'm thinking knowing the default hash.digest() encoding for python would help
EDIT
Ok maybe some more code will articulate my problem more. After the hash is calculated in the python program some calculations are carried out i.e
hash = hashlib.md5(inputstring).digest()
for i in range(0,6):
value += ord(hash[i])
return value
Now can you see why two different Hash strings will be problematic? Some of the characters that appear when the python program is run are replaced by a '?' in C#.
| [
"I presume you're using an earlier version of Python than 3, and your string is a normal str.\nIf you're talking about the output, the digest method returns a string consisting on raw bytes . The equivalent type in C# is byte[], which you already seem to have. It's not text, so using the Encoding class makes no sense.\nIf you're talking about the input, the md5 function takes in a normal str, which is a string of bytes. You'll have to look at the code before that to figure out what encoding the data is in.\nEdit:\nRegarding the code you posted, all it's doing is it's taking the values of the six first bytes in the hash and adding them together. You should be able to figure out how to do that in C#.\nAnd make sure you learn the difference between a string of bytes and a string of characters.\n",
"It is not encoded at all, it is just an array of bytes in both languages. \n",
"According to the documentation, Python strings are ASCII by default. Alternate encodings must be explicitly specified. Therefore, you should be able to pass an ASCII string to the C# MD5 library and get the same hash bytes as if you passed the string to the Python MD5 library.\n"
] | [
5,
2,
0
] | [] | [] | [
"c#",
"digest",
"encoding",
"python"
] | stackoverflow_0002969492_c#_digest_encoding_python.txt |
Q:
EOF error using recv in python
I am doing this in my code,
HOST = '192.168.1.3'
PORT = 50007
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
query_details = {"page" : page, "query" : query, "type" : type}
s.send(str(query_details))
#data = eval(pickle.loads(s.recv(4096)))
data = s.recv(16384)
But I am continually getting EOF at the last line. The code I am sending with,
self.request.send(pickle.dumps(results))
A:
s.send is not guaranteed to send every byte you give it; use s.sendall instead.
Similarly, s.recv is not guaranteed to receive every byte you ask -- in that case you need to know by other ways exactly how many bytes you need to receive (e.g., send first the length of the string you're sending, encoded with the struct module) and you're responsible for doing the looping yourself to that purpose. There isn't and cannot be any recvall because stream sockets are not "self-delimiting" in any way -- they're just streams, broken up into totally arbitrary packets of sizes not semantically relevant.
You shouldn't ever get an EOF from the recv itself, though of course you can expect to get it in the line you've commented out, from the pickle.loads (because its argument may well be only a part of the bytes that the counterpart sent: as explained in the previous paragraph).
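One common way to do that looping (a sketch, with invented helper names, not part of the original answer) is a small length-prefixed framing layer built on sendall, struct, and a recv loop:

```python
import struct

def send_msg(sock, payload):
    # Prefix the payload with its length as a 4-byte big-endian int;
    # sendall() then guarantees every byte is written.
    sock.sendall(struct.pack('>I', len(payload)) + payload)

def recv_exact(sock, n):
    # recv(n) may return fewer than n bytes, so loop until done.
    chunks = []
    while n > 0:
        chunk = sock.recv(n)
        if not chunk:
            raise EOFError('socket closed before full message arrived')
        chunks.append(chunk)
        n -= len(chunk)
    return b''.join(chunks)

def recv_msg(sock):
    # Read the 4-byte length header, then exactly that many bytes.
    (length,) = struct.unpack('>I', recv_exact(sock, 4))
    return recv_exact(sock, length)
```

Only once recv_msg has returned a complete byte string is it safe to hand it to pickle.loads.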
| EOF error using recv in python | I am doing this in my code,
HOST = '192.168.1.3'
PORT = 50007
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
query_details = {"page" : page, "query" : query, "type" : type}
s.send(str(query_details))
#data = eval(pickle.loads(s.recv(4096)))
data = s.recv(16384)
But I am continually getting EOF at the last line. The code I am sending with,
self.request.send(pickle.dumps(results))
| [
"s.send is not guaranteed to send every byte you give it; use s.sendall instead.\nSimilarly, s.recv is not guaranteed to receive every byte you ask -- in that case you need to know by other ways exactly how many bytes you need to receive (e.g., send first the length of the string you're sending, encoded with the struct module) and you're responsible for doing the looping yourself to that purpose. There isn't and cannot be any recvall because stream sockets are not \"self-delimiting\" in any way -- they're just streams, broken up into totally arbitrary packets of sizes not semantically relevant.\nYou shouldn't ever get an EOF from the recv itself, though of course you can expect to get it in the line you've commented out, from the pickle.loads (because its argument may well be only a part of the bytes that the counterpart sent: as explained in the previous paragraph).\n"
] | [
8
] | [] | [] | [
"python",
"sockets"
] | stackoverflow_0002969509_python_sockets.txt |
Q:
A means to access my db in python - what is my problem afterall?
I have a remote database (at the moment sqlite, but eventually mysql) that I want to be able to call from a webpage dynamically. Basically to query for data that will populate goog viz charts etc on the page (and possibly images).
I have a small, slow server that i can basically run anything on. I've also located the python lib that lets me painlessly generate the data in a format that is friendly for goog viz (http://code.google.com/apis/visualization/documentation/dev/gviz_api_lib.html)
but, what i cant understand is how do i go from there to the point where i have a url, to which i can pass arguments and get the data back? googling suggests django (or pylons), but isnt that a bit OTT (i dont have experiance with either). I'm opening to learning how to use them, but atm im very confused as to what exactly i need to be able to run this webservice.
help greatly appreciated.
thanks!
A:
The first thing you'll need to do is deploy a web server (apache is a common choice).
Once your server is running, you can test it by pushing simple HTML files to it and make sure they are accessible to you from a browser.
Once your server is properly configured, you have a number of options available for interfacing your web-server with Python:
The easiest is probably to follow the deployment guide of a popular web framework, as you mentioned.
Pylons Deployment Guide
Django Installation Guide
You can use mod_python directly, which requires adding boilerplate code around your code to work inside a mod_python instance.
mod_python documentation
You can use CGI, which does not scale very well but makes it fairly easy to interact with the web-server:
Python's built-in CGI module
| A means to access my db in python - what is my problem afterall? | I have a remote database (at the moment sqlite, but eventually mysql) that I want to be able to call from a webpage dynamically. Basically to query for data that will populate goog viz charts etc on the page (and possibly images).
I have a small, slow server that i can basically run anything on. I've also located the python lib that lets me painlessly generate the data in a format that is friendly for goog viz (http://code.google.com/apis/visualization/documentation/dev/gviz_api_lib.html)
but, what i cant understand is how do i go from there to the point where i have a url, to which i can pass arguments and get the data back? googling suggests django (or pylons), but isnt that a bit OTT (i dont have experiance with either). I'm opening to learning how to use them, but atm im very confused as to what exactly i need to be able to run this webservice.
help greatly appreciated.
thanks!
| [
"The first thing you'll need to do is deploy a web server (apache is a common choice).\nOnce your server is running, you can test it by pushing simple HTML files to it and make sure they are accessible to you from a browser.\nOnce your server is properly configured, you have a number of options available for interfacing your web-server with Python:\n\nThe easiest is probably to follow the deployment guide of a popular web framework, as you mentioned.\n\n\nPylons Deployment Guide\nDjango Installation Guide\n\nYou can use mod_python directly, which requires adding boilerplate code around your code to work inside a mod_python instance.\n\n\nmod_python documentation\n\nYou can use CGI, which does not scale very well but makes it fairly easy to interact with the web-server:\n\n\nPython's built-in CGI module\n\n\n"
] | [
2
] | [] | [] | [
"django",
"pylons",
"python",
"web_services"
] | stackoverflow_0002969640_django_pylons_python_web_services.txt |
Q:
How to use HTTP method DELETE on Google App Engine?
I can use this verb in the Python Windows SDK. But not in production. Why? What am I doing wrong?
The error message includes (only seen via firebug or fiddler)
Malformed request
or something like that
My code looks like:
from google.appengine.ext import db
from google.appengine.ext import webapp
class Handler(webapp.RequestHandler):
    def delete(self):
        key = self.request.get('key')
        item = db.get(key)
        item.delete()
        self.response.out.write(key)
A:
Your handler looks OK, are you sure you're sending the request correctly? Using jQuery, this works for me (both using dev_appserver and google app engine production):
$('#delete-button').click(function() {
    $.ajax({
        'type': 'DELETE',
        'url': '/some/url/that/handles/delete'
    })
});

class DeleteHandler(webapp.RequestHandler):

    def delete(self):
        if users.get_current_user() == allowed_user:
            the_data_model.delete()
        else:
            self.response.out.write('Permission denied')
Sending a response body/message did not work for me (e.g. the "permission denied" message in my example won't get to the client). Have you verified your items aren't deleted?
| How to use HTTP method DELETE on Google App Engine? | I can use this verb in the Python Windows SDK. But not in production. Why? What am I doing wrong?
The error message includes (only seen via firebug or fiddler)
Malformed request
or something like that
My code looks like:
from google.appengine.ext import db
from google.appengine.ext import webapp
class Handler(webapp.RequestHandler):
def delete(self):
key = self.request.get('key')
item = db.get(key)
item.delete()
self.response.out.write(key)
| [
"Your handler looks OK, are you sure you're sending the request correctly? Using jQuery, this works for me (both using dev_appserver and google app engine production):\n$('#delete-button').click(function() {\n $.ajax({\n 'type': 'DELETE',\n 'url': '/some/url/that/handles/delete'\n })\n});\n\nclass DeleteHandler(webapp.RequestHandler):\n\n def delete(self):\n if users.get_current_user() == allowed_user:\n the_data_model.delete()\n else:\n self.response.out.write('Permission denied')\n\nSending a response body/message did not work for me (e.g. the \"permission denied\" message in my example won't get to the client). Have you verified your items aren't deleted?\n"
] | [
3
] | [] | [] | [
"google_app_engine",
"http",
"http_delete",
"python"
] | stackoverflow_0002398012_google_app_engine_http_http_delete_python.txt |
Q:
How do i send a file with sockets in python?
I am already familiar with python and socket usage and can send strings of text over these. But how would i go about sending, say, an MP3 file?
A:
The following code would do what you literally ask (assuming thesocket is a connected stream socket):
with open('thefile.mp3', 'rb') as f:
    thesocket.sendall(f.read())
but of course it's unlikely to be much use without some higher-level protocol to help the counterpart know how much data it's going to receive, what type of data, and so forth.
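As a further sketch (send_file is an invented helper name, not a standard API), a large file is better streamed in fixed-size chunks than read into memory all at once:

```python
def send_file(sock, path, chunk_size=64 * 1024):
    # Stream the file in fixed-size chunks; sendall() ensures each
    # chunk is fully transmitted before the next read happens.
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            sock.sendall(chunk)
```

The receiver still needs to know when the file ends, e.g. by the sender closing (or shutting down) its side of the connection, or by a length prefix as discussed for any stream protocol.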
| How do i send a file with sockets in python? | I am already familiar with python and socket usage and can send strings of text over these. But how would i go about sending, say, an MP3 file?
| [
"The following code would do what you literally ask (assuming thesocket is a connected stream socket):\nwith open('thefile.mp3', 'rb') as f:\n thesocket.sendall(f.read())\n\nbut of course it's unlikely to be much use without some higher-level protocol to help the counterpart know how much data it's going to receive, what type of data, and so forth.\n"
] | [
1
] | [] | [] | [
"mp3",
"networking",
"python",
"send",
"sockets"
] | stackoverflow_0002970019_mp3_networking_python_send_sockets.txt |
Q:
php equivalent to jython?
i wonder if there is a php equivalent to jython so you can use java classes with php?
thanks
A:
http://www.caucho.com/resin-3.0/quercus/
http://php-java-bridge.sourceforge.net/pjb/
A:
I'm not quite sure what you are asking, since you are talking about two completely different things: a PHP equivalent to Jython, and accessing Java classes from PHP. So, I'm going to answer both.
Jython is a Python implementation for the JVM. So, the PHP equivalent would be a PHP implementation for the JVM. There are actually two that I know of: IBM's P8, which is part of Project Zero and Quercus.
However, you don't need to run your PHP on Java if you want to run it with Java. A PHP-to-Java bridge would be enough, you don't need a PHP-on-Java implementation. I know that at some point in the past at least one such bridge must have existed, because someone once told me that they used one, but that is about all I know.
A:
I just googled php jvm and got a bunch of hits. Never tried any of them.
A:
Well: Java Server Pages (JSP) are "equivalent" to PHP, but using java classes.
It's "equivalent" in that it's HTML with embedded java code, but not at all compatible to PHP syntax.
A:
Fayer,
Try PHP/Java Bridge that integrates PHP and Java, as recommended in PHP manual (Java Class - dead- URL: www.php.net/manual/en/java.installation.php).
Please, let me know how it worked for you.
You may have to use Zend Server CE (www.zend.com/en/products/server-ce/), instead of Apache.
Best.
| php equivalent to jython? | i wonder if there is a php equivalent to jython so you can use java classes with php?
thanks
| [
"http://www.caucho.com/resin-3.0/quercus/\nhttp://php-java-bridge.sourceforge.net/pjb/\n",
"I'm not quite sure what you are asking, since you are talking about two completely different things: a PHP equivalent to Jython, and accessing Java classes from PHP. So, I'm going to answer both.\nJython is a Python implementation for the JVM. So, the PHP equivalent would be a PHP implementation for the JVM. There are actually two that I know of: IBM's P8, which is part of Project Zero and Quercus.\nHowever, you don't need to run your PHP on Java if you want to run it with Java. A PHP-to-Java bridge would be enough, you don't need a PHP-on-Java implementation. I know that at some point in the past at least one such bridge must have existed, because someone once told me that they used one, but that is about all I know.\n",
"I just googled php jvm and got a bunch of hits. Never tried any of them.\n",
"Well: Java Server Pages (JSP) are \"equivalent\" to PHP, but using java classes. \nIt's \"equivalent\" in that it's HTML with embedded java code, but not at all compatible to PHP syntax. \n",
"Fayer,\nTry PHP/Java Bridge that integrates PHP and Java, as recommended in PHP manual (Java Class - dead- URL: www.php.net/manual/en/java.installation.php).\nPlease, let me know how it worked for you.\nYou may have to use Zend Server CE (www.zend.com/en/products/server-ce/), instead of Apache.\nBest.\n"
] | [
7,
5,
1,
1,
1
] | [] | [] | [
"java",
"jython",
"php",
"python"
] | stackoverflow_0002968381_java_jython_php_python.txt |
Q:
CherryPy configuration for CSS file access
The following is the result of CherryPy and css pathing problems I have recently posted, both of which have been answered, but another problem has arisen.
I have a html page which I preview in a browser (via. editor/IDE) and which calls a css file from a css folder in parallel with my application folder (containing main.py and My.html file). For this I use relative pathing in the html header...
<link rel="stylesheet" href="..\css\commoncss.css" type="text/css">
All good so far. However, when I run Main.py, the css file cannot be found, and the page looks a mess :-( The CP configuration file includes the following line...
tools.staticdir.root = "my\app\folder" # contains Main.py and My.html
but no other staticdir declarations because CP should be looking for ..\css\commoncss.css relative to the static root folder (am I right?)
I could have my CSS folder as a top-level folder (then I could use href="/css/commoncss" and declare /css as a staticdir), but that's ugly. Alternatively the CSS folder could be a subfolder of the app folder, but I really need the freedom to be able to put the .css file(s) in a different path if possible (could be common to more than one app.).
I really would like to crack this problem, because otherwise it means the html designer cannot use the same template as the Python programmer without changing the href directive.
Any help would be appreciated.
Alan
A:
but no other staticdir declarations
because CP should be looking for
..\css\commoncss.css relative to the
static root folder (am I right?)
You can't reach into your physical file directory (static dir) via URLs, nor should you want to.
Cherrypy is looking for the css file relative to your HTML file in the URL hierarchy. If your HTML file is at root, then this won't work. If it's at, say: /stuff/blarg.html, then it would go down to the root and look for the css folder.
I think it's easier to just give an absolute path, because it's reasonable to stipulate that the css directory be in a known location: "/css/commoncss.css"
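One way to set that up (a sketch; the filesystem path is a placeholder) is to map the /css URL to an absolute directory in the CherryPy config, so the physical css folder can live anywhere on disk, shared between apps, while every page references it as /css/commoncss.css:

```python
# Hypothetical CherryPy app config: serve the /css URL from a shared
# folder that sits outside any single application's directory.
config = {
    '/css': {
        'tools.staticdir.on': True,
        # An absolute path here means tools.staticdir.root is not needed.
        'tools.staticdir.dir': '/path/to/shared/css',
    },
}
# Passed in at startup, e.g.: cherrypy.quickstart(Root(), '/', config)
```

With that mapping, the HTML designer and the Python programmer can both use the same absolute href.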
| CherryPy configuration for CSS file access | The following is the result of CherryPy and css pathing problems I have recently posted, both of which have been answered, but another problem has arisen.
I have a html page which I preview in a browser (via. editor/IDE) and which calls a css file from a css folder in parallel with my application folder (containing main.py and My.html file). For this I use relative pathing in the html header...
<link rel="stylesheet" href="..\css\commoncss.css" type="text/css">
All good so far. However, when I run Main.py, the css file cannot be found, and the page looks a mess :-( The CP configuration file includes the following line...
tools.staticdir.root = "my\app\folder" # contains Main.py and My.html
but no other staticdir declarations because CP should be looking for ..\css\commoncss.css relative to the static root folder (am I right?)
I could have my CSS folder as a top-level folder (then I could use href="/css/commoncss" and declare /css as a staticdir), but that's ugly. Alternatively the CSS folder could be a subfolder of the app folder, but I really need the freedom to be able to put the .css file(s) in a different path if possible (could be common to more than one app.).
I really would like to crack this problem, because otherwise it means the html designer cannot use the same template as the Python programmer without changing the href directive.
Any help would be appreciated.
Alan
| [
"\nbut no other staticdir declarations\n because CP should be looking for\n ..\\css\\commoncss.css relative to the\n static root folder (am I right?)\n\nYou can't reach into your physical file directory (static dir) via URLs, nor should you want to.\nCherrypy is looking for the css file relative to your HTML file in the URL hierarchy. If your HTML file is at root, then this won't work. If it's at, say: /stuff/blarg.html, then it would go down to the root and look for the css folder.\nI think it's easier to just give an absolute path, because it's reasonable to stipulate that the css directory be in a known location: \"/css/commoncss.css\"\n"
] | [
1
] | [] | [] | [
"cherrypy",
"configuration",
"css",
"python"
] | stackoverflow_0002970767_cherrypy_configuration_css_python.txt |
Q:
Change embedded image type in APIC ID3 tag via Mutagen
I have a large music library which I have just spent around 30 hours organizing. For some of the MP3 files, I embedded the cover art image as type 0 (Other) and I'd like to change it to type 3 (Front Cover). Is there a way to do this in Python, specifically in Mutagen?
A:
Here's how I was able to pull it off.
First, get access to the file in Mutagen:
audio = MP3("filename.mp3")
Then, get a reference to the tag you're looking for:
picturetag = audio.tags['APIC:Folder.jpg']
Then, modify the type attribute:
picturetag.type = 3
Then, assign it back into the audio file, just to be sure
audio.tags['APIC:Folder.jpg'] = picturetag
Finally, save it!
audio.save()
And you're there! The APIC tag comes with its own class that sports everything you'd need to modify pictures and picture tagging info. Happy music organizing!
| Change embedded image type in APIC ID3 tag via Mutagen | I have a large music library which I have just spent around 30 hours organizing. For some of the MP3 files, I embedded the cover art image as type 0 (Other) and I'd like to change it to type 3 (Front Cover). Is there a way to do this in Python, specifically in Mutagen?
| [
"Here's how I was able to pull it off.\nFirst, get access to the file in Mutagen:\naudio = MP3(\"filename.mp3\")\n\nThen, get a reference to the tag you're looking for:\npicturetag = audio.tags['APIC:Folder.jpg']\n\nThen, modify the type attribute:\npicturetag.type = 3\n\nThen, assign it back into the audio file, just to be sure\naudio.tags['APIC:Folder.jpg'] = picturetag\n\nFinally, save it!\naudio.save()\n\nAnd you're there! The APIC tag comes with its own class that sports everything you'd need to modify pictures and picture tagging info. Happy music organizing!\n"
] | [
8
] | [] | [] | [
"apic",
"id3",
"mp3",
"mutagen",
"python"
] | stackoverflow_0002970473_apic_id3_mp3_mutagen_python.txt |
Q:
Is processing a dead project?
Look at the last updated release, Python 2.5??
http://pypi.python.org/pypi/processing
A:
It became multiprocessing.
| Is processing a dead project? | Look at the last updated release, Python 2.5??
http://pypi.python.org/pypi/processing
| [
"It became multiprocessing.\n"
] | [
5
] | [] | [] | [
"python"
] | stackoverflow_0002970871_python.txt |
Q:
win32com equivalent of xlrd's sheet.ncols
xlrd makes it pretty easy to know what the last column is.
is there an easy way using win32com?
I have tried using ws.UsedRange.Rows.Count but this doesn't seem to give a correct answer.
A:
That's defined to give the count of rows in the used range (which may not start at cell A1). You need the number of columns in the worksheet.
Try something like this:
used = ws.UsedRange
nrows = used.Row + used.Rows.Count - 1
ncols = used.Column + used.Columns.Count - 1
| win32com equivalent of xlrd's sheet.ncols | xlrd makes it pretty easy to know what the last column is.
is there an easy way using win32com?
I have tried using ws.UsedRange.Rows.Count but this doesn't seem to give a correct answer.
| [
"That's defined to give the count of rows in the used range (which may not start at cell A1). You need the number of columns in the worksheet.\nTry something like this:\nused = ws.UsedRange\nnrows = used.Row + used.Rows.Count - 1\nncols = used.Column + used.Columns.Count - 1\n\n"
] | [
6
] | [] | [] | [
"com",
"excel",
"python",
"win32com",
"xlrd"
] | stackoverflow_0002968830_com_excel_python_win32com_xlrd.txt |
Q:
best way to form Validation on gae
(1) is this way : http://code.google.com/intl/en/appengine/articles/djangoforms.html
(2) is write by self :
#!/usr/bin/env python2.5
#----------------------------
# Datastore models for user & signup
#----------------------------
from base64 import b64encode as b64
from hashlib import md5, sha256
from random import randint
from time import time
from google.appengine.ext import db
N_SALT = 8 # length of the password salt
def salt_n_hash(password, salt=None):
"""
Generate a salt and return in base64 encoding the hash of the
password with the salt and the character '$' prepended to it.
"""
salt = salt or b64( ''.join(chr(randint(0, 0xff)) for _ in range(N_SALT)) )
return salt + '$' + b64( sha256(salt+password.encode("ascii")).digest() )
class User(db.Model):
nickname = db.StringProperty(required=True)
email = db.EmailProperty(required=True)
pwd = db.StringProperty(required=True)
suspended = db.BooleanProperty(default=True)
@classmethod
def authenticate(klass, nickname, password):
"""Return an User() entity instance if password is correct"""
user = klass.get_by_key_name(nickname)
if user:
n_salt = user.pwd.index('$')
if user.pwd == salt_n_hash(password, salt=user.pwd[:n_salt]):
return user
def __eq__(self, other):
return self.nickname == other.nickname
def signup_id(nickname):
return md5( nickname + repr(time()) ).hexdigest()
class UserSignup(db.Model):
user = db.ReferenceProperty(User, required=True)
date = db.DateProperty(auto_now_add=True)
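As a sanity check that authenticate can recover the salt and reproduce the hash, here is a Python 3 sketch of the same scheme (b64encode works on bytes in Python 3, hence the small differences):

```python
from base64 import b64encode
from hashlib import sha256
from random import randint

N_SALT = 8  # length of the password salt, as above

def salt_n_hash(password, salt=None):
    # generate a salt, then return "<salt>$<b64(sha256(salt+password))>"
    if salt is None:
        salt = b64encode(bytes(randint(0, 0xff) for _ in range(N_SALT))).decode('ascii')
    digest = b64encode(sha256((salt + password).encode('ascii')).digest()).decode('ascii')
    return salt + '$' + digest

stored = salt_n_hash('secret')
salt = stored[:stored.index('$')]          # base64 never contains '$'
assert salt_n_hash('secret', salt=salt) == stored
assert salt_n_hash('wrong', salt=salt) != stored
```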
Which way is better?
Or do you have a better way to do this, e.g. a simple form-validation framework?
Thanks
A:
If you're using Django, djangoforms is definitely the way to go. If tipfy or other light-weight frameworks, try wtforms (it's also in the tipfy source tree).
| best way to form Validation on gae | (1) is this way : http://code.google.com/intl/en/appengine/articles/djangoforms.html
(2) is write by self :
#!/usr/bin/env python2.5
#----------------------------
# Datastore models for user & signup
#----------------------------
from base64 import b64encode as b64
from hashlib import md5, sha256
from random import randint
from time import time
from google.appengine.ext import db
N_SALT = 8 # length of the password salt
def salt_n_hash(password, salt=None):
"""
Generate a salt and return in base64 encoding the hash of the
password with the salt and the character '$' prepended to it.
"""
salt = salt or b64( ''.join(chr(randint(0, 0xff)) for _ in range(N_SALT)) )
return salt + '$' + b64( sha256(salt+password.encode("ascii")).digest() )
class User(db.Model):
nickname = db.StringProperty(required=True)
email = db.EmailProperty(required=True)
pwd = db.StringProperty(required=True)
suspended = db.BooleanProperty(default=True)
@classmethod
def authenticate(klass, nickname, password):
"""Return an User() entity instance if password is correct"""
user = klass.get_by_key_name(nickname)
if user:
n_salt = user.pwd.index('$')
if user.pwd == salt_n_hash(password, salt=user.pwd[:n_salt]):
return user
def __eq__(self, other):
return self.nickname == other.nickname
def signup_id(nickname):
return md5( nickname + repr(time()) ).hexdigest()
class UserSignup(db.Model):
user = db.ReferenceProperty(User, required=True)
date = db.DateProperty(auto_now_add=True)
Which way is better?
Or do you have a better way to do this, e.g. a simple form-validation framework?
Thanks
| [
"If you're using Django, djangoforms is definitely the way to go. If tipfy or other light-weight frameworks, try wtforms (it's also in the tipfy source tree).\n"
] | [
1
] | [] | [] | [
"forms",
"google_app_engine",
"python",
"validation"
] | stackoverflow_0002971093_forms_google_app_engine_python_validation.txt |
Q:
Unexpected Blank lines in python output
I have a bit of code that runs through a dictionary and outputs the values from it in CSV format. Strangely, I'm getting a couple of blank lines where the output of the dictionary entries should be. I've read the code and can't understand how anything except lines with commas could be output. The blank line should have values in it, so an extra \n is not the cause. Can anyone advise why I'd be getting blank lines? Other times I run it, the missing line appears.
Missing line:
6415, 6469, -4.60, clerical, 2, ,,,joe,030193027org,joelj,030155640dup
Using python 2.6.5
Bit of code:
tfile = file(path, 'w')
tfile.write('Rec_ID_A, Rec_ID_B, Weight, Assigned, Run, By, On, Comment\n')
rec_num_a = 0
while (rec_num_a <= max_rec_num_a):
try:
value = self.dict['DA'+str(rec_num_a)]
except:
value = [0,0,0,'rejected']
if (value[3]!='rejected'):
weightValue = "%0.2f" % value[2]
line = value[0][1:] + ', ' + value[1][1:] + ', ' + weightValue \
+ ', ' + str(value[3]) + ', ' + str(value[4])
if (len(value)>5):
line = line + ', ' + value[5] + ',' + value[6] + ',' + value[7]
(a_pkey, b_pkey) = self.derive_pkeys(value)
line = line + a_pkey + b_pkey
tfile.write( line + '\n')
rec_num_a +=1
Sample output
6388, 2187, 76.50, clerical, 1, ,,,cameron,030187639org,cameron,030187639org
6398, 2103, 70.79, clerical, 1, ,,,caleb,030189225org,caldb,030189225dup
6402, 2205, 1.64, clerical, 2, ,,,jenna,030190334org,cameron,020305169dup
6409, 7892, 79.09, clerical, 1, ,,,liam,030191863org,liam,030191863org
6416, 11519, 79.09, clerical, 1, ,,,thomas,030193156org,thomas,030193156org
6417, 8854, 6.10, clerical, 2, ,,,ruby,030193713org,mia,020160397org
6421, 2864, -0.84, clerical, 2, ,,,kristin,030194394org,connou,020023478dup
6423, 413, 75.63, clerical, 1, ,,,adrian,030194795org,adriah,030194795dup
A:
Why are you not using Python's built-in csv module?
Then, what does self.derive_pkeys(value) do? Could it be that b_pkey sometimes ends with \n?
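On the first point, a sketch of producing the same output via the stdlib csv module (the field values here are made up from the sample output):

```python
import csv
import io

header = ['Rec_ID_A', 'Rec_ID_B', 'Weight', 'Assigned', 'Run', 'By', 'On', 'Comment']
rows = [
    ['6415', '6469', '-4.60', 'clerical', '2', '', '', ',joe,030193027org,joelj,030155640dup'],
]
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(header)
for row in rows:
    # csv.writer quotes fields containing commas or newlines, so stray
    # characters in a value cannot silently break the line structure
    writer.writerow(row)
print(buf.getvalue())
```

In Python 2 the output file would be opened in binary mode ('wb') and handed straight to csv.writer.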
A:
without seeing the source data it is hard to tell, but I could speculate that your data has some stray \n characters in it, like in b_pkey . You could try and do a .strip() on that value to make sure it is clean.
A:
(1) Please say precisely what a "blank line" is -- contains only a newline? contains one or more spaces? Other whitespace characters?
(2) How did you determine the answer to Q1? ["looked at in a text editor" is not a good answer, nor is "printed it to my terminal and eyeballed it"; try print repr(line)]
(3) How have you determined that the "missing data" is actually in the input dictionary?
(4) Some runs work, some don't ... so what else is different? From what is the dict populated? A multiuser database? A static file?
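On point (2), a tiny illustration of why print repr(line) beats eyeballing (the sample value is invented):

```python
suspect = '6415, 6469, -4.60, clerical, 2\n'  # imagine this came back from derive_pkeys
clean = suspect.strip()                       # strip() removes the invisible newline
print(repr(suspect))
print(repr(clean))
```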
| Unexpected Blank lines in python output | I have a bit of code that runs through a dictionary and outputs the values from it in CSV format. Strangely, I'm getting a couple of blank lines where the output of the dictionary entries should be. I've read the code and can't understand how anything except lines with commas could be output. The blank line should have values in it, so an extra \n is not the cause. Can anyone advise why I'd be getting blank lines? Other times I run it, the missing line appears.
Missing line:
6415, 6469, -4.60, clerical, 2, ,,,joe,030193027org,joelj,030155640dup
Using python 2.6.5
Bit of code:
tfile = file(path, 'w')
tfile.write('Rec_ID_A, Rec_ID_B, Weight, Assigned, Run, By, On, Comment\n')
rec_num_a = 0
while (rec_num_a <= max_rec_num_a):
try:
value = self.dict['DA'+str(rec_num_a)]
except:
value = [0,0,0,'rejected']
if (value[3]!='rejected'):
weightValue = "%0.2f" % value[2]
line = value[0][1:] + ', ' + value[1][1:] + ', ' + weightValue \
+ ', ' + str(value[3]) + ', ' + str(value[4])
if (len(value)>5):
line = line + ', ' + value[5] + ',' + value[6] + ',' + value[7]
(a_pkey, b_pkey) = self.derive_pkeys(value)
line = line + a_pkey + b_pkey
tfile.write( line + '\n')
rec_num_a +=1
Sample output
6388, 2187, 76.50, clerical, 1, ,,,cameron,030187639org,cameron,030187639org
6398, 2103, 70.79, clerical, 1, ,,,caleb,030189225org,caldb,030189225dup
6402, 2205, 1.64, clerical, 2, ,,,jenna,030190334org,cameron,020305169dup
6409, 7892, 79.09, clerical, 1, ,,,liam,030191863org,liam,030191863org
6416, 11519, 79.09, clerical, 1, ,,,thomas,030193156org,thomas,030193156org
6417, 8854, 6.10, clerical, 2, ,,,ruby,030193713org,mia,020160397org
6421, 2864, -0.84, clerical, 2, ,,,kristin,030194394org,connou,020023478dup
6423, 413, 75.63, clerical, 1, ,,,adrian,030194795org,adriah,030194795dup
| [
"Why are you not using Python's built-in csv module?\nThen, what does self.derive_pkeys(value) do? Could it be that b_pkey sometimes ends with \\n?\n",
"without seeing the source data it is hard to tell, but I could speculate that your data has some stray \\n characters in it, like in b_pkey . You could try and do a .strip() on that value to make sure it is clean.\n",
"(1) Please say precisely what a \"blank line\" is -- contains only a newline? contains one or more spaces? Other whitespace characters? \n(2) How did you determine the answer to Q1? [\"looked at in a text editor\" is not a good answer, nor is \"printed it to my terminal and eyeballed it\"; try print repr(line)]\n(3) How have you determined that the \"missing data\" is actually in the input dictionary?\n(4) Some runs work, some don't ... so what else is different? From what is the dict populated? A multiuser database? A static file? \n"
] | [
3,
1,
1
] | [] | [] | [
"csv",
"file",
"python"
] | stackoverflow_0002971804_csv_file_python.txt |
Q:
Python iteration
I'm trying to do a simple script in Python that will print hex values and increment the value like this:
char = 0
char2 = 0
def doublehex():
global char,char2
for x in range(255):
char = char + 1
a = str(chr(char)).encode("hex")
for p in range(255):
char2 = char2 + 1
b = str(chr(char2)).encode("hex")
c = a+" "+b
print "testing with:%s"%(c)
doublehex()
Output:
testing with:01 01
testing with:01 02
testing with:01 03
[snip]
testing with:01 fd
testing with:01 fe
testing with:01 ff
Traceback (most recent call last):
File "test2.py", line 16, in doublehex
b = str(chr(char2)).encode("hex")
ValueError: chr() arg not in range(256)
Actually what I'm trying to do is:
01 01
01 02
[snip]
01 ff
02 01
02 02
And so on, until ff ff. What's wrong in my script?
Also it seems I can't try:
00 01
00 02
I don't know why.
A:
for x in xrange(256):
for y in xrange(256):
print '%02x %02x' % (x, y)
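The same idea collected into a list with itertools (a Python 3-compatible sketch); %02x zero-pads, so the 00 01-style pairs from the question fall out naturally:

```python
from itertools import product

# all two-byte hex pairs from "00 00" up to "ff ff"
pairs = ['%02x %02x' % (x, y) for x, y in product(range(256), repeat=2)]
print(pairs[0])    # 00 00
print(pairs[1])    # 00 01
print(pairs[-1])   # ff ff
```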
A:
You need to set char2 = 0 before
for p in range(255):
And actually, you don't need counters - char,char2
Following will work from 0 to ff
for x in range(256):
for p in range(256):
print chr(x).encode("hex"),chr(p).encode("hex")
A:
Why not something simple like?
for x in range(0, int("FFFF", 16)):
print "%x" % x
A:
A one liner as well (minus the import):
import itertools
hexs = itertools.product(*([[chr(x).encode("hex") for x in range(256)]] * 2))
A:
To print a hex value, just do something like this:
for i in range(255):
print "%x" % i
A:
If you're using python 2.6, there is a 4-line way to do what you're trying:
import itertools
char_pair_list = itertools.product(range(256),range(256))
for char_pair in char_pair_list:
print str(chr(char_pair[0])).encode("hex"), ',' , str(chr(char_pair[1])).encode("hex")
| Python iteration | I'm trying to do a simple script in Python that will print hex values and increment the value like this:
char = 0
char2 = 0
def doublehex():
global char,char2
for x in range(255):
char = char + 1
a = str(chr(char)).encode("hex")
for p in range(255):
char2 = char2 + 1
b = str(chr(char2)).encode("hex")
c = a+" "+b
print "testing with:%s"%(c)
doublehex()
Output:
testing with:01 01
testing with:01 02
testing with:01 03
[snip]
testing with:01 fd
testing with:01 fe
testing with:01 ff
Traceback (most recent call last):
File "test2.py", line 16, in doublehex
b = str(chr(char2)).encode("hex")
ValueError: chr() arg not in range(256)
Actually what I'm trying to do is:
01 01
01 02
[snip]
01 ff
02 01
02 02
And so on, until ff ff. What's wrong in my script?
Also it seems I can't try:
00 01
00 02
I don't know why.
| [
"for x in xrange(256):\n for y in xrange(256):\n print '%02x %02x' % (x, y)\n\n",
"You need to set char2 = 0 before\nfor p in range(255):\n\nAnd actually, you don't need counters - char,char2\nFollowing will work from 0 to ff\nfor x in range(256):\n for p in range(256):\n print chr(x).encode(\"hex\"),chr(p).encode(\"hex\")\n\n",
"Why not something simple like?\nfor x in range(0, int(\"FFFF\", 16)):\n print \"%x\" % x\n\n",
"A one liner as well (minus the import):\nimport itertools\nhexs = itertools.product(*([[chr(x).encode(\"hex\") for x in range(256)]] * 2))\n\n",
"To print a hex value, just do something like this:\nfor i in range(255):\n print \"%x\" % i\n\n",
"If you're using python 2.6, there is a 4-line way to do what you're trying:\nimport itertools\n\nchar_pair_list = itertools.product(range(256),range(256))\nfor char_pair in char_pair_list:\n print str(chr(char_pair[0])).encode(\"hex\"), ',' , str(chr(char_pair[1])).encode(\"hex\")\n\n"
] | [
6,
4,
1,
1,
0,
0
] | [] | [] | [
"for_loop",
"hex",
"increment",
"loops",
"python"
] | stackoverflow_0002972048_for_loop_hex_increment_loops_python.txt |
Q:
python date appears in last year
How do I check in Python that a date falls within the last year, i.e. between now and (now - 1 year)?
Thanks
A:
In [10]: today=datetime.date.today()
In [11]: datetime.date(2010,5,5) < today
Out[11]: True
In [12]: today-datetime.timedelta(days=365) <= datetime.date(2010,5,5) < today
Out[12]: True
In [13]: today-datetime.timedelta(days=365) <= datetime.date(2009,5,5) < today
Out[13]: False
Edit: if today is the leap year 2000-2-29, then today-datetime.timedelta(days=365) is 1999-3-1. If you'd like one year ago to be 1999-2-28 then you could use
def add_years(date,num):
try:
result=datetime.date(date.year+num,date.month,date.day)
except ValueError:
result=datetime.date(date.year+num,date.month,date.day-1)
return result
today=datetime.date(2000,2,29)
print(add_years(today,-1))
# 1999-02-28
A:
This should work for leap years:
>>> from datetime import date
>>> today = date.today()
>>> date(today.year - 1, today.month, today.day) < date(2009, 06, 05) <= today
True
>>> date(today.year - 1, today.month, today.day) < date(2009, 06, 04) <= today
False
>>> date(today.year - 1, today.month, today.day) < date(2010, 07, 04) <= today
False
| python date appears in last year | How do I check in Python that a date falls within the last year, i.e. between now and (now - 1 year)?
Thanks
| [
"In [10]: today=datetime.date.today()\n\nIn [11]: datetime.date(2010,5,5) < today\nOut[11]: True\n\nIn [12]: today-datetime.timedelta(days=365) <= datetime.date(2010,5,5) < today\nOut[12]: True\n\nIn [13]: today-datetime.timedelta(days=365) <= datetime.date(2009,5,5) < today\nOut[13]: False\n\nEdit: if today is the leap year 2000-2-29, then today-datetime.timedelta(days=365) is 1999-3-1. If you'd like one year ago to be 1999-2-28 then you could use \ndef add_years(date,num):\n try:\n result=datetime.date(date.year+num,date.month,date.day)\n except ValueError:\n result=datetime.date(date.year+num,date.month,date.day-1)\n return result\n\ntoday=datetime.date(2000,2,29)\nprint(add_years(today,-1))\n# 1999-02-28\n\n",
"This should work for leap years:\n>>> from datetime import date\n>>> today = date.today()\n>>> date(today.year - 1, today.month, today.day) < date(2009, 06, 05) <= today\nTrue\n>>> date(today.year - 1, today.month, today.day) < date(2009, 06, 04) <= today\nFalse\n>>> date(today.year - 1, today.month, today.day) < date(2010, 07, 04) <= today\nFalse\n\n"
] | [
2,
2
] | [] | [] | [
"python"
] | stackoverflow_0002972742_python.txt |
Q:
pydev 1.5.3 not working fine with Easy Eclipse 1.3.1
I installed Pydev 1.5.3 (so that I could get the merged version of Pydev Extensions in core PyDev) in an EasyEclipse 1.3.1 installation. After this, Compare with > Base revision etc. comparison operations stopped working. I had to disable the PyDev 1.5.3 and revert back to the pre-installed Pydev 1.3.13 (part of EasyEclipse 1.3.1).
Has anybody faced similar problem? Is there any work-around for this?
A:
My pydev broke entirely with 1.5.3.
I had to downgrade yum downgrade eclipse-pydev and keep yum from updating it ever since.
A:
I am now using PyDev 1.5.6 and its working fine with EasyEclipse (along with SubClipse). The issues in comparison seem to have been resolved. In fact, the file diff in 1.5.6 is looking much more beautiful than before.
| pydev 1.5.3 not working fine with Easy Eclipse 1.3.1 | I installed Pydev 1.5.3 (so that I could get the merged version of Pydev Extensions in core PyDev) in an EasyEclipse 1.3.1 installation. After this, Compare with > Base revision etc. comparison operations stopped working. I had to disable the PyDev 1.5.3 and revert back to the pre-installed Pydev 1.3.13 (part of EasyEclipse 1.3.1).
Has anybody faced similar problem? Is there any work-around for this?
| [
"My pydev broke entirely with 1.5.3.\nI had to downgrade yum downgrade eclipse-pydev and keep yum from updating it ever since.\n",
"I am now using PyDev 1.5.6 and its working fine with EasyEclipse (along with SubClipse). The issues in comparison seem to have been resolved. In fact, the file diff in 1.5.6 is looking much more beautiful than before. \n"
] | [
0,
0
] | [] | [] | [
"eclipse",
"pydev",
"python"
] | stackoverflow_0001938929_eclipse_pydev_python.txt |
Q:
django basic pagination problem
i have a microblog app, and i'm trying to paginate the entries, to show only 10 per page, for example.
though i've followed the tutorial, my pagination doesn't seem to be working.
the listing function looks like that:
def listing(request):
blog_list = Blog.objects.all()
paginator = Paginator(blog_list, 10)
try:
page = int(request.GET.get('page','1'))
except ValueError:
page = 1
try:
posts = paginator.page(page)
except (EmptyPage, InvalidPage):
posts = paginator.page(paginator.num_pages)
return render_to_response('profile/publicProfile.html', {"posts": posts})
and in my template:
<div class="pagination">
<span class="step-links">
{% if posts.has_previous %}
<a href="?page={{ posts.previous_page_number }}">previous</a>
{% endif %}
<span class="current">
Page {{ posts.number }} of {{ posts.paginator.num_pages }}.
</span>
{% if object.has_next %}
<a href="?page={{ posts.next_page_number }}">next</a>
{% endif %}
</span>
thanks!
A:
You can use django-pagination which makes it possible to implement pagination without writing a single line of Python code, you only pass list of all objects to template (i.e. blog_list = Blog.objects.all() in your case), and then use three tags in you template:
{% load pagination_tags %}
{% autopaginate blog_list 10 %}
{% paginate %}
A:
Return the object_list generic view that takes the paginate_by argument, rather than return the render_to_response
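For intuition, a Paginator page is essentially a slice of the object list; roughly (a plain-Python sketch, with a list standing in for the queryset):

```python
blog_list = list(range(23))                  # stand-in for Blog.objects.all()
per_page = 10
num_pages = -(-len(blog_list) // per_page)   # ceiling division -> 3 pages

page = 2                                     # would come from request.GET['page']
page = min(max(page, 1), num_pages)          # clamp, like the EmptyPage fallback
posts = blog_list[(page - 1) * per_page : page * per_page]
```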
| django basic pagination problem | i have a microblog app, and i'm trying to paginate the entries, to show only 10 per page, for example.
though i've followed the tutorial, my pagination doesn't seem to be working.
the listing function looks like that:
def listing(request):
blog_list = Blog.objects.all()
paginator = Paginator(blog_list, 10)
try:
page = int(request.GET.get('page','1'))
except ValueError:
page = 1
try:
posts = paginator.page(page)
except (EmptyPage, InvalidPage):
posts = paginator.page(paginator.num_pages)
return render_to_response('profile/publicProfile.html', {"posts": posts})
and in my template:
<div class="pagination">
<span class="step-links">
{% if posts.has_previous %}
<a href="?page={{ posts.previous_page_number }}">previous</a>
{% endif %}
<span class="current">
Page {{ posts.number }} of {{ posts.paginator.num_pages }}.
</span>
{% if object.has_next %}
<a href="?page={{ posts.next_page_number }}">next</a>
{% endif %}
</span>
thanks!
| [
"You can use django-pagination which makes it possible to implement pagination without writing a single line of Python code, you only pass list of all objects to template (i.e. blog_list = Blog.objects.all() in your case), and then use three tags in you template:\n {% load pagination_tags %}\n {% autopaginate blog_list 10 %}\n {% paginate %}\n\n",
"Return the object_list generic view that takes the paginate_by argument, rather than return the render_to_response\n"
] | [
6,
5
] | [] | [] | [
"django",
"pagination",
"python"
] | stackoverflow_0002973151_django_pagination_python.txt |
Q:
Can anyone figure out my problem [Python]
I have been trying to debug the Python CGI code below, but it doesn't seem to work. When I try these three lines in a new file, they seem to work:
filename=unique_file('C:/wamp/www/project/input.fasta')
prefix, suffix = os.path.splitext(filename)
fd, filename = tempfile.mkstemp(suffix, prefix+"_", dirname)
But when I try it this way, I get the error that unique_file is not defined:
form=cgi.FieldStorage()
i=(form["dfile"].value)
j=(form["sequence"].value)
if (i!="" and j=="" ):
filename=(form["dfile"].filename)
(name, ext) = os.path.splitext(filename)
alignfile=name + '.aln'
elif(j!="" and i==""):
filename=unique_file('C:/wamp/www/project/input.fasta')
prefix, suffix = os.path.splitext(filename)
fd, filename = tempfile.mkstemp(suffix, prefix+"_", dirname)
file = open(filename, 'w')
value=str(j)
file.write(value)
file.close()
(name, ext) = os.path.splitext(filename)
alignfile=name + '.aln'
What I am trying to do is check two options from the form: file upload and textarea. If the file upload is set, there is nothing to do except separate the file name and its extension. But when the textarea is set, I have to generate a unique file name, write the content into it, and pass on the file name and its extension.
The error I got is...
<type 'exceptions.NameError'>: name 'unique_file' is not defined
args = ("name 'unique_file' is not defined",)
message = "name 'unique_file' is not defined"
Any suggestions and corrections are appreciated
Thanks for your concern
A:
unique_file() isn't a built-in function of Python. So I assume, either you forget a line in your first code snippet which actually imports this function, or you configured your python interpreter to load a startup file (http://docs.python.org/using/cmdline.html#envvar-PYTHONSTARTUP). In the second case, the CGI script can't find this function because it runs with the web server identity which probably lacks the PYTHONSTARTUP env. variable definition.
A:
You need to either import or define your unique_file method before using it.
They will look something like:
from mymodule import unique_file
or:
def unique_file():
# return a unique file
A:
Usually when a compiler or interpreter says something isn't defined, that's precisely what the problem is. So, you have to answer the question "why is it not defined?". Have you actually defined a method named "unique_file"? If so, maybe the name is misspelled, or maybe it's not defined before this code is executed.
If the function is in another file or module, have you imported that module to gain access to the function?
When you say it works in one way but not the other, what's the difference? Does one method auto-import some functions that the other does not?
Since unique_file is not a built-in command, you're probably forgetting to actually define a function with that name, or forgetting to import it from an existing module.
| Can anyone figure out my problem [Python] | I have been trying to debug the Python CGI code below, but it doesn't seem to work. When I try these three lines in a new file, they seem to work:
filename=unique_file('C:/wamp/www/project/input.fasta')
prefix, suffix = os.path.splitext(filename)
fd, filename = tempfile.mkstemp(suffix, prefix+"_", dirname)
But when I try it this way, I get the error that unique_file is not defined:
form=cgi.FieldStorage()
i=(form["dfile"].value)
j=(form["sequence"].value)
if (i!="" and j=="" ):
filename=(form["dfile"].filename)
(name, ext) = os.path.splitext(filename)
alignfile=name + '.aln'
elif(j!="" and i==""):
filename=unique_file('C:/wamp/www/project/input.fasta')
prefix, suffix = os.path.splitext(filename)
fd, filename = tempfile.mkstemp(suffix, prefix+"_", dirname)
file = open(filename, 'w')
value=str(j)
file.write(value)
file.close()
(name, ext) = os.path.splitext(filename)
alignfile=name + '.aln'
What I am trying to do is check two options from the form: file upload and textarea. If the file upload is set, there is nothing to do except separate the file name and its extension. But when the textarea is set, I have to generate a unique file name, write the content into it, and pass on the file name and its extension.
The error I got is...
<type 'exceptions.NameError'>: name 'unique_file' is not defined
args = ("name 'unique_file' is not defined",)
message = "name 'unique_file' is not defined"
Any suggestions and corrections are appreciated
Thanks for your concern
| [
"unique_file() isn't a built-in function of Python. So I assume, either you forget a line in your first code snippet which actually imports this function, or you configured your python interpreter to load a startup file (http://docs.python.org/using/cmdline.html#envvar-PYTHONSTARTUP). In the second case, the CGI script can't find this function because it runs with the web server identity which probably lacks the PYTHONSTARTUP env. variable definition.\n",
"You need to either import or define your unique_file method before using it.\nThey will look something like:\nfrom mymodule import unique_file\n\nor:\ndef unique_file():\n # return a unique file\n\n",
"Usually when a compiler or interpreter says something isn't defined, that's precisely what the problem is. So, you have to answer the question \"why is it not defined?\". Have you actually defined a method named \"unique_file\"? If so, maybe the name is misspelled, or maybe it's not defined before this code is executed.\nIf the function is in another file or module, have you imported that module to gain access to the function? \nWhen you say it works in one way but not the other, what's the difference? Does one method auto-import some functions that the other does not?\nSince unique_file is not a built-in command, you're probably forgetting to actually define a function with that name, or forgetting to import it from an existing module. \n"
] | [
2,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0002973545_python.txt |
Q:
Django chat with ajax polling
I need to create a chat similar to facebook chat.
I am thinking of creating a simple Chat application and then using AJAX polling (sending a request every 2-3 seconds).
Is this a good approach?
A:
I'd go with something that involves push/real-time messaging controlled by the server. You'll get proper real-time chat and it will scale a lot better. Take a look at http://www.orbited.org/ which is the way to go, I reckon. It's not core django, but it's Python and will sit well alongside a Django app on your server
A:
ajax is the best here
what you will need:
1) server view that will return recent messages
2) client-side caller by timer (I prefer jQuery and its timers plugin) and success handler, that will populate the chat window
| Django chat with ajax polling | I need to create a chat similar to facebook chat.
I am thinking of creating a simple Chat application and then using AJAX polling (sending a request every 2-3 seconds).
Is this a good approach?
| [
"I'd go with something that involves push/real-time messaging controlled by the server. You'll get proper real-time chat and it will scale a lot better. Take a look at http://www.orbited.org/ which is the way to go, I reckon. It's not core django, but it's Python and will sit well alongside a Django app on your server \n",
"ajax is the best here\nwhat you will need:\n1) server view that will return recent messages\n2) client-side caller by timer (I prefer jQuery and its timers plugin) and success handler, that will populate the chat window\n"
] | [
6,
0
] | [] | [] | [
"ajax",
"chat",
"comet",
"django",
"python"
] | stackoverflow_0002973591_ajax_chat_comet_django_python.txt |
Q:
Reorganizing many to many relationships in Django
I have a many to many relationship in my models and i'm trying to reorganize it on one of my pages.
My site has videos. On each video's page I'm trying to list the actors that are in that video, with links to each time they appear in the video (the links will skip to that part of the video).
Here's an illustration
Flash Video embedded here
Actors...
Ted smith: 1:25, 5:30
jon jones: 5:00, 2:00
Here are the pertinent parts of my models
class Video(models.Model):
    actor = models.ManyToManyField( Actor, through='Actor_Video' )
    # more stuff removed

class Actor_Video(models.Model):
    actor = models.ForeignKey( Actor )
    video = models.ForeignKey( Video )
    time = models.IntegerField()
Here's what my Actor_Video table looks like, maybe it will be easier to see what im doing
id actor_id video_id time (in seconds)
1 1 3 34
2 1 3 90
i feel like i have to reorganize the info in my view, but i cant figure it out. It doesn't seem to be possible in the template using djangos orm. I've tried a couple things with creating dictionaries/lists but i've had no luck. Any help is appreciated. Thanks.
A:
I think the most Django-ish way of doing this would be using the "regroup" template tag:
{% regroup video.actor_video_set.all by actor as video_times %}
{% for actor_times in video_times %}
  <li>{{ actor_times.grouper }}: # this will output the actor's name
    {% for item in actor_times.list %}
      <li>{{ item.time }}</li> # this will output the time
    {% endfor %}
  </li>
{% endfor %}
That way you'd avoid having to use more logic than you want in your template. BTW, you can read on the regroup tag here
A:
I fashioned it into a dictionary of time lists
actor_sets = data['video'].video_actor_set.all()
data['actors'] = {}

for actor_set in actor_sets:
    if not data['actors'].has_key( actor_set.actor ):
        data['actors'][actor_set.actor] = []
    data['actors'][actor_set.actor].append( actor_set.time )
And in the template i looped over that instead of running the queries in the actual template
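The same grouping can be written a little more compactly with collections.defaultdict, which removes the has_key check (a sketch with plain strings standing in for the actor objects):

```python
from collections import defaultdict

# Stand-ins for the (actor, time) pairs from the through table.
actor_sets = [("Ted Smith", 85), ("Ted Smith", 330), ("Jon Jones", 300)]

actors = defaultdict(list)
for actor, seconds in actor_sets:
    # Missing keys are created as empty lists automatically.
    actors[actor].append(seconds)

# actors now maps each actor to the list of times they appear.
```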
A:
I would suggest putting your logic in the view function rather than the template. If I understand correctly, on each page you have only one video, which makes things reasonably simple
def video_view(request, video_id):
    video = Video.objects.get(pk=video_id)
    actors = Actor.objects.filter(video=video)
    #now add a custom property to each actor called times
    #which represents a sorted list of times they appear in this video
    for actor in actors:
        actor.times = [at.time for at in actor.actor_video_set.filter(video=video).order_by('time')] #check syntax here
then in the template, you can just loop through actor.times:
<ul>
{% for actor in actors %}
  <li>{{ actor }}:

  <ul>
  {% for t in actor.times %} #this now returns only the times corresponding to this actor/video
    <li><a href="?time={{ t }}">{{ t }}</a></li> #these are now sorted
NB - wrote all the code here without using an IDE, you'll need to check syntax. hope it helps!
for bonus points: define a times(video) function as a custom function of the Actor model class
| Reorganizing many to many relationships in Django | I have a many to many relationship in my models and i'm trying to reorganize it on one of my pages.
My site has videos. On each video's page i'm trying to list the actors that are in that video with links to each time they are in the video(the links will skip to that part of the video)
Here's an illustration
Flash Video embedded here
Actors...
Ted smith: 1:25, 5:30
jon jones: 5:00, 2:00
Here are the pertinent parts of my models
class Video(models.Model):
    actor = models.ManyToManyField( Actor, through='Actor_Video' )
    # more stuff removed

class Actor_Video(models.Model):
    actor = models.ForeignKey( Actor )
    video = models.ForeignKey( Video )
    time = models.IntegerField()
Here's what my Actor_Video table looks like, maybe it will be easier to see what im doing
id actor_id video_id time (in seconds)
1 1 3 34
2 1 3 90
i feel like i have to reorganize the info in my view, but i cant figure it out. It doesn't seem to be possible in the template using djangos orm. I've tried a couple things with creating dictionaries/lists but i've had no luck. Any help is appreciated. Thanks.
| [
"I think the most Django-ish way of doing this would be using the \"regroup\" template tag:\n{% regroup video.actor_video_set.all by actor as video_times %}\n{% for actor_times in video_times %}\n <li>{{ actor_times.grouper }}: # this will output the actor's name\n {% for time in actor_times %}\n <li>{{ time }}</li> # this will output the time\n {% endfor %}\n </li>\n{% endfor %}\n\nThat way you'd avoid having to use more logic than you want in your template. BTW, you can read on the regroup tag here\n",
"I fashioned it into a dictionary of time lists\nactor_sets = data['video'].video_actor_set.all()\ndata['actors'] = {}\n\nfor actor_set in actor_sets:\n if not data['actors'].has_key( actor_set.actor ):\n data['actors'][actor_set.actor] = []\n data['actors'][actor_set.actor].append( actor_set.time )\n\nAnd in the template i looped over that instead of running the queries in the actual template\n",
"I would suggest putting your logic in the view function rather than the template. If I understand correctly, on each page you have only one video, which makes things reasonably simple\ndef video_view(request,video_id)\n video = Video.objects.get(pk=video_id)\n actors = Actor.objects.filter(video=video)\n #now add a custom property to each actor called times\n #which represents a sorted list of times they appear in this video\n for actor in actors:\n actor.times = [at.time for at in actor.actor_video_set.filter(video=video).order_by('time')] #check syntax here\n\nthen in the template, you can just loop through actor.times:\n<ul>\n{% for actor in video.actors.all.distinct %}\n <li>{{ actor }}:\n\n <ul>\n {% for t in actor.times %} #this now returns only the times corresponding to this actor/video\n <li><a href=\"?time={{ t.time }}\">{{ t.time }}</a></li> #these are now sorted\n\nNB - wrote all the code here without using an IDE, you'll need to check syntax. hope it helps!\nfor bonus points: define a times(video) function as a custom function of the Actor model class\n"
] | [
1,
0,
0
] | [] | [] | [
"django",
"django_templates",
"django_views",
"python"
] | stackoverflow_0002893198_django_django_templates_django_views_python.txt |
Q:
Python - Launch a Long Running Process from a Web App
I have a python web application that needs to launch a long running process. The catch is I don't want it to wait around for the process to finish. Just launch and finish.
I'm running on windows XP, and the web app is running under IIS (if that matters).
So far I tried popen but that didn't seem to work. It waited until the child process finished.
A:
Ok, I finally figured this out! This seems to work:
from subprocess import Popen
from win32process import DETACHED_PROCESS
pid = Popen(["C:\python24\python.exe", "long_run.py"],creationflags=DETACHED_PROCESS,shell=True).pid
print pid
print 'done'
#I can now close the console or anything I want and long_run.py continues!
Note: I added shell=True. Otherwise calling print in the child process gave me the error "IOError: [Errno 9] Bad file descriptor"
DETACHED_PROCESS is a Process Creation Flag that is passed to the underlying WINAPI CreateProcess function.
A:
Instead of directly starting processes from your webapp, you could write jobs into a message queue. A separate service reads from the message queue and runs the jobs. Have a look at Celery, a Distributed Task Queue written in Python.
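The decoupling idea can be sketched with just the standard library: the web handler only enqueues a job and returns, while a separate worker pulls jobs off the queue and runs them. This is a toy stand-in for what Celery does with a real broker; all names here are made up.

```python
import queue
import threading

jobs = queue.Queue()
results = []

def worker():
    # The separate "service": pull jobs off the queue and run them,
    # independently of any web request.
    while True:
        job = jobs.get()
        if job is None:   # sentinel: shut down
            break
        job()
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# The web handler just enqueues and returns immediately.
jobs.put(lambda: results.append("long task done"))

jobs.join()       # demo only -- a web app would not wait here
jobs.put(None)
t.join()
```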
A:
This almost works (from here):
from subprocess import Popen
pid = Popen(["C:\python24\python.exe", "long_run.py"]).pid
print pid
print 'done'
'done' will get printed right away. The problem is that the process above keeps running until long_run.py returns and if I close the process it kills long_run.py's process.
Surely there is some way to make a process completely independent of the parent process.
A:
subprocess.Popen does that.
| Python - Launch a Long Running Process from a Web App | I have a python web application that needs to launch a long running process. The catch is I don't want it to wait around for the process to finish. Just launch and finish.
I'm running on windows XP, and the web app is running under IIS (if that matters).
So far I tried popen but that didn't seem to work. It waited until the child process finished.
| [
"Ok, I finally figured this out! This seems to work:\nfrom subprocess import Popen\nfrom win32process import DETACHED_PROCESS\n\npid = Popen([\"C:\\python24\\python.exe\", \"long_run.py\"],creationflags=DETACHED_PROCESS,shell=True).pid\nprint pid\nprint 'done' \n#I can now close the console or anything I want and long_run.py continues!\n\nNote: I added shell=True. Otherwise calling print in the child process gave me the error \"IOError: [Errno 9] Bad file descriptor\"\nDETACHED_PROCESS is a Process Creation Flag that is passed to the underlying WINAPI CreateProcess function.\n",
"Instead of directly starting processes from your webapp, you could write jobs into a message queue. A separate service reads from the message queue and runs the jobs. Have a look at Celery, a Distributed Task Queue written in Python.\n",
"This almost works (from here):\nfrom subprocess import Popen\n\npid = Popen([\"C:\\python24\\python.exe\", \"long_run.py\"]).pid\nprint pid\nprint 'done'\n\n'done' will get printed right away. The problem is that the process above keeps running until long_run.py returns and if I close the process it kills long_run.py's process.\nSurely there is some way to make a process completely independent of the parent process.\n",
"subprocess.Popen does that.\n"
] | [
7,
2,
1,
0
] | [] | [] | [
"long_running_processes",
"popen",
"python",
"winapi",
"windows"
] | stackoverflow_0002970045_long_running_processes_popen_python_winapi_windows.txt |
Q:
can WTForms check two password is or not same when someone register
WTForms is a forms validation and rendering library for python web development
and I wrote this code to check whether two passwords are the same:
from wtforms import Form, BooleanField, TextField, PasswordField, validators

class SignUpForm(Form):
    username = TextField('Username', [validators.Length(min=4, max=25)])
    email = TextField('Email', [validators.Length(min=6, max=120), validators.Email()])
    password1 = PasswordField('Password1')
    password2 = PasswordField('Password2')

    def sameP(self):
        return self.password1.data == self.password2.data
but I want to know: can WTForms do this itself?
thanks
A:
use wtforms.validators.EqualTo.
It took less than a minute to find this in TFM, having never used this library before.
| can WTForms check two password is or not same when someone register | WTForms is a forms validation and rendering library for python web development
and I wrote this code to check whether two passwords are the same:
from wtforms import Form, BooleanField, TextField, PasswordField, validators

class SignUpForm(Form):
    username = TextField('Username', [validators.Length(min=4, max=25)])
    email = TextField('Email', [validators.Length(min=6, max=120), validators.Email()])
    password1 = PasswordField('Password1')
    password2 = PasswordField('Password2')

    def sameP(self):
        return self.password1.data == self.password2.data
but I want to know: can WTForms do this itself?
thanks
| [
"use wtforms.validators.EqualTo.\nIt took less than a minute to find this in TFM, having never used this library before.\n"
] | [
6
] | [] | [] | [
"google_app_engine",
"passwords",
"python"
] | stackoverflow_0002973149_google_app_engine_passwords_python.txt |
Q:
What kind of data do I pass into a Django Model.save() method?
Lets say that we are getting POSTed a form like this in Django:
rate=10
items= [23,12,31,52,83,34]
The items are primary keys of an Item model. I have a bunch of business logic that will run and create more items based on this data, the results of some db lookups, and some business logic. I want to put that logic into a save signal or an overridden Model.save() method of another model (let's call it Inventory). The business logic will run when I create a new Inventory object using this form data. Inventory will look like this:
class Inventory(models.Model):
    picked_items = models.ManyToManyField(Item, related_name="items_picked_set")
    calculated_items = models.ManyToManyField(Item, related_name="items_calculated_set")
    rate = models.DecimalField()
    ... other fields here ...
New calculated_items will be created based on the passed in items which will be stored as picked_items.
My question is this: is it better for the save() method on this model to accept:
the request object (I don't really like this coupling)
the form data as arguments or kwargs (a list of primary keys and the other form fields)
a list of Items (The caller form or view will lookup the list of Items and create a list as well as pass in the other form fields)
some other approach?
I know this is a bit subjective, but I was wondering what the general idea is. I've looked through a lot of code but I'm having a hard time finding a pattern I like.
Clarification:
OK, so the consensus is that it should go into a different function on the model, something like inventory.calculate(...) which will then create everything, do the business logic, etc... That's good to know. My question remains: where is the best place to look up the form data into db objects? Should the caller of this function convert primary keys to database models or should the model methods accept primary keys and do it themselves? It's something that I want to do the same way project-wide.
Clarification 2:
OK so now there is some disagreement as to whether overriding save is ok or not.
When you get a form submission for a simple CRUD type operation, you pass the models and values as arguments to Model.objects.create(...) or override save or use signals or whatever.
I think the core of my question is this:
If the form submission has related models used for business logic, then you need to write some business logic into your model layer. When you do this, where should it go and should that method accept a list of objects or a list of id's? Should the model API's accept objects or id's?
A:
OK so the first two answers I got have now been contradicted by others. I've been researching this and I'm going to take a stab at answering it myself. Please vote if you think this is correct and/or comment if you disagree with my reasoning.
Methods on models should accept objects and lists of objects, not ids as int/long or lists of id's or anything like that. This is because it will probably be called from a view or form and they have access to full objects from cleaned_data dict. The create() method on manager classes are another example where django itself accepts objects.
The caller of the model layer methods should look up and convert the id's into full objects.
You can override save() but if you do, you should be careful to accept *args and **kwargs
If the models span applications, you should consider signals instead of overriding save
Don't try to get clever overriding the model manager create method. It won't get called if the view layer creates a new object and saves it. If you need to do extra processing before the save, you can override save or __init__ or catch a signal. If you override __init__ you can check for a pk to determine if it exists in the db yet or not.
I'm going to put my create code in a separate method for now until I figure out which of the techniques I like best.
I think this is a good set of guidelines for adding methods to the model layer. Anything I missed?
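Guidelines 1 and 2 can be illustrated with plain-Python stand-ins for the models (all names here are hypothetical and no Django is involved): the model-layer method accepts objects, and the caller is the one that turns posted ids into objects.

```python
class Item(object):
    def __init__(self, pk):
        self.pk = pk

class Inventory(object):
    def __init__(self, rate, picked_items):
        self.rate = rate
        self.picked_items = picked_items

    @classmethod
    def create_from_picked(cls, rate, picked_items):
        # Model-layer API: accepts Item objects, never raw ids.
        inv = cls(rate, picked_items)
        # ... business logic building calculated_items would go here ...
        return inv

# The caller (view/form layer) converts the posted ids into objects.
items_by_pk = {pk: Item(pk) for pk in [23, 12, 31, 52]}
posted_ids = [23, 31]
picked = [items_by_pk[pk] for pk in posted_ids]
inv = Inventory.create_from_picked(10, picked)
```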
| What kind of data do I pass into a Django Model.save() method? | Lets say that we are getting POSTed a form like this in Django:
rate=10
items= [23,12,31,52,83,34]
The items are primary keys of an Item model. I have a bunch of business logic that will run and create more items based on this data, the results of some db lookups, and some business logic. I want to put that logic into a save signal or an overridden Model.save() method of another model (let's call it Inventory). The business logic will run when I create a new Inventory object using this form data. Inventory will look like this:
class Inventory(models.Model):
    picked_items = models.ManyToManyField(Item, related_name="items_picked_set")
    calculated_items = models.ManyToManyField(Item, related_name="items_calculated_set")
    rate = models.DecimalField()
    ... other fields here ...
New calculated_items will be created based on the passed in items which will be stored as picked_items.
My question is this: is it better for the save() method on this model to accept:
the request object (I don't really like this coupling)
the form data as arguments or kwargs (a list of primary keys and the other form fields)
a list of Items (The caller form or view will lookup the list of Items and create a list as well as pass in the other form fields)
some other approach?
I know this is a bit subjective, but I was wondering what the general idea is. I've looked through a lot of code but I'm having a hard time finding a pattern I like.
Clarification:
OK, so the consensus is that it should go into a different function on the model, something like inventory.calculate(...) which will then create everything, do the business logic, etc... That's good to know. My question remains: where is the best place to look up the form data into db objects? Should the caller of this function convert primary keys to database models or should the model methods accept primary keys and do it themselves? It's something that I want to do the same way project-wide.
Clarification 2:
OK so now there is some disagreement as to whether overriding save is ok or not.
When you get a form submission for a simple CRUD type operation, you pass the models and values as arguments to Model.objects.create(...) or override save or use signals or whatever.
I think the core of my question is this:
If the form submission has related models used for business logic, then you need to write some business logic into your model layer. When you do this, where should it go and should that method accept a list of objects or a list of id's? Should the model API's accept objects or id's?
| [
"OK so the first two answers I got have now been contradicted by others. I've been researching this and I'm going to take a stab at answering it myself. Please vote if you think this is correct and/or comment if you disagree with my reasoning.\n\nMethods on models should accept objects and lists of objects, not ids as int/long or lists of id's or anything like that. This is because it will probably be called from a view or form and they have access to full objects from cleaned_data dict. The create() method on manager classes are another example where django itself accepts objects.\nThe caller of the model layer methods should look up and convert the id's into full objects.\nYou can override save() but if you do, you should be careful to accept args and **kwargs\nIf the models span applications, you should consider signals instead of overriding save\nDon't try to get clever overriding the model manager create method. It wont get called if the view layer creates a new object and saves it. If you need to do extra processing before the save, you can override save or __init__ or catch a signal. if you override __init__ you can check for a pk to determine if it exists in the db yet or not.\n\nI'm going to put my create code in a separate method for now until I figure out which of the techniques I like best.\nI think this is a good set of guidelines for adding methods to the model layer. Anything I missed?\n"
] | [
1
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0002947397_django_django_models_python.txt |
Q:
Using NetBeans for Python GUI development
Is NetBeans recommended for developing a GUI for a Python app?
Does it have a form/screen builder for Python apps, like Dabo?
A:
Although it isn't "built-in" to Netbeans, I've found Qt Designer to be an excellent tool for building GUIs for Python. Of course, this only works if you're using PyQt or PySide but it's kept me quite happy for years. According to the Netbeans Docs, integrated Qt Designer support is available. I haven't tried it personally to see if it works from within a Python project but even if it doesn't I doubt the annoyance of launching Designer by hand would be sufficient to dissuade you from using an otherwise excellent tool.
A:
I haven't seen a built-in GUI builder for CPython. You could use Jython + Matisse (the built-in Netbeans Java-based GUI builder).
A:
Oracle is not going to support dynamic languages in netbeans. In their marketing speak "dynamic languages on netbeans will be supported by the community" i.e. not by Oracle.
See this webcast at time 11:55.
For the record, I used to use netbeans for python development and thought it was excellent but have now moved to eclipse + pydev.
Netbeans does have the "netbeans GUI Builder" but that is for the Java (or Jython) platform and does not support the common GUI frameworks used in python such as wxPython, Qt or Tkinter
| Using NetBeans for Python GUI development | Is NetBeans recommended for developing a GUI for a Python app?
Does it have a form/screen builder for Python apps, like Dabo?
| [
"Although it isn't \"built-in\" to Netbeans, I've found Qt Designer to be an excellent tool for building GUIs for Python. Of course, this only works if you're using PyQt or PySide but it's kept me quite happy for years. According to the Netbeans Docs, integrated Qt Designer support is available. I haven't tried it personally to see if it works from within a Python project but even if it doesn't I doubt the annoyance of launching Designer by hand would be sufficient to disuade you from using an otherwise excellent tool.\n",
"I haven't seen a built-in GUI builder for CPython. You could use Jython + Matisse (the built-in Netbeans Java-based GUI builder).\n",
"Oracle is not going to support dynamic languages in netbeans. In their marketing speak \"dynamic languages on netbeans will be supported by the community\" i.e. not by Oracle.\nSee this webcast at time 11:55.\nFor the record, I used to use netbeans for python development and thought it was excellent but have now moved to eclipse + pydev.\nNetbeans does have the \"netbeans GUI Builder\" but that is for the Java (or Jython) platform and does not support the common GUI frameworks used in python such as xwpython, Qt or Tkinter\n"
] | [
4,
2,
0
] | [
"Is Google broken?\nThere's the first hit I got from Googling \"Netbeans Python\"\nhttp://netbeans.org/features/python/index.html\nI don't know what kind of \"recommendation\" you're looking for, but it's certainly supported. \n"
] | [
-1
] | [
"netbeans",
"python"
] | stackoverflow_0002971094_netbeans_python.txt |
Q:
Python - making counters, making loops?
I am having some trouble with a piece of code below:
Input: li is a nested list as below:
li = [['>0123456789 mouse gene 1\n', 'ATGTTGGGTT/CTTAGTTG\n', 'ATGGGGTTCCT/A\n'], ['>9876543210 mouse gene 2\n', 'ATTTGGTTTCCT\n', 'ATTCAATTTTAAGGGGGGGG\n']]
Using the function below, my desired output is simply the 2nd to the 9th digits following '>' under the condition that the number of '/' present in the entire sublist is > 1.
Instead, my code gives the digits to all entries. Also, it gives them multiple times. I therefore assume something is wrong with my counter and my for loop. I can't quite figure this out.
Any help, greatly appreciated.
import os
cwd = os.getcwd()

def func_one():
    outp = open('something.txt', 'w') #output file
    li = []
    for i in os.listdir(cwd):
        if i.endswith('.ext'):
            inp = open(i, 'r').readlines()
            li.append(inp)
    count = 0
    lis = []
    for i in li:
        for j in i:
            for k in j[1:]: #ignore first entry in sublist
                if k == '/':
                    count += 1
                    if count > 1:
                        lis.append(i[0][1:10])
                        next_func(lis, outp)
Thanks,
S :-)
A:
Your indentation is possibly wrong, you should check count > 1 within the for j in i loop, not within the one that checks every single character in j[1:].
Also, here's a much easier way to do the same thing:
def count_slashes(items):
    return sum(item.count('/') for item in items)

for item in li:
    if count_slashes(item[1:]) > 1:
        print item[0][1:10]
Or, if you need the IDs in a list:
result = [item[0][1:10] for item in li if count_slashes(item[1:]) > 1]
Python list comprehensions and generator expressions are really powerful tools, try to learn how to use them as it makes your life much simpler. The count_slashes function above uses a generator expression, and my last code snippet uses a list comprehension to construct the result list in a nice and concise way.
A:
Tamás has suggested a good solution, although it uses a very different style of coding than you do. Still, since your question was "I am having some trouble with a piece of code below", I think something more is called for.
How to avoid these problems in the future
You've made several mistakes in your approach to getting from "I think I know how to write this code" to having actual working code.
You are using meaningless names for your variables which makes it nearly impossible to understand your code, including for yourself. The thought "but I know what each variable means" is obviously wrong, otherwise you would have managed to solve this yourself. Notice below, where I fix your code, how difficult it is to describe and discuss your code.
You are trying to solve the whole problem at once instead of breaking it down into pieces. Write small functions or pieces of code that do just one thing, one piece at a time. For each piece you work on, get it right and test it to make sure it is right. Then go on writing other pieces which perhaps use pieces you've already got. I'm saying "pieces" but usually this means functions, methods or classes.
Fixing your code
That is what you asked for and nobody else has done so.
You need to move the count = 0 line to after the for i in li: line (indented appropriately). This will reset the counter for every sub-list. Second, once you have appended to lis and run your next_func, you need to break out of the for k in j[1:] loop and the encompassing for j in i: loop.
Here's a working code example (without the next_func but you can add that next to the append):
>>> li = [['>0123456789 mouse gene 1\n', 'ATGTTGGGTT/CTTAGTTG\n', 'ATGGGGTTCCT/A\n'], ['>9876543210 mouse gene 2\n', 'ATTTGGTTTCCT\n', 'ATTCAATTTTAAGGGGGGGG\n']]
>>> lis = []
>>> for i in li:
        count = 0
        for j in i:
            break_out = False
            for k in j[1:]:
                if k == '/':
                    count += 1
                    if count > 1:
                        lis.append(i[0][1:10])
                        break_out = True
                        break
            if break_out:
                break
>>> lis
['012345678']
Re-writing you code to make it readable
This is so you see what I meant in the beginning of my answer.
>>> def count_slashes(gene):
        "count the number of '/' characters in the DNA sequences of the gene."
        count = 0
        dna_sequences = gene[1:]
        for sequence in dna_sequences:
            count += sequence.count('/')
        return count

>>> def get_gene_name(gene):
        "get the name of the gene"
        gene_title_line = gene[0]
        gene_name = gene_title_line[1:10]
        return gene_name

>>> genes = [['>0123456789 mouse gene 1\n', 'ATGTTGGGTT/CTTAGTTG\n', 'ATGGGGTTCCT/A\n'], ['>9876543210 mouse gene 2\n', 'ATTTGGTTTCCT\n', 'ATTCAATTTTAAGGGGGGGG\n']]
>>> results = []
>>> for gene in genes:
        if count_slashes(gene) > 1:
            results.append(get_gene_name(gene))

>>> results
['012345678']
>>>
A:
import glob

lis = []
with open('output.txt', 'w') as outfile:
    for file in glob.iglob('*.ext'):
        content = open(file).read()
        if content.partition('\n')[2].count('/') > 1:
            lis.append(content[1:10])
            next_func(lis, outfile)
The reason you get the digits for all entries is that you're not resetting the counter.
| Python - making counters, making loops? | I am having some trouble with a piece of code below:
Input: li is a nested list as below:
li = [['>0123456789 mouse gene 1\n', 'ATGTTGGGTT/CTTAGTTG\n', 'ATGGGGTTCCT/A\n'], ['>9876543210 mouse gene 2\n', 'ATTTGGTTTCCT\n', 'ATTCAATTTTAAGGGGGGGG\n']]
Using the function below, my desired output is simply the 2nd to the 9th digits following '>' under the condition that the number of '/' present in the entire sublist is > 1.
Instead, my code gives the digits to all entries. Also, it gives them multiple times. I therefore assume something is wrong with my counter and my for loop. I can't quite figure this out.
Any help, greatly appreciated.
import os
cwd = os.getcwd()

def func_one():
    outp = open('something.txt', 'w') #output file
    li = []
    for i in os.listdir(cwd):
        if i.endswith('.ext'):
            inp = open(i, 'r').readlines()
            li.append(inp)
    count = 0
    lis = []
    for i in li:
        for j in i:
            for k in j[1:]: #ignore first entry in sublist
                if k == '/':
                    count += 1
                    if count > 1:
                        lis.append(i[0][1:10])
                        next_func(lis, outp)
Thanks,
S :-)
| [
"Your indentation is possibly wrong, you should check count > 1 within the for j in i loop, not within the one that checks every single character in j[1:].\nAlso, here's a much easier way to do the same thing:\ndef count_slashes(items):\n return sum(item.count('/') for item in items)\n\nfor item in li:\n if count_slashes(item[1:]) > 1:\n print item[0][1:10]\n\nOr, if you need the IDs in a list:\nresult = [item[0][1:10] for item in li if count_slashes(item[1:]) > 1]\n\nPython list comprehensions and generator expressions are really powerful tools, try to learn how to use them as it makes your life much simpler. The count_slashes function above uses a generator expression, and my last code snippet uses a list comprehension to construct the result list in a nice and concise way.\n",
"Tamás has suggested a good solution, although it uses a very different style of coding than you do. Still, since your question was \"I am having some trouble with a piece of code below\", I think something more is called for.\nHow to avoid these problems in the future\nYou've made several mistakes in your approach to getting from \"I think I know how to write this code\" to having actual working code.\nYou are using meaningless names for your variables which makes it nearly impossible to understand your code, including for yourself. The thought \"but I know what each variable means\" is obviously wrong, otherwise you would have managed to solve this yourself. Notice below, where I fix your code, how difficult it is to describe and discuss your code.\nYou are trying to solve the whole problem at once instead of breaking it down into pieces. Write small functions or pieces of code that do just one thing, one piece at a time. For each piece you work on, get it right and test it to make sure it is right. Then go on writing other pieces which perhaps use pieces you've already got. I'm saying \"pieces\" but usually this means functions, methods or classes.\nFixing your code\nThat is what you asked for and nobody else has done so.\nYou need to move the count = 0 line to after the for i in li: line (indented appropriately). This will reset the counter for every sub-list. 
Second, once you have appended to lis and run your next_func, you need to break out of the for k in j[1:] loop and the encompassing for j in i: loop.\nHere's a working code example (without the next_func but you can add that next to the append):\n>>> li = [['>0123456789 mouse gene 1\\n', 'ATGTTGGGTT/CTTAGTTG\\n', 'ATGGGGTTCCT/A\\n'], ['>9876543210 mouse gene 2\\n', 'ATTTGGTTTCCT\\n', 'ATTCAATTTTAAGGGGGGGG\\n']]\n>>> lis = []\n>>> for i in li:\n count = 0\n for j in i:\n break_out = False\n for k in j[1:]:\n if k == '/':\n count += 1\n if count > 1:\n lis.append(i[0][1:10])\n break_out = True\n break\n if break_out:\n break\n\n>>> lis\n['012345678']\n\nRe-writing you code to make it readable\nThis is so you see what I meant in the beginning of my answer.\n>>> def count_slashes(gene):\n \"count the number of '/' character in the DNA sequences of the gene.\"\n count = 0\n dna_sequences = gene[1:]\n for sequence in dna_sequences:\n count += sequence.count('/')\n return count\n>>> def get_gene_name(gene):\n \"get the name of the gene\"\n gene_title_line = gene[0]\n gene_name = gene_title_line[1:10]\n return gene_name\n>>> genes = [['>0123456789 mouse gene 1\\n', 'ATGTTGGGTT/CTTAGTTG\\n', 'ATGGGGTTCCT/A\\n'], ['>9876543210 mouse gene 2\\n', 'ATTTGGTTTCCT\\n', 'ATTCAATTTTAAGGGGGGGG\\n']]\n>>> results = []\n>>> for gene in genes:\n if count_slashes(gene) > 1:\n results.append(get_gene_name(gene))\n\n>>> results\n['012345678']\n>>> \n\n",
"import itertools\nimport glob\n\nlis = []\nwith open('output.txt', 'w') as outfile:\n for file in glob.iglob('*.ext'):\n content = open(file).read()\n if content.partition('\\n')[2].count('/') > 1:\n lis.append(content[1:10])\n next_func(lis, outfile)\n\nThe reason you digits to all entries, is because you're not resetting the counter.\n"
] | [
9,
8,
0
] | [] | [] | [
"counter",
"loops",
"python"
] | stackoverflow_0002973926_counter_loops_python.txt |
Q:
reading floating-point numbers with 1.#QNAN values in python
Does anyone know of a python string-to-float parser that can cope with MSVC nan numbers (1.#QNAN)? Currently I'm just using float(str) which at least copes with "nan".
I'm using a python script to read the output of a C++ program (runs under linux/mac/win platforms) and the script barfs up when reading these values. (I did already find a C++ library to output the values consistently across platforms, but sometimes have to compare past results, so this still occasionally pops up.)
A:
Since you have to deal with legacy output files, I see no other possibility but writing a robust_float function:
def robust_float(s):
try:
return float(s)
except ValueError:
if 'nan' in s.lower():
return float('nan')
else:
raise
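For reference, a quick sanity check (repeating the function so the snippet is self-contained): float('1.#QNAN') raises ValueError, and since the lowercased token '1.#qnan' contains the substring 'nan', the fallback branch returns NaN.

```python
import math

def robust_float(s):
    try:
        return float(s)
    except ValueError:
        if 'nan' in s.lower():
            return float('nan')
        else:
            raise

print(math.isnan(robust_float('1.#QNAN')))  # True
print(robust_float('3.14'))                 # 3.14
```

Note that other MSVC tokens such as -1.#IND do not contain 'nan' and would still raise, so the fallback would need extending if those appear in the legacy files too.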
| reading floating-point numbers with 1.#QNAN values in python | Does anyone know of a python string-to-float parser that can cope with MSVC nan numbers (1.#QNAN)? Currently I'm just using float(str) which at least copes with "nan".
I'm using a python script to read the output of a C++ program (runs under linux/mac/win platforms) and the script barfs up when reading these values. (I did already find a C++ library to output the values consistently across platforms, but sometimes have to compare past results, so this still occasionally pops up.)
| [
"Since you have to deal with legacy output files, I see no other possibility but writing a robust_float function:\ndef robust_float(s):\n try:\n return float(s)\n except ValueError:\n if 'nan' in s.lower():\n return float('nan')\n else:\n raise\n\n"
] | [
2
] | [] | [] | [
"cross_platform",
"nan",
"python",
"visual_c++"
] | stackoverflow_0002974124_cross_platform_nan_python_visual_c++.txt |
Q:
Available disk space on an SMB share, via Python
Does anyone know a way to get the amount of space available on a Windows (Samba) share via Python 2.6 with its standard library? (also running on Windows)
e.g.
>>> os.free_space("\\myshare\folder") # return free disk space, in bytes
1234567890
A:
If PyWin32 is available:
free, total, totalfree = win32file.GetDiskFreeSpaceEx(r'\\server\share')
Where free is the amount of free space available to the current user, and totalfree is the total amount of free space. Relevant documentation: PyWin32 docs, MSDN.
If PyWin32 is not guaranteed to be available, then for Python 2.5 and higher there is the ctypes module in the stdlib. The same function, using ctypes:
import sys
from ctypes import *
c_ulonglong_p = POINTER(c_ulonglong)
_GetDiskFreeSpace = windll.kernel32.GetDiskFreeSpaceExW
_GetDiskFreeSpace.argtypes = [c_wchar_p, c_ulonglong_p, c_ulonglong_p, c_ulonglong_p]
def GetDiskFreeSpace(path):
if not isinstance(path, unicode):
path = path.decode('mbcs') # this is windows only code
free, total, totalfree = c_ulonglong(0), c_ulonglong(0), c_ulonglong(0)
if not _GetDiskFreeSpace(path, pointer(free), pointer(total), pointer(totalfree)):
raise WindowsError
return free.value, total.value, totalfree.value
Could probably be done better but I'm not really familiar with ctypes.
A:
The standard library has the os.statvfs() function, but unfortunately it's only available on Unix-like platforms.
In case there is some cygwin-python maybe it would work there?
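Worth noting for readers on later versions (this postdates the question's Python 2.6 constraint): from Python 3.3 the standard library offers a cross-platform shutil.disk_usage(), which on Windows also accepts UNC share paths. A minimal sketch, using a local directory so it runs anywhere:

```python
import os
import shutil

# shutil.disk_usage() (Python 3.3+) returns a named tuple (total, used, free).
# On Windows it also accepts UNC share paths such as r"\\myshare\folder";
# here the current directory stands in so the snippet is portable.
usage = shutil.disk_usage(os.getcwd())
print(usage.free)  # free disk space, in bytes
```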
| Available disk space on an SMB share, via Python | Does anyone know a way to get the amount of space available on a Windows (Samba) share via Python 2.6 with its standard library? (also running on Windows)
e.g.
>>> os.free_space("\\myshare\folder") # return free disk space, in bytes
1234567890
| [
"If PyWin32 is available:\nfree, total, totalfree = win32file.GetDiskFreeSpaceEx(r'\\\\server\\share')\n\nWhere free is a amount of free space available to the current user, and totalfree is amount of free space total. Relevant documentation: PyWin32 docs, MSDN.\nIf PyWin32 is not guaranteed to be available, then for Python 2.5 and higher there is ctypes module in stdlib. Same function, using ctypes:\nimport sys\nfrom ctypes import *\n\nc_ulonglong_p = POINTER(c_ulonglong)\n\n_GetDiskFreeSpace = windll.kernel32.GetDiskFreeSpaceExW\n_GetDiskFreeSpace.argtypes = [c_wchar_p, c_ulonglong_p, c_ulonglong_p, c_ulonglong_p]\n\ndef GetDiskFreeSpace(path):\n if not isinstance(path, unicode):\n path = path.decode('mbcs') # this is windows only code\n free, total, totalfree = c_ulonglong(0), c_ulonglong(0), c_ulonglong(0)\n if not _GetDiskFreeSpace(path, pointer(free), pointer(total), pointer(totalfree)):\n raise WindowsError\n return free.value, total.value, totalfree.value\n\nCould probably be done better but I'm not really familiar with ctypes.\n",
"The standard library has the os.statvfs() function, but unfortunately it's only available on Unix-like platforms.\nIn case there is some cygwin-python maybe it would work there?\n"
] | [
8,
0
] | [] | [] | [
"python",
"samba",
"windows"
] | stackoverflow_0002973480_python_samba_windows.txt |
Q:
Multiple Objects of the same class in Python
I have a bunch of Objects from the same Class in Python.
I've decided to put each object in a different file since it's
easier to manage them (If I plan to add more objects or edit them individually)
However, I'm not sure how to run through all of them; they are in another Package
So if I look at Netbeans I have TopLevel... and there's also a Package named Shapes
in Shapes I have Ball.py, Circle.py, Triangle.py (inside the files is a call for a constructor with the details of the specific shape) and they are all from class GraphicalShape
That is configured in GraphicalShape.py in the TopLevel Package.
Now, I have also on my Toplevel Package a file named newpythonproject.py, which would start the
process of calling each shape and doing things with it, how do I run through all of the shapes?
also: Is it a good way to do this?
p.s. never mind the uppercase lowercase stuff...
Just to clarify, I added a picture of the Project Tree
http://i47.tinypic.com/2i1nomw.png
A:
It seems that you're misunderstanding the Python jargon. The Python term "object" means an actual run-time instance of a class. As far as I can tell, you have "sub-classes" of the Shape class called ball, circle and triangle. Note that a sub-class is also a class. You are keeping the code for each such sub-class in a separate file, which is fine.
I think you're getting mixed up because you're focusing on the file layout of your project far too early. With Python it is often easier to start with just one file, writing everything you need in that file (functions, classes, etc.). Just get things working first. Later, when you've got working code and you just want to split a part of it into another file for organizational reasons, it will be much more obvious (to you!) how this should be done.
In Python, every class does not have to be defined in its own separate file. You can do this if you like, but it is not compulsory.
A:
it's not clear what you mean when you say "run through them all".
If you mean "import them for use", then you should:
Make sure the parent folder of shapes is on the PYTHONPATH environment variable; then use
from shapes import ball.
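If instead "run through them all" means iterating over the shape instances, a common pattern is to collect them in a list and loop. A minimal sketch (GraphicalShape here is a stand-in defined inline; in the real project each instance would come from its own module):

```python
class GraphicalShape(object):
    def __init__(self, name):
        self.name = name

# in the real project these would be imported from ball.py, circle.py, triangle.py
shapes = [GraphicalShape('ball'), GraphicalShape('circle'), GraphicalShape('triangle')]

for shape in shapes:          # "run through all of the shapes"
    print(shape.name)

names = [s.name for s in shapes]
```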
| Multiple Objects of the same class in Python | I have a bunch of Objects from the same Class in Python.
I've decided to put each object in a different file since it's
easier to manage them (If I plan to add more objects or edit them individually)
However, I'm not sure how to run through all of them, they are in another Package
So if I look at Netbeans I have TopLevel... and there's also a Package named Shapes
in Shapes I have Ball.py, Circle.py, Triangle.py (inside the files is a call for a constructor with the details of the specific shape) and they are all from class GraphicalShape
That is configured in GraphicalShape.py in the TopLevel Package.
Now, I have also on my Toplevel Package a file named newpythonproject.py, which would start the
process of calling each shape and doing things with it, how do I run through all of the shapes?
also: Is it a good way to do this?
p.s. never mind the uppercase lowercase stuff...
Just to clarify, I added a picture of the Project Tree
http://i47.tinypic.com/2i1nomw.png
| [
"It seems that you're misunderstanding the Python jargon. The Python term \"object\" means an actual run-time instance of a class. As far as I can tell, you have \"sub-classes\" of the Shape class called ball, circle and triangle. Note that a sub-class is also a class. You are keeping the code for each such sub-class in a separate file, which is fine.\nI think you're getting mixed up because you're focusing on the file layout of your project far too early. With Python it is often easier to start with just one file, writing everything you need in that file (functions, classes, etc.). Just get things working first. Later, when you've got working code and you just want to split a part of it into another file for organizational reasons, it will be much more obvious (to you!) how this should be done.\nIn Python, every class does not have to be defined in its own separate file. You can do this if you like, but it is not compulsory.\n",
"it's not clear what you mean when you say \"run through them all\".\nIf you mean \"import them for use\", then you should:\n\nMake sure the parent folder of shapes is on the PYTHONPATH environment variable; then use\nfrom shapes import ball.\n\n"
] | [
2,
0
] | [] | [] | [
"oop",
"package",
"python"
] | stackoverflow_0002974604_oop_package_python.txt |
Q:
Does Python copy value or reference upon object instantiation?
A simple question, perhaps, but I can't quite phrase my Google query to find the answer here. I've had the habit of making copies of objects when I pass them into object constructors, like so:
...
def __init__(self, name):
self._name = name[:]
...
However, when I ran the following test code, it appears to not be necessary, that Python is making deep copies of the object values upon object instantiation:
>>> class Candy(object):
... def __init__(self, flavor):
... self.flavor = flavor
...
>>> flav = "cherry"
>>> a = Candy(flav)
>>> a
<__main__.Candy object at 0x00CA4670>
>>> a.flavor
'cherry'
>>> flav += ' and grape'
>>> flav
'cherry and grape'
>>> a.flavor
'cherry'
So, what's the real story here? Thanks!
EDIT:
Thanks to @Olivier for his great answer. The following code is a better example, showing that Python passes references rather than copying values:
>>> flav = ['a','b']
>>> a = Candy(flav)
>>> a.flavor
['a', 'b']
>>> flav[1] = 'c'
>>> flav
['a', 'c']
>>> a.flavor
['a', 'c']
A:
It is because strings are immutable.
The operator +=, rather confusingly, actually reassigns the variable it is applied to, if the object is immutable:
s = 'a'
ids = id(s)
s += 'b'
ids == id(s) # False, because s was reassigned to a new object
So, in your case, in the beginning, both flav and a.flavor point to the same string object:
flav --------\
'cherry'
a.flavor ----/
But when you write flav += 'and grape' the variable flav gets reassigned to a new string object:
flav --------> 'cherry and grape'
a.flavor ----> 'cherry' # <-- that string object never changes
It is confusing, because usually, when you call an operator on a variable, it doesn't change the variable. But just in the case of an immutable object, it does reassign the variable.
So the final answer to your question is, yes, it makes sense to copy the objects upon instantiation, especially if you are expecting a mutable object (which is often the case). If the object is immutable, copying it does no harm anyway.
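The rebinding behaviour described above can be verified directly with id(); this sketch contrasts += on an immutable str with += on a mutable list:

```python
s = 'cherry'
before = id(s)
s += ' and grape'          # str is immutable: += builds a new object and rebinds s
print(id(s) == before)     # False

lst = ['a', 'b']
before = id(lst)
lst += ['c']               # list is mutable: += extends the same object in place
print(id(lst) == before)   # True
```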
A:
it appears to not be necessary
Appears? Your question is entirely about design and meaning. This is not a preference or habit question.
Does the class contract include the ability to modify a mutable argument? If so, do NOT make a copy.
Does the class contract assert that a mutable argument will not be modified? If so, you MUST make a copy.
Your question is answered entirely by the contract definition for the class.
| Does Python copy value or reference upon object instantiation? | A simple question, perhaps, but I can't quite phrase my Google query to find the answer here. I've had the habit of making copies of objects when I pass them into object constructors, like so:
...
def __init__(self, name):
self._name = name[:]
...
However, when I ran the following test code, it appears to not be necessary, that Python is making deep copies of the object values upon object instantiation:
>>> class Candy(object):
... def __init__(self, flavor):
... self.flavor = flavor
...
>>> flav = "cherry"
>>> a = Candy(flav)
>>> a
<__main__.Candy object at 0x00CA4670>
>>> a.flavor
'cherry'
>>> flav += ' and grape'
>>> flav
'cherry and grape'
>>> a.flavor
'cherry'
So, what's the real story here? Thanks!
EDIT:
Thanks to @Olivier for his great answer. The following code is a better example, showing that Python passes references rather than copying values:
>>> flav = ['a','b']
>>> a = Candy(flav)
>>> a.flavor
['a', 'b']
>>> flav[1] = 'c'
>>> flav
['a', 'c']
>>> a.flavor
['a', 'c']
| [
"It is because strings are immutable.\nThe operator +=, rather confusingly, actually reassigns the variable it is applied to, if the object is immutable:\ns = 'a'\nids = id(s)\ns += 'b'\nids == id(s) # False, because s was reassigned to a new object\n\nSo, in your case, in the beginning, both flav and a.flavor point to the same string object:\nflav --------\\\n 'cherry'\na.flavor ----/\n\nBut when you write flav += 'and grape' the variable flav gets reassigned to a new string object:\nflav --------> 'cherry and grape'\na.flavor ----> 'cherry' # <-- that string object never changes\n\nIt is confusing, because usually, when you call an operator on a variable, it doesn't change the variable. But just in the case of an immutable object, it does reassign the variable.\nSo the final answer to your question is, yes, it makes sense to copy the objects upon instantiation, especially if you are expecting a mutable object (which is often the case). It the object was immutable, it will not harm to copy it anyway.\n",
"\nit appears to not be necessary\n\nAppears? Your question is entirely about design and meaning. This is not a preference or habit question.\nDoes the class contract include the ability to modify a mutable argument? If so, do NOT make a copy.\nDoes the class contract assert that a mutable argument will not be modified? If so, you MUST make a copy.\nYour question is answered entirely by the contract definition for the class.\n"
] | [
14,
1
] | [] | [] | [
"instantiation",
"language_design",
"object",
"python"
] | stackoverflow_0002974679_instantiation_language_design_object_python.txt |
Q:
Why is win32com so much slower than xlrd?
I have the same code, written using win32com and xlrd. xlrd preforms the algorithm in less than a second, while win32com takes minutes.
Here is the win32com:
def makeDict(ws):
"""makes dict with key as header name,
value as tuple of column begin and column end (inclusive)"""
wsHeaders = {} # key is header name, value is column begin and end inclusive
for cnum in xrange(9, find_last_col(ws)):
if ws.Cells(7, cnum).Value:
wsHeaders[str(ws.Cells(7, cnum).Value)] = (cnum, find_last_col(ws))
for cend in xrange(cnum + 1, find_last_col(ws)): #finds end column
if ws.Cells(7, cend).Value:
wsHeaders[str(ws.Cells(7, cnum).Value)] = (cnum, cend - 1)
break
return wsHeaders
And the xlrd
def makeDict(ws):
"""makes dict with key as header name,
value as tuple of column begin and column end (inclusive)"""
wsHeaders = {} # key is header name, value is column begin and end inclusive
for cnum in xrange(8, ws.ncols):
if ws.cell_value(6, cnum):
wsHeaders[str(ws.cell_value(6, cnum))] = (cnum, ws.ncols)
for cend in xrange(cnum + 1, ws.ncols):#finds end column
if ws.cell_value(6, cend):
wsHeaders[str(ws.cell_value(6, cnum))] = (cnum, cend - 1)
break
return wsHeaders
A:
(0) You asked "Why is win32com so much slower than xlrd?" ... this question is a bit like "Have you stopped beating your wife?" --- it is based on a presupposition that may not be true; win32com was written in C by a brilliant programmer, but xlrd was written in pure Python by an average programmer. The real difference is that win32com has to call COM which involves inter-process communication and was written by you-know-who, whereas xlrd is reading the Excel file directly. Moreover, there's a fourth party in the scenario: YOU. Please read on.
(1) You don't show us the source of the find_last_col() function that you use repetitively in the COM code. In the xlrd code, you are happy to use the same value (ws.ncols) all the time. So in the COM code, you should call find_last_col(ws) ONCE and thereafter use the returned result. Update See answer to your separate question on how to get the equivalent of xlrd's Sheet.ncols from COM.
(2) Accessing each cell value TWICE is slowing down both codes. Instead of
if ws.cell_value(6, cnum):
wsHeaders[str(ws.cell_value(6, cnum))] = (cnum, ws.ncols)
try
value = ws.cell_value(6, cnum)
if value:
wsHeaders[str(value)] = (cnum, ws.ncols)
Note: there are 2 cases of this in each code snippet.
(3) It is not at all apparent what the purpose of your nested loops is, but there does seem to be some redundant computation, involving redundant fetches from COM. If you care to tell us what you are trying to achieve, with examples, we may be able to help you make it run much faster. At the very least, extracting the values from COM once then processing them in nested loops in Python should be faster. How many columns are there?
Update 2 Meanwhile the little elves took to your code with the proctoscope, and came up with the following script:
tests= [
"A/B/C/D",
"A//C//",
"A//C//E",
"A///D",
"///D",
]
for test in tests:
print "\nTest:", test
row = test.split("/")
ncols = len(row)
# modelling the OP's code
# (using xlrd-style 0-relative column indexes)
d = {}
for cnum in xrange(ncols):
if row[cnum]:
k = row[cnum]
v = (cnum, ncols) #### BUG; should be ncols - 1 ("inclusive")
print "outer", cnum, k, '=>', v
d[k] = v
for cend in xrange(cnum + 1, ncols):
if row[cend]:
k = row[cnum]
v = (cnum, cend - 1)
print "inner", cnum, cend, k, '=>', v
d[k] = v
break
print d
# modelling a slightly better algorithm
d = {}
prev = None
for cnum in xrange(ncols):
key = row[cnum]
if key:
d[key] = [cnum, cnum]
prev = key
elif prev:
d[prev][1] = cnum
print d
# if tuples are really needed (can't imagine why)
for k in d:
d[k] = tuple(d[k])
print d
which outputs this:
Test: A/B/C/D
outer 0 A => (0, 4)
inner 0 1 A => (0, 0)
outer 1 B => (1, 4)
inner 1 2 B => (1, 1)
outer 2 C => (2, 4)
inner 2 3 C => (2, 2)
outer 3 D => (3, 4)
{'A': (0, 0), 'C': (2, 2), 'B': (1, 1), 'D': (3, 4)}
{'A': [0, 0], 'C': [2, 2], 'B': [1, 1], 'D': [3, 3]}
{'A': (0, 0), 'C': (2, 2), 'B': (1, 1), 'D': (3, 3)}
Test: A//C//
outer 0 A => (0, 5)
inner 0 2 A => (0, 1)
outer 2 C => (2, 5)
{'A': (0, 1), 'C': (2, 5)}
{'A': [0, 1], 'C': [2, 4]}
{'A': (0, 1), 'C': (2, 4)}
Test: A//C//E
outer 0 A => (0, 5)
inner 0 2 A => (0, 1)
outer 2 C => (2, 5)
inner 2 4 C => (2, 3)
outer 4 E => (4, 5)
{'A': (0, 1), 'C': (2, 3), 'E': (4, 5)}
{'A': [0, 1], 'C': [2, 3], 'E': [4, 4]}
{'A': (0, 1), 'C': (2, 3), 'E': (4, 4)}
Test: A///D
outer 0 A => (0, 4)
inner 0 3 A => (0, 2)
outer 3 D => (3, 4)
{'A': (0, 2), 'D': (3, 4)}
{'A': [0, 2], 'D': [3, 3]}
{'A': (0, 2), 'D': (3, 3)}
Test: ///D
outer 3 D => (3, 4)
{'D': (3, 4)}
{'D': [3, 3]}
{'D': (3, 3)}
A:
COM requires talking to another process which actually handles the requests. xlrd works in-process on the data structures themselves.
A:
Thought about it as I was going to bed last night, and ended up using this. A far superior version to my original:
def makeDict(ws):
    """makes dict with key as header name,
       value as tuple of column begin and column end (inclusive)"""
    wsHeaders = {}  # key is header name, value is column begin and end inclusive
    last_col = find_last_col(ws)

    for cnum in xrange(9, last_col):
        value = ws.Cells(7, cnum).Value  # fetch once: every .Value is a COM round trip
        if value:
            cstart = cnum
            if ws.Cells(7, cnum + 1).Value:
                wsHeaders[str(value)] = (cstart, cnum)  # cnum is last in range
    return wsHeaders
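The header-to-span logic itself can be tested without Excel or COM at all. This is a hedged pure-Python sketch of the "slightly better algorithm" from the first answer, operating on a plain list of header-row values (a stand-in for the cell values fetched from the worksheet):

```python
def header_ranges(row):
    """Map each non-empty header to its inclusive (start, end) column span."""
    ranges = {}
    current = None
    for col, name in enumerate(row):
        if name:                      # a new header starts its own span
            ranges[name] = [col, col]
            current = name
        elif current:                 # blank cells extend the previous header's span
            ranges[current][1] = col
    return {k: tuple(v) for k, v in ranges.items()}

print(header_ranges(['A', '', 'C', '', '']))   # {'A': (0, 1), 'C': (2, 4)}
```

Fetching the whole header row from Excel in one call and then running a function like this in Python avoids the per-cell COM round trips entirely.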
| Why is win32com so much slower than xlrd? | I have the same code, written using win32com and xlrd. xlrd preforms the algorithm in less than a second, while win32com takes minutes.
Here is the win32com:
def makeDict(ws):
"""makes dict with key as header name,
value as tuple of column begin and column end (inclusive)"""
wsHeaders = {} # key is header name, value is column begin and end inclusive
for cnum in xrange(9, find_last_col(ws)):
if ws.Cells(7, cnum).Value:
wsHeaders[str(ws.Cells(7, cnum).Value)] = (cnum, find_last_col(ws))
for cend in xrange(cnum + 1, find_last_col(ws)): #finds end column
if ws.Cells(7, cend).Value:
wsHeaders[str(ws.Cells(7, cnum).Value)] = (cnum, cend - 1)
break
return wsHeaders
And the xlrd
def makeDict(ws):
"""makes dict with key as header name,
value as tuple of column begin and column end (inclusive)"""
wsHeaders = {} # key is header name, value is column begin and end inclusive
for cnum in xrange(8, ws.ncols):
if ws.cell_value(6, cnum):
wsHeaders[str(ws.cell_value(6, cnum))] = (cnum, ws.ncols)
for cend in xrange(cnum + 1, ws.ncols):#finds end column
if ws.cell_value(6, cend):
wsHeaders[str(ws.cell_value(6, cnum))] = (cnum, cend - 1)
break
return wsHeaders
| [
"(0) You asked \"Why is win32com so much slower than xlrd?\" ... this question is a bit like \"Have you stopped beating your wife?\" --- it is based on a presupposition that may not be true; win32com was written in C by a brilliant programmer, but xlrd was written in pure Python by an average programmer. The real difference is that win32com has to call COM which involves inter-process communication and was written by you-know-who, whereas xlrd is reading the Excel file directly. Moreover, there's a fourth party in the scenario: YOU. Please read on.\n(1) You don't show us the source of the find_last_col() function that you use repetitively in the COM code. In the xlrd code, you are happy to use the same value (ws.ncols) all the time. So in the COM code, you should call find_last_col(ws) ONCE and thereafter used the returned result. Update See answer to your separate question on how to get the equivalent of xlrd's Sheet.ncols from COM.\n(2) Accessing each cell value TWICE is slowing down both codes. Instead of \nif ws.cell_value(6, cnum):\n wsHeaders[str(ws.cell_value(6, cnum))] = (cnum, ws.ncols)\n\ntry\nvalue = ws.cell_value(6, cnum)\nif value:\n wsHeaders[str(value)] = (cnum, ws.ncols)\n\nNote: there are 2 cases of this in each code snippet.\n(3) It is not at all apparent what the purpose of your nested loops are, but there does seem to be some redundant computation, involving redundant fetches from COM. If you care to tell us what you are trying to achieve, with examples, we could be able to help you make it run much faster. At the very least, extracting the values from COM once then processing them in nested loops in Python should be faster. 
How many columns are there?\nUpdate 2 Meanwhile the little elves took to your code with the proctoscope, and came up with the following script:\ntests= [\n \"A/B/C/D\",\n \"A//C//\",\n \"A//C//E\",\n \"A///D\",\n \"///D\",\n ]\nfor test in tests:\n print \"\\nTest:\", test\n row = test.split(\"/\")\n ncols = len(row)\n # modelling the OP's code\n # (using xlrd-style 0-relative column indexes)\n d = {}\n for cnum in xrange(ncols):\n if row[cnum]:\n k = row[cnum]\n v = (cnum, ncols) #### BUG; should be ncols - 1 (\"inclusive\")\n print \"outer\", cnum, k, '=>', v\n d[k] = v\n for cend in xrange(cnum + 1, ncols):\n if row[cend]:\n k = row[cnum]\n v = (cnum, cend - 1)\n print \"inner\", cnum, cend, k, '=>', v\n d[k] = v\n break\n print d\n # modelling a slightly better algorithm\n d = {}\n prev = None\n for cnum in xrange(ncols):\n key = row[cnum]\n if key:\n d[key] = [cnum, cnum]\n prev = key\n elif prev:\n d[prev][1] = cnum\n print d\n # if tuples are really needed (can't imagine why)\n for k in d:\n d[k] = tuple(d[k])\n print d\n\nwhich outputs this:\nTest: A/B/C/D\nouter 0 A => (0, 4)\ninner 0 1 A => (0, 0)\nouter 1 B => (1, 4)\ninner 1 2 B => (1, 1)\nouter 2 C => (2, 4)\ninner 2 3 C => (2, 2)\nouter 3 D => (3, 4)\n{'A': (0, 0), 'C': (2, 2), 'B': (1, 1), 'D': (3, 4)}\n{'A': [0, 0], 'C': [2, 2], 'B': [1, 1], 'D': [3, 3]}\n{'A': (0, 0), 'C': (2, 2), 'B': (1, 1), 'D': (3, 3)}\n\nTest: A//C//\nouter 0 A => (0, 5)\ninner 0 2 A => (0, 1)\nouter 2 C => (2, 5)\n{'A': (0, 1), 'C': (2, 5)}\n{'A': [0, 1], 'C': [2, 4]}\n{'A': (0, 1), 'C': (2, 4)}\n\nTest: A//C//E\nouter 0 A => (0, 5)\ninner 0 2 A => (0, 1)\nouter 2 C => (2, 5)\ninner 2 4 C => (2, 3)\nouter 4 E => (4, 5)\n{'A': (0, 1), 'C': (2, 3), 'E': (4, 5)}\n{'A': [0, 1], 'C': [2, 3], 'E': [4, 4]}\n{'A': (0, 1), 'C': (2, 3), 'E': (4, 4)}\n\nTest: A///D\nouter 0 A => (0, 4)\ninner 0 3 A => (0, 2)\nouter 3 D => (3, 4)\n{'A': (0, 2), 'D': (3, 4)}\n{'A': [0, 2], 'D': [3, 3]}\n{'A': (0, 2), 'D': (3, 3)}\n\nTest: ///D\nouter 3 D 
=> (3, 4)\n{'D': (3, 4)}\n{'D': [3, 3]}\n{'D': (3, 3)}\n\n",
"COM requires talking to another process which actually handles the requests. xlrd works in-process on the data structures themselves.\n",
"Thought about it as I was going to bed last night, and ended up using this. A far superior version to my original:\ndef makeDict(ws):\n\"\"\"makes dict with key as header name, \n value as tuple of column begin and column end (inclusive)\"\"\"\nwsHeaders = {} # key is header name, value is column begin and end inclusive\nlast_col = find_last_col(ws)\n\nfor cnum in xrange(9, last_col):\n if ws.Cells(7, cnum).Value:\n value = ws.Cells(7, cnum).Value\n cstart = cnum\n if ws.Cells(7, cnum + 1).Value:\n wsHeaders[str(value)] = (cstart, cnum) #cnum is last in range\nreturn wsHeaders\n\n"
] | [
12,
2,
0
] | [] | [] | [
"python",
"win32com",
"xlrd"
] | stackoverflow_0002969225_python_win32com_xlrd.txt |
Q:
include udf in python?
I have a small user-defined function in Python, say fib(n). How do I use it in other programs or modules?
def fib(n):
Should I use import, or is there some other feature for this?
Also, I'm learning Python in the Eclipse IDE, and it won't accept
print "any string"
but forces me to use
print("string")
The online Python manual says the language is cross-platform with the same syntax everywhere, so why the difference?
A:
You use import to include the function in other programs. Just say import mymodule where the code is located in file mymodule.py. Then say mymodule.fib to use the function.
To answer your second question: The syntax print "any string" is acceptable in Python 2, but is no longer allowed in Python 3.
| include udf in python? | I have a small user-defined function in Python, say fib(n). How do I use it in other programs or modules?
def fib(n):
Should I use import, or is there some other feature for this?
Also, I'm learning Python in the Eclipse IDE, and it won't accept
print "any string"
but forces me to use
print("string")
The online Python manual says the language is cross-platform with the same syntax everywhere, so why the difference?
| [
"You use import to include the function in other programs. Just say import mymodule where the code is located in file mymodule.py. Then say mymodule.fib to use the function.\nTo answer your second question: The syntax print \"any string\" is acceptable in Python 2, but is no longer allowed in Python 3.\n"
] | [
2
] | [] | [] | [
"cross_platform",
"import",
"python"
] | stackoverflow_0002975473_cross_platform_import_python.txt |
Q:
Is Django double encoding a Unicode (utf-8?) string?
I'm having trouble storing and outputting an ndash character as UTF-8 in Django.
I'm getting data from an API. In raw form, as retrieved and viewed in a text editor, given unit of data may be similar to:
"I love this detergent \u2013 it is so inspiring."
(\u2013 is & ndash; as an html entity).
If I get this straight from an API and display it in Django, no problem. It displays in my browser as a long dash. I noticed I have to do decode('utf-8') to avoid the "'ascii' codec can't encode character" error if I try to do some operations with that text in my view, though. The text is going to the template as "I love this detergent\u2013 it is so inspiring.", according to the Django Debug Toolbar.
When stored to MySQL and read for output through the same view and template, however, it ends up looking like
"I love this detergent – it is so inspiring"
My MySQL table is set to DEFAULT CHARSET=utf8.
Now, when I read the data from the database through the MySQL monitor in a terminal set to UTF-8, it shows up as
"I love this detergent – it is so inspiring"
(correct - shows an ndash)
When I use mysqldb in a python shell, this line is
"I love this detergent \xe2\x80\x93 it is so inspiring"
(this is the correct UTF-8 for an ndash)
However, if I run python manage.py shell, and then
In [1]: from myproject.myapp.models import ThatTable
In [2]: msg=ThatTable.objects.all().filter(thefield__contains='detergent')
In [3]: msg
Out[3]: [{'thefield': 'I love this detergent \xc3\xa2\xe2\x82\xac\xe2\x80\x9c it is so inspiring'}]
It appears to me that Django has taken \xe2\x80\x93 to mean three separate characters, and encoded it as UTF-8 into \xc3\xa2\xe2\x82\xac\xe2\x80\x9c. This displays as – because \xe2 appears to be â, \x80 appears to be €, etc. I've checked and this is how it's being sent to the template, as well.
If you decode the long sequence in Python, though, with decode('utf-8'), the result is \xe2\u20ac\u201c which also renders in the browser as –. Trying to decode it again yields a UnicodeDecodeError.
I've followed the Django suggestions for Unicode, as far as I know (configured MySQL).
Any suggestions on what I may have misconfigured?
addendum It seems this same issue has cropped up in other areas or systems as well., as while searching for \xc3\xa2\xe2\x82\xac\xe2\x80\x9c, I found at http://pastie.org/908443.txt a script to 'repair bad UTF8 entities.', also found in a wordpress RSS import plug in. It simply replaces this sequence with –. I'd like to solve this the right way, though!
Oh, and I'm using Django 1.2 and Python 2.6.5.
I can connect to the same database with PHP/PDO and print out this data without doing anything special, and it looks fine.
A:
This does seem like a case of double-encoding; I don't have much experience with Python, but try adjusting the MySQL connection settings as per the advice at http://tahpot.blogspot.com/2005/06/mysql-and-python-and-unicode.html
What I'm guessing is happening is that the connection is latin1, so MySQL tries to encode the string again before storage to the UTF-8 field. The code there, specifically this bit:
EDIT: With Python when establishing a
database connection add the following
flag: init_command='SET NAMES utf8'.
In addition set the following in
MySQL's my.cnf: default-character-set
= utf8
is probably what you want.
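The exact byte sequence from the question can be reproduced by simulating the suspected double-encoding: UTF-8 bytes mis-read as cp1252 and encoded to UTF-8 again. (This sketch uses cp1252 rather than strict latin1, since strict latin1 maps \x80 to a control character rather than €; MySQL's "latin1" is in fact cp1252.)

```python
s = u'\u2013'                                   # EN DASH, the character in question
utf8 = s.encode('utf-8')                        # b'\xe2\x80\x93'
mojibake = utf8.decode('cp1252').encode('utf-8')
print(mojibake)                                 # b'\xc3\xa2\xe2\x82\xac\xe2\x80\x9c'

# the damage is reversible as long as no byte was lost along the way:
fixed = mojibake.decode('utf-8').encode('cp1252').decode('utf-8')
print(fixed == s)                               # True
```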
A:
I added set names utf8 to my php data insertion sequence, and now in a Python shell the feared ndash shows up as \x96. This renders correctly when read and output through Django.
One unusual situation about this is that I'm inserting data through PHP. Django issues set names utf8 automatically, so likely if I was inserting and reading the data through Django, this issue would not have appeared. PHP was using the default of latin1, I suppose.
As an interesting note, while before I could read the data from PHP and it showed up normally in the browser, now the ndash is � unless I call set names before reading the data.
So, it's working now and I hope I never have to understand whatever was going on before!
| Is Django double encoding a Unicode (utf-8?) string? | I'm having trouble storing and outputting an ndash character as UTF-8 in Django.
I'm getting data from an API. In raw form, as retrieved and viewed in a text editor, given unit of data may be similar to:
"I love this detergent \u2013 it is so inspiring."
(\u2013 is & ndash; as an html entity).
If I get this straight from an API and display it in Django, no problem. It displays in my browser as a long dash. I noticed I have to do decode('utf-8') to avoid the "'ascii' codec can't encode character" error if I try to do some operations with that text in my view, though. The text is going to the template as "I love this detergent\u2013 it is so inspiring.", according to the Django Debug Toolbar.
When stored to MySQL and read for output through the same view and template, however, it ends up looking like
"I love this detergent – it is so inspiring"
My MySQL table is set to DEFAULT CHARSET=utf8.
Now, when I read the data from the database through the MysQl monitor in a terminal set to Utf-8, it shows up as
"I love this detergent – it is so inspiring"
(correct - shows an ndash)
When I use mysqldb in a python shell, this line is
"I love this detergent \xe2\x80\x93 it is so inspiring"
(this is the correct UTF-8 for an ndash)
However, if I run python manage.py shell, and then
In [1]: from myproject.myapp.models import ThatTable
In [2]: msg=ThatTable.objects.all().filter(thefield__contains='detergent')
In [3]: msg
Out[4]: [{'thefield': 'I love this detergent \xc3\xa2\xe2\x82\xac\xe2\x80\x9c it is so inspiring'}]
It appears to me that Django has taken \xe2\x80\x93 to mean three separate characters, and encoded it as UTF-8 into \xc3\xa2\xe2\x82\xac\xe2\x80\x9c. This displays as – because \xe2 appears to be â, \x80 appears to be €, etc. I've checked and this is how it's being sent to the template, as well.
If you decode the long sequence in Python, though, with decode('utf-8'), the result is \xe2\u20ac\u201c which also renders in the browser as –. Trying to decode it again yields a UnicodeDecodeError.
I've followed the Django suggestions for Unicode, as far as I know (configured MySQL).
Any suggestions on what I may have misconfigured?
addendum It seems this same issue has cropped up in other areas or systems as well; while searching for \xc3\xa2\xe2\x82\xac\xe2\x80\x9c, I found at http://pastie.org/908443.txt a script to 'repair bad UTF8 entities', also found in a WordPress RSS import plugin. It simply replaces this sequence with –. I'd like to solve this the right way, though!
Oh, and I'm using Django 1.2 and Python 2.6.5.
I can connect to the same database with PHP/PDO and print out this data without doing anything special, and it looks fine.
| [
"This does seem like a case of double-encoding; I don't have much experience with Python, but try adjusting the MySQL connection settings as per the advice at http://tahpot.blogspot.com/2005/06/mysql-and-python-and-unicode.html\nWhat I'm guessing is happening is that the connection is latin1, so MySQL tries to encode the string again before storage to the UTF-8 field. The code there, specifically this bit:\n\nEDIT: With Python when establishing a\n database connection add the following\n flag: init_command='SET NAMES utf8'.\nIn addition set the following in\n MySQL's my.cnf: default-character-set\n = utf8\n\nis probably what you want.\n",
"I added set names utf8 to my php data insertion sequence, and now in a Python shell the feared ndash shows up as \\x96. This renders correctly when read and output through Django. \nOne unusual situation about this is that I'm inserting data through PHP. Django issues set names utf8 automatically, so likely if I was inserting and reading the data through Django, this issue would not have appeared. PHP was using the default of latin1, I suppose\nAs an interesting note, while before I could read the data from PHP and it showed up normally in the browser, now the ndash is � unless I call set namesbefore reading the data.\nSo, it's working now and I hope I never have to understand whatever was going on before!\n"
] | [
1,
0
] | [] | [] | [
"django",
"mysql",
"python",
"unicode",
"utf_8"
] | stackoverflow_0002971634_django_mysql_python_unicode_utf_8.txt |
Q:
Is twisted any good?
I keep hearing all this hype about Twisted for Python, but I just find it plain confusing. What do you think is simpler to use? Simple sockets or implementing Twisted?
A:
I stand by what I wrote in Python in a Nutshell (2nd edition p. 540):
Twisted includes powerful, high-level
components such as web servers, user
authentication systems, mail servers
and clients, instant messaging, SSH
clients and servers, a DNS server and
client, and so on, as well as the
lower-level infrastructure on which
all these high-level components are
built. Each component is highly
scalable and easily customizable, and
all are integrated to interoperate
smoothly. It's a tribute to the power
of Python and to the ingenuity of
Twisted's developers that so much can
be accomplished within two megabytes'
worth of download.
Asking whether this incredibly rich and powerful framework is "simpler to use" than "simple sockets" is a bit like asking if a car is "simpler to use" than a screw: what a weird question!
Cars are built with screws (among other things), and can't be quite as "simple to use" -- just because a screw does so little, a car does so much.
But if you want to get from A to B (and possibly carry passengers, luggage, pets, ...) a screw won't help much (unless you're basically going to build a car from scratch;-).
Of course cars aren't the only way to get from A to B, just as twisted is not the only way to build network-centric systems in Python. A horse and buggy (like asyncore) is quaint and fun, though less practical; a high-speed train (like tornado) may be easier to use and at least as fast, though much less flexible; and for various specialized purposes you may prefer all kinds of other conveyances, from unicycles to cruise ships (like, in Python and for networking, all kinds of other packages, from paramiko to dnspython) -- all of them will include screws as part of their components (like, all will include sockets as part of the way they're built), none will be as easy to use as "simple sockets", each (in its own range of applicability) will do a lot more for you than "simple sockets" on their own possibly could.
Twisted is an excellent choice in a vast number of cases, often the best when you need to integrate multiple aspects of functionality and/or implement some protocol for which there is no fully packaged solution. "Simple sockets" are not -- they're just a low-level component out of which higher-functionality, higher-level ones are built, and there rarely is a good reason (except learning, of course) to "roll your own" higher level components built "from scratch" on top of sockets (rather than picking powerful, well-built existing ones) -- just like you'd rarely be justified in building your own computer out of transistors, resistors, capacitors, etc, rather than picking appropriate integrated circuits;-).
A:
Twisted is a concurrency framework. It allows you to juggle multiple tasks in one application without using threads/processes. It does this using an event driven asynchronous system and is especially good with networking applications. Asynchronous code generally tends to be a little 'different' from normal stuff since the flow is not explicit and things happen based on external events. This can be confusing but it works. Twisted is arguably the most mature Python async concurrency library so if that's what you're planning to do, twisted is a good thing to bet on.
"Simple sockets" as you put them are communication primitives and not really comparable to twisted. What are you trying to do?
A:
I'd say it's good. Just look at this page of projects using twisted.
A:
Twisted was first released in 2002 and has bloated substantially since then; (this is a touchy subject and many people would argue that this is good and necessary in a framework) - However for someone approaching the project now it can be a bit daunting. There are options however if you're pushing towards asynchronous frameworks. I found this blog to be interesting: http://nichol.as/asynchronous-servers-in-python. Benchmarks aside, the code samples alone are quite interesting to compare.
| Is twisted any good? | I keep hearing all this hype about Twisted for Python, but I just find it plain confusing. What do you think is simpler to use? Simple sockets or implementing Twisted?
| [
"I stand by what I wrote in Python in a Nutshell (2nd edition p. 540):\n\nTwisted includes powerful, high-level\n components such as web servers, user\n authentication systems, mail servers\n and clients, instant messaging, SSH\n clients and servers, a DNS server and\n client, and so on, as well as the\n lower-level infrastructure on which\n all these high-level components are\n built. Each component is highly\n scalable and easily customizable, and\n all are integrated to interoperate\n smoothly. It's a tribute to the power\n of Python and to the ingenuity of\n Twisted's developers that so much can\n be accomplished within two megabytes'\n worth of download.\n\nAsking whether this incredibly rich and powerful framework is \"simpler to use\" than \"simple sockets\" is a bit like asking if a car is \"simpler to use\" than a screw: what a weird question!\nCars are built with screws (among other things), and can't be quite as \"simple to use\" -- just because a screw does so little, a car does so much.\nBut if you want to get from A to B (and possibly carry passengers, luggage, pets, ...) a screw won't help much (unless you're basically going to build a car from scratch;-).\nOf course cars aren't the only way to get from A to B, just as twisted is not the only way to build network-centric systems in Python. 
A horse and buggy (like asyncore) is quaint and fun, though less practical; a high-speed train (like tornado) may be easier to use and at least as fast, though much less flexible; and for various specialized purposes you may prefer all kinds of other conveyances, from unicycles to cruise ships (like, in Python and for networking, all kinds of other packages, from paramiko to dnspython) -- all of them will include screws as part of their components (like, all will include sockets as part of the way they're built), none will be as easy to use as \"simple sockets\", each (in its own range of applicability) will do a lot more for you than \"simple sockets\" on their own possibly could.\nTwisted is an excellent choice in a vast number of cases, often the best when you need to integrate multiple aspects of functionality and/or implement some protocol for which there is no fully packaged solution. \"Simple sockets\" are not -- they're just a low-level component out of which higher-functionality, higher-level ones are built, and there rarely is a good reason (except learning, of course) to \"roll your own\" higher level components built \"from scratch\" on top of sockets (rather than picking powerful, well-built existing ones) -- just like you'd rarely be justified in building your own computer out of transistors, resistors, capacitors, etc, rather than picking appropriate integrated circuits;-).\n",
"Twisted is a concurrency framework. It allows you to juggle multiple tasks in one application without using threads/processes. It does this using an event driven asynchronous system and is especially good with networking applications. Asynchronous code generally tends to be a little 'different' from normal stuff since the flow is not explicit and things happen based on external events. This can be confusing but it works. Twisted is arguably the most mature Python async concurrency library so if that's what you're planning to do, twisted is a good thing to bet on.\n\"Simple sockets\" as you put them are communication primitives and not really comparable to twisted. What are you trying to do?\n",
"I'd say it's good. Just look at this page of projects using twisted.\n",
"Twisted was first released in 2002 and has bloated substantially since then; (this is a touchy subject and many people would argue that this is good and necessary in a framework) - However for someone approaching the project now it can be a bit daunting. There are options however if you're pushing towards asynchronous frameworks. I found this blog to be interesting: http://nichol.as/asynchronous-servers-in-python. Benchmarks aside, the code samples alone are quite interesting to compare. \n"
] | [
30,
4,
3,
0
] | [] | [] | [
"python",
"sockets",
"twisted"
] | stackoverflow_0002974781_python_sockets_twisted.txt |
Q:
python facebook api in linux?
How do I install the packages and libraries for the Facebook API? When I use
import facebook, there's an error.
I see only svn checkouts, some files are empty. How do I download and get them working?
A:
If you don't have git, probably the easiest way to download it is to go to http://github.com/sciyoshi/pyfacebook and click on the "Download source" button. Extract the downloaded ZIP to a subdirectory, enter that subdirectory, launch your Python interpreter and type import facebook. It works for me.
A:
Start with the tutorial, which leads you through the download (with svn or git on Linux or Windows), installation, desktop apps (including a simple session on the interactive Python interpreter), and so on.
| python facebook api in linux? | How do I install the packages and libraries for the Facebook API? When I use
import facebook, there's an error.
I see only svn checkouts, some files are empty. How do I download and get them working?
| [
"If you don't have git, probably the easiest way to download it is to go to http://github.com/sciyoshi/pyfacebook and click on the \"Download source\" button. Extract the downloaded ZIP to a subdirectory, enter that subdirectory, launch your Python interpreter and type import facebook. It works for me.\n",
"Start with the tutorial, which leads you through the download (with svn or git on Linux or Windows), installation, desktop apps (including a simple session on the interactive Python interpreter), and so on.\n"
] | [
2,
1
] | [] | [] | [
"facebook",
"installation",
"python"
] | stackoverflow_0002975582_facebook_installation_python.txt |
Q:
Working with multiple excel workbooks in python
Using win32com, I have two workbooks open.
How do you know which one is active?
How do you change which one is active?
How can you close one and not the other? (not Application.Quit())
A:
What is your larger goal here? Automate already open Excel windows or simply write XLS files? If it's the latter, you should consider using xlwt.
How do you know which one is active?
xl = win32com.client.Dispatch("Excel.Application")
wbOne = xl.Workbooks.Add()
wbTwo = xl.Workbooks.Add()
xl.ActiveWorkbook == wbOne
False
xl.ActiveWorkbook == wbTwo
True
How do you change which one is active?
wbOne.Activate()
xl.ActiveWorkbook == wbOne
True
How can you close one and not the other? (not Application.Quit())
wbOne.Close()
wbTwo.Close()
| Working with multiple excel workbooks in python | Using win32com, I have two workbooks open.
How do you know which one is active?
How do you change which one is active?
How can you close one and not the other? (not Application.Quit())
| [
"What is your larger goal here? Automate already open excel windows or simply write XLS files? If it's the latter you should use consider using xlwt.\n\nHow do you know which one is active?\n\nxl = win32com.client.Dispatch(\"Excel.Application\")\nwbOne = xl.Workbooks.Add()\nwbTwo = xl.Workbooks.Add()\nxl.ActiveWorkbook == wbOne\n False\nxl.ActiveWorkbook == wbTwo \n True\n\n\nHow do you change which one is active?\n\nwbOne.Activate()\nxl.ActiveWorkbook == wbOne\n True\n\n\nHow can you close one and not the other? (not Application.Quit())\n\nwbOne.Close()\nwbTwo.Close()\n\n"
] | [
6
] | [] | [] | [
"com",
"excel",
"python",
"win32com"
] | stackoverflow_0002975777_com_excel_python_win32com.txt |
Q:
Is Python programming for Logitech G15 possible?
I have a Logitech G15 keyboard. It has a screen. Can I program this? I googled it, but the one site I found didn't work. It seems like it is possible, but I cannot grasp how.
Thanks!
This site is truly great.
A:
I believe the G15 comes with an SDK. You could use that along with the ctypes module to call into the supplied DLLs. Otherwise, I imagine you'd have to use something like Swig or Boost.Python to make a Python module for the G15 from the SDK.
A:
It seems to be programmable even with bash shell: http://www.g15-applets.de/tux---benutzername---zeit---datum---cpu---ram-t4336.html
so it should be easy to program this keyboard in python as well.
Of course you have to install "g15composer" first, which should be available under ubuntu:
sudo apt-get install g15composer
| Is Python programming for Logitech G15 possible? | I have a Logitech G15 keyboard. It has a screen. Can I program this? I googled it, but the one site I found didn't work. It seems like it is possible, but I cannot grasp how.
Thanks!
This site is truly great.
| [
"I believe the G15 comes with an SDK. You could use that along with the ctypes module to call into the supplied DLLs. Otherwise, I imagine you'd have to use something like Swig or Boost.Python to make a Python module for the G15 from the SDK.\n",
"It seems to be programmable even with bash shell: http://www.g15-applets.de/tux---benutzername---zeit---datum---cpu---ram-t4336.html\nso it should be easy to program this keyboard in python as well.\nOf course you have to install \"g15composer\" first, which should be available under ubuntu:\nsudo apt-get install g15composer \n\n"
] | [
6,
3
] | [] | [] | [
"g15",
"logitech",
"python"
] | stackoverflow_0002976446_g15_logitech_python.txt |
Q:
Why does output of fltk-config truncate arguments to gcc?
I'm trying to build an application I've downloaded which uses the SCONS "make replacement" and the Fast Light Tool Kit Gui.
The SConstruct code to detect the presence of fltk is:
guienv = Environment(CPPFLAGS = '')
guiconf = Configure(guienv)
if not guiconf.CheckLibWithHeader('lo', 'lo/lo.h','c'):
print 'Did not find liblo for OSC, exiting!'
Exit(1)
if not guiconf.CheckLibWithHeader('fltk', 'FL/Fl.H','c++'):
print 'Did not find FLTK for the gui, exiting!'
Exit(1)
Unfortunately, on my (Gentoo Linux) system, and many others (Linux distributions) this can be quite troublesome if the package manager allows the simultaneous install of FLTK-1 and FLTK-2.
I have attempted to modify the SConstruct file to use fltk-config --cflags and fltk-config --ldflags (or fltk-config --libs might be better than ldflags) by adding them like so:
guienv.Append(CPPPATH = os.popen('fltk-config --cflags').read())
guienv.Append(LIBPATH = os.popen('fltk-config --ldflags').read())
But this causes the test for liblo to fail! Looking in config.log shows how it failed:
scons: Configure: Checking for C library lo...
gcc -o .sconf_temp/conftest_4.o -c "-I/usr/include/fltk-1.1 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_THREAD_SAFE -D_REENTRANT"
gcc: no input files
scons: Configure: no
How should this really be done?
And to complete my answer, how do I remove the quotes from the result of os.popen( 'command').read()?
EDIT The real question here is why does appending the output of fltk-config cause gcc to not receive the filename argument it is supposed to compile?
A:
This is quite a complex problem with no quick answer
I have referred to the instructions for using pkg-config with scons at http://www.scons.org/wiki/UsingPkgConfig. The following question is also helpful
Test if executable exists in Python?.
But we need to go a little bit further with these.
So after much investigation I discovered os.popen('command').read() does not trim the trailing newline '\n' which is what caused the truncation of the arguments sent to GCC.
We can use str.rstrip() to remove the trailing '\n'.
Secondly, as config.log shows, the arguments which fltk-config provides, SCONS wraps up in double quotes before giving them to GCC. I'm not exactly sure of the specifics but this is because the output of fltk-config (via os.popen) contains space characters.
We can use something like strarray = str.split(" ", str.count(" ")) to split the output into substrings where the space characters occur.
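As a hypothetical illustration (the flag string below is invented, but shaped like typical fltk-config output), the stripping and splitting behave like this:

```python
# What os.popen('fltk-config --cflags').read() might hand back:
raw = "-I/usr/include/fltk-1.1 -D_LARGEFILE_SOURCE -D_THREAD_SAFE\n"

flags = raw.rstrip()                        # drop the trailing '\n'
parts = flags.split(" ", flags.count(" "))  # one flag per list element
# parts == ['-I/usr/include/fltk-1.1', '-D_LARGEFILE_SOURCE', '-D_THREAD_SAFE']
```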
It is also worth noting that we were attempting to append the fltk-config --ldflags to the wrong variable within the GUI environment, they should have been added to LINKFLAGS.
Unfortunately this is only half way to the solution.
What we need to do is:
Find the full path of an executable on the system
Pass arguments to an executable and capture its output
Convert the output into a suitable format to append to the CPPFLAGS and LINKFLAGS.
So I have defined some functions to help...
1) Find full path of executable on system:
( see: Test if executable exists in Python? )
import os
import shlex
import subprocess

def ExecutablePath(program):
    def is_exe(fpath):
        return os.path.exists(fpath) and os.access(fpath, os.X_OK)
    fpath, fname = os.path.split(program)
    if fpath:
        if is_exe(program):
            return program
    else:
        for path in os.environ["PATH"].split(os.pathsep):
            exe_file = os.path.join(path, program)
            if is_exe(exe_file):
                return exe_file
    return None
1b) We also need to test for executable existence:
def CheckForExecutable(context, program):
    context.Message('Checking for program %s...' % program)
    if ExecutablePath(program):
        context.Result('yes')
        return program
    context.Result('no')
2) Pass arguments to executable and place the output into an array:
def ExecutableOutputAsArray(program, args):
    pth = ExecutablePath(program)
    pargs = shlex.split('%s %s' % (pth, args))
    progout = subprocess.Popen(pargs, stdout=subprocess.PIPE).communicate()[0]
    flags = progout.rstrip()
    return flags.split(' ', flags.count(" "))
Some usage:
guienv.Append(CPPFLAGS = ExecutableOutputAsArray('fltk-config', '--cflags') )
guienv.Append(LINKFLAGS = ExecutableOutputAsArray('fltk-config', '--ldflags') )
guienv.Append(LINKFLAGS = ExecutableOutputAsArray('pkg-config', '--libs liblo') )
A:
There are 2 similar ways to do this:
1)
conf = Configure(env)
status, _ = conf.TryAction("fltk-config --cflags")
if status:
    env.ParseConfig("fltk-config --cflags")
else:
    print "Failed fltk"
2)
try:
    env.ParseConfig("fltk-config --cflags")
except OSError:
    print 'failed to run fltk-config you sure fltk is installed !?'
    sys.exit(1)
| Why does output of fltk-config truncate arguments to gcc? | I'm trying to build an application I've downloaded which uses the SCONS "make replacement" and the Fast Light Tool Kit Gui.
The SConstruct code to detect the presence of fltk is:
guienv = Environment(CPPFLAGS = '')
guiconf = Configure(guienv)
if not guiconf.CheckLibWithHeader('lo', 'lo/lo.h','c'):
print 'Did not find liblo for OSC, exiting!'
Exit(1)
if not guiconf.CheckLibWithHeader('fltk', 'FL/Fl.H','c++'):
print 'Did not find FLTK for the gui, exiting!'
Exit(1)
Unfortunately, on my (Gentoo Linux) system, and many others (Linux distributions) this can be quite troublesome if the package manager allows the simultaneous install of FLTK-1 and FLTK-2.
I have attempted to modify the SConstruct file to use fltk-config --cflags and fltk-config --ldflags (or fltk-config --libs might be better than ldflags) by adding them like so:
guienv.Append(CPPPATH = os.popen('fltk-config --cflags').read())
guienv.Append(LIBPATH = os.popen('fltk-config --ldflags').read())
But this causes the test for liblo to fail! Looking in config.log shows how it failed:
scons: Configure: Checking for C library lo...
gcc -o .sconf_temp/conftest_4.o -c "-I/usr/include/fltk-1.1 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_THREAD_SAFE -D_REENTRANT"
gcc: no input files
scons: Configure: no
How should this really be done?
And to complete my answer, how do I remove the quotes from the result of os.popen( 'command').read()?
EDIT The real question here is why does appending the output of fltk-config cause gcc to not receive the filename argument it is supposed to compile?
| [
"This is quite a complex problem with no quick answer\nI have referred to the instructions for using pkg-config with scons at http://www.scons.org/wiki/UsingPkgConfig. The following question is also helpful\nTest if executable exists in Python?.\nBut we need to go a little bit further with these.\nSo after much investigation I discovered os.popen('command').read() does not trim the trailing newline '\\n' which is what caused the truncation of the arguments sent to GCC.\nWe can use str.rstrip() to remove the trailing '\\n'.\nSecondly, as config.log shows, the arguments which fltk-config provides, SCONS wraps up in double quotes before giving them to GCC. I'm not exactly sure of the specifics but this is because the output of fltk-config (via os.popen) contains space characters.\nWe can use something like strarray = str.split(\" \", str.count(\" \")) to split the output into substrings where the space characters occur. \nIt is also worth noting that we were attempting to append the fltk-config --ldflags to the wrong variable within the GUI environment, they should have been added to LINKFLAGS.\nUnfortunately this is only half way to the solution.\nWhat we need to do is:\n\nFind the full path of an executable on the system\nPass arguments to an executable and capture its output\nConvert the output into a suitable format to append to the CPPFLAGS and LINKFLAGS. \n\nSo I have defined some functions to help...\n1) Find full path of executable on system:\n( see: Test if executable exists in Python? 
)\ndef ExecutablePath(program):\n def is_exe(fpath):\n return os.path.exists(fpath) and os.access(fpath, os.X_OK)\n fpath, fname = os.path.split(program)\n if fpath:\n if is_exe(program):\n return program\n else:\n for path in os.environ[\"PATH\"].split(os.pathsep):\n exe_file = os.path.join(path, program)\n if is_exe(exe_file):\n return exe_file\n return None\n\n1b) We also need to test for executable existence: \ndef CheckForExecutable(context, program):\n context.Message( 'Checking for program %s...' %program )\n if ExecutablePath(program):\n context.Result('yes')\n return program\n context.Result('no')\n\n2) Pass arguments to executable and place the output into an array:\ndef ExecutableOutputAsArray(program, args):\n pth = ExecutablePath(program)\n pargs = shlex.split('%s %s' %(pth, args))\n progout = subprocess.Popen( pargs , stdout=subprocess.PIPE).communicate()[0]\n flags = progout.rstrip()\n return flags.split(' ', flags.count(\" \"))\n\nSome usage:\nguienv.Append(CPPFLAGS = ExecutableOutputAsArray('fltk-config', '--cflags') )\nguienv.Append(LINKFLAGS = ExecutableOutputAsArray('fltk-config', '--ldflags') )\nguienv.Append(LINKFLAGS = ExecutableOutputAsArray('pkg-config', '--libs liblo') )\n\n",
"There are 2 similar ways to do this:\n1) \nconf = Configure(env)\nstatus, _ = conf.TryAction(\"fltk-config --cflags\")\nif status:\n env.ParseConfig(\"fltk-config --cflags\")\nelse:\n print \"Failed fltk\"\n\n2)\n try:\n env.ParseConfig(\"fltk-config --cflags\")\n except (OSError):\n print 'failed to run fltk-config you sure fltk is installed !?'\n sys.exit(1)\n\n"
] | [
1,
1
] | [] | [] | [
"c",
"c++",
"fltk",
"python",
"scons"
] | stackoverflow_0002945877_c_c++_fltk_python_scons.txt |
Q:
Binary search of unaccesible data field in ldap from python
I'm interested in reproducing a particular python script.
I have a friend who was accessing an LDAP database without authentication. There was a particular field of interest, we'll call it nin (an integer) for reference, and this field wasn't accessible without proper authentication. However, my friend managed to access this field through some sort of binary search (rather than just looping through integers) on the data: he would check the first digit, see whether it was greater or less than the starting value, and augment it until a query returned a true value indicating existence, then add digits and continue checking until he found the exact value of the integer nin.
Any ideas on how he went about this? I've access to a similarly set up database.
A:
Your best bet would be to get authorization to access that field. You are circumventing the security of the database otherwise.
A:
Figured it out. I just needed to filter on (&(cn=My name)(nin=guess*)) and kept narrowing the filter until it returned the correct result.
Code follows in case anyone else needs to find a field they aren't supposed to access, but can check results for and know the name of.
def lookup(self, username="", guess=0, verbose=0):
    guin = guess
    result_set = []
    varsearch = "(&(name=" + str(username) + ")(" + "nin" + "=" + str(guin) + "*))"
    result_id = self.l.search("", ldap.SCOPE_SUBTREE, varsearch, ["nin"])
    while True:
        try:
            result_type, result_data = self.l.result(result_id, 0, 5.0)
            if result_data == []:
                break
            else:
                if result_type == ldap.RES_SEARCH_ENTRY:
                    result_set.append(result_data)
        except ldap.TIMEOUT:
            return {"name": username}
    if len(result_set) == 0:
        return self.lookup(username, guin + 1, verbose)
    else:
        if guess < 1000000:
            return self.lookup(username, guess * 10, verbose)
        else:
            if verbose == 1:
                print "Bingo!",
            return str(guess)
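Stripped of the LDAP plumbing, the digit-by-digit probing above can be sketched generically. Here exists_prefix is a stand-in for the wildcard search, and the secret value is invented for the demonstration — each digit costs at most ten probes, so an n-digit value needs about 10·n queries rather than enumerating every possible number:

```python
def recover_number(exists_prefix, max_digits=10):
    """Grow the known prefix one digit at a time, using only a
    'does any record start with this prefix?' oracle."""
    prefix = ""
    while len(prefix) < max_digits:
        for d in "0123456789":
            if exists_prefix(prefix + d):
                prefix += d
                break
        else:
            break  # no digit extends the prefix: the full value is known
    return prefix

secret = "4823"  # hypothetical hidden nin
found = recover_number(lambda p: secret.startswith(p))
# found == "4823"
```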
| Binary search of unaccesible data field in ldap from python | I'm interested in reproducing a particular python script.
I have a friend who was accessing an LDAP database without authentication. There was a particular field of interest, we'll call it nin (an integer) for reference, and this field wasn't accessible without proper authentication. However, my friend managed to access this field through some sort of binary search (rather than just looping through integers) on the data: he would check the first digit, see whether it was greater or less than the starting value, and augment it until a query returned a true value indicating existence, then add digits and continue checking until he found the exact value of the integer nin.
Any ideas on how he went about this? I've access to a similarly set up database.
| [
"Your best bet would be to get authorization to access that field. You are circumventing the security of the database otherwise.\n",
"Figured it out. I just needed to filter on (&(cn=My name)(nin=guess*) and I managed to filter until it returns the correct result.\nCode follows in case anyone else needs to find a field they aren't supposed to access, but can check results for and know the name of.\ndef lookup(self, username=\"\", guess=0,verbose=0):\n guin = guess\n result_set = []\n varsearch = \"(&(name=\" + str(username) + \")(\" + \"nin\" + \"=\" + str(guin) + \"*))\"\n result_id = self.l.search(\"\", ldap.SCOPE_SUBTREE, varsearch, [\"nin\"])\n while True:\n try:\n result_type, result_data = self.l.result(result_id, 0, 5.0)\n if (result_data == []):\n break\n else:\n if result_type == ldap.RES_SEARCH_ENTRY:\n result_set.append(result_data)\n except ldap.TIMEOUT:\n return {\"name\": username}\n if len(result_set) == 0:\n return self.lookup(username, guin + 1,verbose)\n else:\n if guess < 1000000:\n return self.lookup(username, guess * 10,verbose)\n else:\n if verbose==1:\n print \"Bingo!\",\n return str(guess)\n\n"
] | [
0,
0
] | [] | [] | [
"ldap",
"python"
] | stackoverflow_0002968127_ldap_python.txt |
Q:
How to resolve bindings during execution with embedded Python?
I'm embedding Python into a C++ application. I plan to use PyEval_EvalCode to execute Python code, but instead of providing the locals and globals as dictionaries, I'm looking for a way to have my program resolve symbol references dynamically.
For example, let's say my Python code consists of the following expression:
bear + lion * bunny
Instead of placing bear, lion and bunny and their associated objects into the dictionaries that I'm passing to PyEval_EvalCode, I'd like the Python interpreter to call back my program and request these named objects.
Is there a way to accomplish this?
A:
Possibly. I've never tried this but in theory you might be able to implement a small extension class in C++ that overrides the __getattr__ method (probably via the tp_as_mapping or tp_getattro function pointers of PyTypeObject). Pass an instance of this as locals and/or globals to PyEval_EvalCode and your C++ method should be asked to resolve your lions, tigers, & bears for you.
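The idea can be prototyped at the Python level before committing to the C API: pass a dict subclass as the locals mapping and let its __missing__ hook resolve unknown names on demand — in the embedded case, that hook is where you'd call back into the C++ host. The animal values below are invented for the demo:

```python
class Resolver(dict):
    """Resolve names lazily; in the embedded case this method
    would call back into the host application."""
    def __missing__(self, name):
        host_symbols = {"bear": 2, "lion": 3, "bunny": 4}  # stand-in for the C++ side
        return host_symbols[name]

# CPython looks names up in the locals mapping via __getitem__ when it is a
# dict subclass, so __missing__ fires for each unknown name in the expression.
result = eval("bear + lion * bunny", {"__builtins__": {}}, Resolver())
# result == 14  (2 + 3 * 4)
```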
A:
By providing the locals and globals dictionaries, you are providing the environment in which the evaled code is executed. That effectively provides you with an interface to map names to objects defined in the C++ app.
Can you clarify why you do not want to use the dictionaries?
Another thing you could do is process the string in C++ and do string substitution before you eval the code....
| How to resolve bindings during execution with embedded Python? | I'm embedding Python into a C++ application. I plan to use PyEval_EvalCode to execute Python code, but instead of providing the locals and globals as dictionaries, I'm looking for a way to have my program resolve symbol references dynamically.
For example, let's say my Python code consists of the following expression:
bear + lion * bunny
Instead of placing bear, lion and bunny and their associated objects into the dictionaries that I'm passing to PyEval_EvalCode, I'd like the Python interpreter to call back my program and request these named objects.
Is there a way to accomplish this?
| [
"Possibly. I've never tried this but in theory you might be able to implement a small extension class in C++ that overrides the __getattr__ method (probably via the tp_as_mapping or tp_getattro function pointers of PyTypeObject). Pass an instance of this as locals and/or globals to PyEval_EvalCode and your C++ method should be asked to resolve your lions, tigers, & bears for you. \n",
"By providing the locals and globals dictionaries, you are providing the environment in which the evaled code is executed. That effectively provides you with an interface to map names to objects defined in the C++ app. \nCan you clarify why you do not want to use the dictionaries?\nAnother thing you could do is process the string in C++ and do string substitution before you eval the code.... \n"
] | [
1,
1
] | [] | [] | [
"dynamic_binding",
"python",
"python_embedding"
] | stackoverflow_0002976698_dynamic_binding_python_python_embedding.txt |
Q:
What options are there for visualising class relationships in a Python program
I am maintaining a Python program, and am struggling to understand the relationships between the various classes. I think it would be helpful to see a diagram of how the classes interact.
What options are available that might allow me to do this?
A:
Just my 2 cents.
Case tools like Enterprise Architect can generate class diagrams from python code, however for the purpose of understanding I prefer to coarsely model the classes and relationships by hand.
I too use UML when I want to understand new code, to get a coarse overview of the collaborations between classes, and to get a view of inheritance hierarchies.
Most IDEs have means to explore the code, but I find small cohesive UML diagrams easier to digest and memorize.
I also find domain models easier to understand when on a class diagram.
A:
If you are seeking IDEs that have that feature, then:
Komodo
Pydev for eclipse
A:
gaphor has a feature to import python module and generate uml class diagrams, anyway it's not so good.
Anyway, code analysis tools on python don't work very well since no one can "predict" which arguments will be passed (or returned) by functions and so on. Most of them "guess" the type passed.
Hoping that python 3 with the "function annotation" can solve this sort of "problem"
A:
Great question! Depending on how hands-on you are, you can consider using the trace module on a run of your code.
python -m trace -T yourprogram.py
Will give you who-called-who information. You can either parse this, or write some code that uses trace programmatically to extract your call graph.
Once that's done, a bit of dot hacking, and you've got a diagram. Once you've done this, it'd make a cool blog post about what you did and how it worked out.
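A minimal sketch of the programmatic route: `trace.Trace` with `countcallers=1` records (caller, callee) pairs, which are exactly the edges you would feed to `dot`. The `helper`/`main` functions here are just placeholders for your own code:

```python
import trace

def helper():
    return 1

def main():
    return helper() + helper()

# countcallers=1 records (caller, callee) pairs instead of executed lines
tracer = trace.Trace(count=0, trace=0, countcallers=1)
tracer.runfunc(main)

# results().callers maps ((file, module, func), (file, module, func)) -> count
callers = tracer.results().callers
edges = sorted({(caller[2], callee[2]) for caller, callee in callers})
```

Each entry in `edges` is one arrow in the call-graph diagram.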
A:
Check out epydoc. It's generally thought of as a documentation generator, but check out the (automatically generated) graph in this example:
http://epydoc.sourceforge.net/api/epydoc.apidoc.RoutineDoc-class.html
A:
If it's Django you can also do
./manage.py graph_models
if you are using Django Command Extensions
| What options are there for visualising class relationships in a Python program | I am maintaining a Python program, and am struggling to understand the relationships between the various classes. I think it would be helpful to see a diagram of how the classes interact.
What options are available that might allow me to do this?
| [
"Just my 2 cents.\nCase tools like Enterprise Architect can generate class diagrams from python code, however for the purpose of understanding I prefer to coarsely model the classes and relationships by hand.\nI too use UML when I want to understand new code, to get a coarse overview of the collaborations between classes, and to get a view of inheritance hierarchies. \nMost IDEs have means to explore the code, but I find small cohesive UML diagrams easier to digest and memorize.\nI also find domain models easier to understand when on a class diagram.\n",
"If you are seeking IDEs that have that feature, then:\n\nKomodo\nPydev for eclipse\n\n",
"gaphor has a feature to import python module and generate uml class diagrams, anyway it's not so good.\nAnyway, code analysis tools on python don't work very well since no one can \"predict\" which arguments will be passed (or returned) by functions and so on. Most of them \"guess\" the type passed.\nHoping that python 3 with the \"function annotation\" can solve this sort of \"problem\"\n",
"Great question! Depending on how hands-on you are, you can consider using the trace module on a run of your code.\npython -m trace -T yourprogram.py\n\nWill give you who-called-who information. You can either parse this, or write some code that uses trace programatically to extract your call graph.\nOnce that's done, a bit of dot hacking, and you've got a diagram. Once you've done this, it'd make a cool blog post about what you did and how it worked out.\n",
"Check out epydoc. It's generally thought of as a documentation generator, but check out the (automatically generated) graph in this example:\nhttp://epydoc.sourceforge.net/api/epydoc.apidoc.RoutineDoc-class.html\n",
"If it's Django you can also do \n./manage.py graph_models \n\nif you are using Django Command Extensions\n"
] | [
3,
1,
1,
1,
1,
0
] | [] | [] | [
"class_design",
"python",
"uml"
] | stackoverflow_0002972198_class_design_python_uml.txt |
Q:
Python function argument scope (Dictionaries v. Strings)
foo = "foo"
def bar(foo):
foo = "bar"
bar(foo)
print foo
# foo is still "foo"...
foo = {'foo':"foo"}
def bar(foo):
foo['foo'] = "bar"
bar(foo)
print foo['foo']
# foo['foo'] is now "bar"?
I have a function that has been inadvertently over-writing my function parameters when I pass a dictionary. Is there a clean way to declare my parameters as constant or am I stuck making a copy of the dictionary within the function?
Thanks!
A:
In a case like this, you'd have to copy the dictionary if you want to change it and keep the changes local to the function.
The reason is that, when you pass a dictionary into your second bar function, Python only passes a reference to the dictionary. So when you modify it inside the function, you're modifying the same object that exists outside the function. In the first bar function, on the other hand, you assign a different object to the name foo inside the function when you write foo = "bar". When you do that, the name foo inside the function starts to refer to the string "bar" and not to the string "foo". You've changed which object the name refers to. But foo inside the function and foo outside the function are different names, so if you change the object labeled by foo inside the function, you don't affect the name foo outside the function. So they refer to different objects.
A:
Arguments are passed by value, but in this case the value is a reference to an object. This is also called call by sharing. So yes, if you need a new dictionary, you will have to create one in the function.
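A minimal sketch of the copy-inside-the-function approach described above (names are illustrative):

```python
def bar(d):
    d = dict(d)        # shallow copy; rebinding d no longer touches the caller's dict
    d['foo'] = 'bar'
    return d

original = {'foo': 'foo'}
modified = bar(original)
# original is untouched; modified carries the change
```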
A:
I would like to add that
temp_dict = CONSTdict
did not help, as this simply copies the pointer.
Instead, I needed:
temp_dict = dict(CONSTdict)
Cheers,
Shaun
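One caveat worth adding to the answer above: `dict(CONSTdict)` is only a shallow copy, so nested mutable values are still shared between the copies. When the dictionary contains other dictionaries or lists, `copy.deepcopy` is needed for full independence (the dictionary contents here are invented for illustration):

```python
import copy

CONST = {'outer': {'inner': 1}}

alias = CONST                  # same object, just another name
shallow = dict(CONST)          # new top-level dict, but nested dicts are shared
deep = copy.deepcopy(CONST)    # fully independent copy

shallow['outer']['inner'] = 99  # the nested dict is shared, so CONST sees this too
```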
| Python function argument scope (Dictionaries v. Strings) | foo = "foo"
def bar(foo):
foo = "bar"
bar(foo)
print foo
# foo is still "foo"...
foo = {'foo':"foo"}
def bar(foo):
foo['foo'] = "bar"
bar(foo)
print foo['foo']
# foo['foo'] is now "bar"?
I have a function that has been inadvertently over-writing my function parameters when I pass a dictionary. Is there a clean way to declare my parameters as constant or am I stuck making a copy of the dictionary within the function?
Thanks!
| [
"In a case like this, you'd have to copy the dictionary if you want to change it and keep the changes local to the function.\nThe reason is that, when you pass a dictionary into your second bar function, Python only passes a reference to the dictionary. So when you modify it inside the function, you're modifying the same object that exists outside the function. In the first bar function, on the other hand, you assign a different object to the name foo inside the function when you write foo = \"bar\". When you do that, the name foo inside the function starts to refer to the string \"bar\" and not to the string \"foo\". You've changed which object the name refers to. But foo inside the function and foo outside the function are different names, so if you change the object labeled by foo inside the function, you don't affect the name foo outside the function. So they refer to different objects.\n",
"Arguments are passed by value, but in this case the value is a reference to an object. This is also called call by sharing. So yes, if you need a new dictionary, you will have to create one in the function.\n",
"I would like to add that \ntemp_dict = CONSTdict \n\ndid not help, as this simply copies the pointer. \nInstead, I needed:\ntemp_dict = dict(CONSTdict)\n\nCheers,\nShaun\n"
] | [
3,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0002951112_python.txt |
Q:
Python, MySQL and Daemon Problem in Ubuntu 10.04
I have a script which runs inside a while loop and monitors a mysql data source every 2 seconds. If I run it from the command line, it runs and works fine. But if I attach it to a daemon, it throws an error saying "MySQL has gone" or something similar. I checked and found MySQL up and running. I could even execute queries from other tools.
I badly need help. I am running Ubuntu 10.04.
Error Code
Traceback (most recent call last):
File "/home/masnun/Desktop/daemon/daemon.py", line 67, in <module>
main()
File "/home/masnun/Desktop/daemon/daemon.py", line 35, in main
USERPROG()
File "/home/masnun/Desktop/daemon/mymain.py", line 19, in main
cursor.execute("select * from hits_logs where id > '" + str(last) + "'")
File "/usr/lib/pymodules/python2.6/MySQLdb/cursors.py", line 166, in execute
self.errorhandler(self, exc, value)
File "/usr/lib/pymodules/python2.6/MySQLdb/connections.py", line 35, in defau$
raise errorclass, errorvalue
_mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away')
File: daemon
#! /bin/sh
# example python daemon starter script
# based on skeleton from Debian GNU/Linux
# cliechti@gmx.net
# place the daemon scripts in a folder accessible by root. /usr/local/sbin is a good idea
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/home/masnun/Desktop/daemon/daemon.py
NAME=pydaemon
DESC="Example daemon"
test -f $DAEMON || exit 0
set -e
case "$1" in
start)
echo -n "Starting $DESC: "
start-stop-daemon --start --quiet --pidfile /var/run/$NAME.pid \
--exec $DAEMON
echo "$NAME."
;;
stop)
echo -n "Stopping $DESC: "
start-stop-daemon --stop --quiet --pidfile /var/run/$NAME.pid
# \ --exec $DAEMON
echo "$NAME."
;;
#reload)
#
# If the daemon can reload its config files on the fly
# for example by sending it SIGHUP, do it here.
#
# If the daemon responds to changes in its config file
# directly anyway, make this a do-nothing entry.
#
# echo "Reloading $DESC configuration files."
# start-stop-daemon --stop --signal 1 --quiet --pidfile \
# /var/run/$NAME.pid --exec $DAEMON
#;;
restart|force-reload)
#
# If the "reload" option is implemented, move the "force-reload"
# option to the "reload" entry above. If not, "force-reload" is
# just the same as "restart".
#
echo -n "Restarting $DESC: "
start-stop-daemon --stop --quiet --pidfile \
/var/run/$NAME.pid
# --exec $DAEMON
sleep 1
start-stop-daemon --start --quiet --pidfile \
/var/run/$NAME.pid --exec $DAEMON
echo "$NAME."
;;
*)
N=/etc/init.d/$NAME
# echo "Usage: $N {start|stop|restart|reload|force-reload}" >&2
echo "Usage: $N {start|stop|restart|force-reload}" >&2
exit 1
;;
esac
exit 0
File: daemon.py
#!/usr/bin/env python
###########################################################################
# configure these paths:
LOGFILE = '/var/log/pydaemon.log'
PIDFILE = '/var/run/pydaemon.pid'
# and let USERPROG be the main function of your project
import mymain
USERPROG = mymain.main
###########################################################################
import sys, os
class Log:
"""file like for writes with auto flush after each write
to ensure that everything is logged, even during an
unexpected exit."""
def __init__(self, f):
self.f = f
def write(self, s):
self.f.write(s)
self.f.flush()
def main():
#change to data directory if needed
os.chdir("/home/masnun/Desktop/daemon")
#redirect outputs to a logfile
sys.stdout = sys.stderr = Log(open(LOGFILE, 'a+'))
#ensure the that the daemon runs a normal user
os.setegid(1000) #set group first "pydaemon"
os.seteuid(1000) #set user "pydaemon"
#start the user program here:
USERPROG()
if __name__ == "__main__":
# do the UNIX double-fork magic, see Stevens' "Advanced
# Programming in the UNIX Environment" for details (ISBN 0201563177)
try:
pid = os.fork()
if pid > 0:
# exit first parent
sys.exit(0)
except OSError, e:
print >>sys.stderr, "fork #1 failed: %d (%s)" % (e.errno, e.strerror)
sys.exit(1)
# decouple from parent environment
os.chdir("/") #don't prevent unmounting....
os.setsid()
os.umask(0)
# do second fork
try:
pid = os.fork()
if pid > 0:
# exit from second parent, print eventual PID before
#print "Daemon PID %d" % pid
open(PIDFILE,'w').write("%d"%pid)
sys.exit(0)
except OSError, e:
print >>sys.stderr, "fork #2 failed: %d (%s)" % (e.errno, e.strerror)
sys.exit(1)
# start the daemon main loop
main()
File: mymain.py
import MySQLdb
from ProxyChecker import ProxyChecker
from time import sleep
config = {"host":"localhost","username":"root","password":"masnun","database":"webtracc_db1"}
connection = MySQLdb.connect(config['host'],config['username'],config['password'],config['database'])
cursor = connection.cursor()
def main():
while True:
f = open("last","r")
last = f.read().strip()
f.close()
if last == '': last = 0;
last = int(last)
cursor.execute("select * from hits_logs where id > '" + str(last) + "'")
row = cursor.fetchall()
for x in row:
pc = ProxyChecker( x[2] )
pc.start()
last = x[0]
f = open("last","w")
f.write(str(last))
f.close()
sleep(2)
if __name__ == "__main__":
main()
File:
ProxyChecker.py
#! /usr/bin/env python
from threading import Thread
from CheckProxy import CheckProxy
class ProxyChecker(Thread):
def __init__(self, data):
self.data = data
Thread.__init__(self)
def run(self):
pc = CheckProxy()
pc.check(self.data)
File: CheckProxy.py
#! /usr/bin/env python
import MySQLdb
import socket
class CheckProxy:
def __init__(self):
self.config = {"host":"localhost","username":"root","password":"masnun","database":"webtracc_db1"}
self.portList = [80]
def check(self,host):
connection = MySQLdb.connect(self.config['host'],self.config['username'],self.config['password'],self.config['database'])
cursor = connection.cursor()
proxy = False
try:
for x in self.portList:
sock = socket.socket()
sock.connect((host,x))
#print "connected to: " + str (x)
sock.close()
cursor.execute("select count(*) from list_entries where list='1' and ip='"+ host + "' ")
data = cursor.fetchall()
#print data[0][0]
if data[0][0] < 1:
print 'ok'
proxy = True
except socket.error, e:
#print e
if proxy:
cursor.execute("insert into list_entries (ip,list) values ('"+ host.strip() +"','1') ")
else:
cursor.execute("insert into list_entries (ip,list) values ('"+ host.strip() +"','2') ")
if __name__ == "__main__":
print "Direct access not allowed!"
A:
I haven't worked with Python, but it almost seems you are making a database connection, then forking. The other way around should work: fork at will, then connect in remaining process, possibly in your mymain.py:main() method.
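That matches the posted code: in `mymain.py` the MySQLdb connection is opened at module import time — before `daemon.py` forks twice — so the daemonized child inherits a connection whose socket state no longer matches what the server expects, which fits the "MySQL server has gone away" error. A minimal sketch of the fork-then-connect structure, using the standard-library `sqlite3` as a stand-in since MySQLdb may not be installed:

```python
import sqlite3

def get_connection():
    # Created on demand, so it runs in the final (daemonized) process,
    # not in the parent that forks and exits.
    return sqlite3.connect(":memory:")

def main():
    connection = get_connection()   # happens after any fork
    cursor = connection.cursor()
    cursor.execute("SELECT 1")
    value = cursor.fetchone()[0]
    connection.close()
    return value
```

The same restructuring applied to `mymain.py` — moving `MySQLdb.connect(...)` inside `main()` — should make the daemonized and command-line runs behave the same.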
| Python, MySQL and Daemon Problem in Ubuntu 10.04 | I have a script which runs inside a while loop and monitors a mysql data source every 2 seconds. If I run it from the command line, it runs and works fine. But if I attach it to a daemon, it throws an error saying "MySQL has gone" or something similar. I checked and found MySQL up and running. I could even execute queries from other tools.
I badly need help. I am running Ubuntu 10.04.
Error Code
Traceback (most recent call last):
File "/home/masnun/Desktop/daemon/daemon.py", line 67, in <module>
main()
File "/home/masnun/Desktop/daemon/daemon.py", line 35, in main
USERPROG()
File "/home/masnun/Desktop/daemon/mymain.py", line 19, in main
cursor.execute("select * from hits_logs where id > '" + str(last) + "'")
File "/usr/lib/pymodules/python2.6/MySQLdb/cursors.py", line 166, in execute
self.errorhandler(self, exc, value)
File "/usr/lib/pymodules/python2.6/MySQLdb/connections.py", line 35, in defau$
raise errorclass, errorvalue
_mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away')
File: daemon
#! /bin/sh
# example python daemon starter script
# based on skeleton from Debian GNU/Linux
# cliechti@gmx.net
# place the daemon scripts in a folder accessible by root. /usr/local/sbin is a good idea
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/home/masnun/Desktop/daemon/daemon.py
NAME=pydaemon
DESC="Example daemon"
test -f $DAEMON || exit 0
set -e
case "$1" in
start)
echo -n "Starting $DESC: "
start-stop-daemon --start --quiet --pidfile /var/run/$NAME.pid \
--exec $DAEMON
echo "$NAME."
;;
stop)
echo -n "Stopping $DESC: "
start-stop-daemon --stop --quiet --pidfile /var/run/$NAME.pid
# \ --exec $DAEMON
echo "$NAME."
;;
#reload)
#
# If the daemon can reload its config files on the fly
# for example by sending it SIGHUP, do it here.
#
# If the daemon responds to changes in its config file
# directly anyway, make this a do-nothing entry.
#
# echo "Reloading $DESC configuration files."
# start-stop-daemon --stop --signal 1 --quiet --pidfile \
# /var/run/$NAME.pid --exec $DAEMON
#;;
restart|force-reload)
#
# If the "reload" option is implemented, move the "force-reload"
# option to the "reload" entry above. If not, "force-reload" is
# just the same as "restart".
#
echo -n "Restarting $DESC: "
start-stop-daemon --stop --quiet --pidfile \
/var/run/$NAME.pid
# --exec $DAEMON
sleep 1
start-stop-daemon --start --quiet --pidfile \
/var/run/$NAME.pid --exec $DAEMON
echo "$NAME."
;;
*)
N=/etc/init.d/$NAME
# echo "Usage: $N {start|stop|restart|reload|force-reload}" >&2
echo "Usage: $N {start|stop|restart|force-reload}" >&2
exit 1
;;
esac
exit 0
File: daemon.py
#!/usr/bin/env python
###########################################################################
# configure these paths:
LOGFILE = '/var/log/pydaemon.log'
PIDFILE = '/var/run/pydaemon.pid'
# and let USERPROG be the main function of your project
import mymain
USERPROG = mymain.main
###########################################################################
import sys, os
class Log:
"""file like for writes with auto flush after each write
to ensure that everything is logged, even during an
unexpected exit."""
def __init__(self, f):
self.f = f
def write(self, s):
self.f.write(s)
self.f.flush()
def main():
#change to data directory if needed
os.chdir("/home/masnun/Desktop/daemon")
#redirect outputs to a logfile
sys.stdout = sys.stderr = Log(open(LOGFILE, 'a+'))
#ensure the that the daemon runs a normal user
os.setegid(1000) #set group first "pydaemon"
os.seteuid(1000) #set user "pydaemon"
#start the user program here:
USERPROG()
if __name__ == "__main__":
# do the UNIX double-fork magic, see Stevens' "Advanced
# Programming in the UNIX Environment" for details (ISBN 0201563177)
try:
pid = os.fork()
if pid > 0:
# exit first parent
sys.exit(0)
except OSError, e:
print >>sys.stderr, "fork #1 failed: %d (%s)" % (e.errno, e.strerror)
sys.exit(1)
# decouple from parent environment
os.chdir("/") #don't prevent unmounting....
os.setsid()
os.umask(0)
# do second fork
try:
pid = os.fork()
if pid > 0:
# exit from second parent, print eventual PID before
#print "Daemon PID %d" % pid
open(PIDFILE,'w').write("%d"%pid)
sys.exit(0)
except OSError, e:
print >>sys.stderr, "fork #2 failed: %d (%s)" % (e.errno, e.strerror)
sys.exit(1)
# start the daemon main loop
main()
File: mymain.py
import MySQLdb
from ProxyChecker import ProxyChecker
from time import sleep
config = {"host":"localhost","username":"root","password":"masnun","database":"webtracc_db1"}
connection = MySQLdb.connect(config['host'],config['username'],config['password'],config['database'])
cursor = connection.cursor()
def main():
while True:
f = open("last","r")
last = f.read().strip()
f.close()
if last == '': last = 0;
last = int(last)
cursor.execute("select * from hits_logs where id > '" + str(last) + "'")
row = cursor.fetchall()
for x in row:
pc = ProxyChecker( x[2] )
pc.start()
last = x[0]
f = open("last","w")
f.write(str(last))
f.close()
sleep(2)
if __name__ == "__main__":
main()
File:
ProxyChecker.py
#! /usr/bin/env python
from threading import Thread
from CheckProxy import CheckProxy
class ProxyChecker(Thread):
def __init__(self, data):
self.data = data
Thread.__init__(self)
def run(self):
pc = CheckProxy()
pc.check(self.data)
File: CheckProxy.py
#! /usr/bin/env python
import MySQLdb
import socket
class CheckProxy:
def __init__(self):
self.config = {"host":"localhost","username":"root","password":"masnun","database":"webtracc_db1"}
self.portList = [80]
def check(self,host):
connection = MySQLdb.connect(self.config['host'],self.config['username'],self.config['password'],self.config['database'])
cursor = connection.cursor()
proxy = False
try:
for x in self.portList:
sock = socket.socket()
sock.connect((host,x))
#print "connected to: " + str (x)
sock.close()
cursor.execute("select count(*) from list_entries where list='1' and ip='"+ host + "' ")
data = cursor.fetchall()
#print data[0][0]
if data[0][0] < 1:
print 'ok'
proxy = True
except socket.error, e:
#print e
if proxy:
cursor.execute("insert into list_entries (ip,list) values ('"+ host.strip() +"','1') ")
else:
cursor.execute("insert into list_entries (ip,list) values ('"+ host.strip() +"','2') ")
if __name__ == "__main__":
print "Direct access not allowed!"
| [
"I haven't worked with Python, but it almost seems you are making a database connection, then forking. The other way around should work: fork at will, then connect in remaining process, possibly in your mymain.py:main() method.\n"
] | [
1
] | [] | [] | [
"daemon",
"mysql",
"python",
"ubuntu"
] | stackoverflow_0002972672_daemon_mysql_python_ubuntu.txt |
Q:
Anyone know a good regex to remove extra whitespace?
Possible Duplicate:
Substitute multiple whitespace with single whitespace in Python
trying to figure out how to write a regex that given the string:
"hi    this  is   a  test"
I can turn it into
"hi this is a test"
where the whitespace is normalized to just one space
any ideas? thanks so much
A:
import re
re.sub("\s+"," ",string)
A:
Does it need to be a regex?
I'd just use
new_string = " ".join(re.split(r'\s+', old_string.strip()))
A:
sed
sed 's/[ ]\{2,\}/ /g'
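Worth noting: `re.sub(r"\s+", " ", s)` keeps a single leading/trailing space if the input had surrounding whitespace. The plain `str.split()` / `join` idiom avoids both the regex and that edge case:

```python
def normalize_whitespace(s):
    # str.split() with no argument splits on runs of any whitespace
    # (spaces, tabs, newlines) and discards leading/trailing whitespace
    return " ".join(s.split())

cleaned = normalize_whitespace("  hi   this\tis \n a  test ")
```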
| Anyone know a good regex to remove extra whitespace? |
Possible Duplicate:
Substitute multiple whitespace with single whitespace in Python
trying to figure out how to write a regex that given the string:
"hi    this  is   a  test"
I can turn it into
"hi this is a test"
where the whitespace is normalized to just one space
any ideas? thanks so much
| [
"import re \nre.sub(\"\\s+\",\" \",string)\n\n",
"Does it need to be a regex?\nI'd just use\nnew_string = \" \".join(re.split(r'\\s+', old_string.strip()))\n\n",
"sed\n sed 's/[ ]\\{2,\\}/ /g'\n\n"
] | [
10,
0,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0002977905_python_regex.txt |
Q:
Call another class's method in Python
I'm trying to create a class that holds a reference to another class's method. I want to be able to call the method. It is basically a way to do callbacks.
My code works until I try to access a class var. When I run the code below, I get an error. What am I doing wrong?
Brian
import logging
class yRunMethod(object):
"""
container that allows method to be called when method run is called
"""
def __init__(self, method, *args):
"""
init
"""
self.logger = logging.getLogger('yRunMethod')
self.logger.debug('method <%s> and args <%s>'%(method, args))
self.method = method
self.args = args
def run(self):
"""
runs the method
"""
self.logger.debug('running with <%s> and <%s>'%(self.method,self.args))
#if have args sent to function
if self.args:
self.method.im_func(self.method, *self.args)
else:
self.method.im_func(self.method)
if __name__ == "__main__":
import sys
#create test class
class testClass(object):
"""
test class
"""
def __init__(self):
"""
init
"""
self.var = 'some var'
def doSomthing(self):
"""
"""
print 'do somthing called'
print 'self.var <%s>'%self.var
#test yRunMethod
met1 = testClass().doSomthing
run1 = yRunMethod(met1)
run1.run()
A:
I think you're making this WAY too hard on yourself (which is easy to do ;-). Methods of classes and instances are first-class objects in Python. You can pass them around and call them like anything else. Digging into a method's instance variables is something that should almost never be done. A simple example to accomplish your goal is:
class Wrapper (object):
def __init__(self, meth, *args):
self.meth = meth
self.args = args
def runit(self):
self.meth(*self.args)
class Test (object):
def __init__(self, var):
self.var = var
def sayHello(self):
print "Hello! My name is: %s" % self.var
t = Test('FooBar')
w = Wrapper( t.sayHello )
w.runit()
A:
Why not use this:
self.method(*self.args)
instead of this:
if self.args:
self.method.im_func(self.method, *self.args)
else:
self.method.im_func(self.method)
A:
In your code you were calling self.method.im_func(self.method) - you shouldn't have been passing the method as argument but the object from which that method came. I.e. should have been self.method.im_func(self.method.im_self, *self.args)
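For reference, a bound method already carries its instance, so it can be stored and called directly without touching `im_func` at all. In Python 3 the `im_self` / `im_func` attributes are spelled `__self__` / `__func__`; the class and names below are invented for illustration:

```python
class Test(object):
    def __init__(self, var):
        self.var = var

    def say_hello(self):
        return "Hello! My name is: %s" % self.var

t = Test("FooBar")
m = t.say_hello          # bound method: it already remembers its instance

result = m()             # no instance needs to be passed back in
instance = m.__self__    # Python 3 spelling of im_self
function = m.__func__    # Python 3 spelling of im_func
```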
| Call another class's method in Python | I'm trying to create a class that holds a reference to another class's method. I want to be able to call the method. It is basically a way to do callbacks.
My code works until I try to access a class var. When I run the code below, I get an error. What am I doing wrong?
Brian
import logging
class yRunMethod(object):
"""
container that allows method to be called when method run is called
"""
def __init__(self, method, *args):
"""
init
"""
self.logger = logging.getLogger('yRunMethod')
self.logger.debug('method <%s> and args <%s>'%(method, args))
self.method = method
self.args = args
def run(self):
"""
runs the method
"""
self.logger.debug('running with <%s> and <%s>'%(self.method,self.args))
#if have args sent to function
if self.args:
self.method.im_func(self.method, *self.args)
else:
self.method.im_func(self.method)
if __name__ == "__main__":
import sys
#create test class
class testClass(object):
"""
test class
"""
def __init__(self):
"""
init
"""
self.var = 'some var'
def doSomthing(self):
"""
"""
print 'do somthing called'
print 'self.var <%s>'%self.var
#test yRunMethod
met1 = testClass().doSomthing
run1 = yRunMethod(met1)
run1.run()
| [
"I think you're making this WAY too hard on yourself (which is easy to do ;-). Methods of classes and instances are first-class objects in Python. You can pass them around and call them like anything else. Digging into a method's instance variables is something that should almost never be done. A simple example to accomplish your goal is:\nclass Wrapper (object):\n def __init__(self, meth, *args):\n self.meth = meth\n self.args = args\n\n def runit(self):\n self.meth(*self.args)\n\nclass Test (object):\n def __init__(self, var):\n self.var = var\n def sayHello(self):\n print \"Hello! My name is: %s\" % self.var\n\nt = Test('FooBar')\nw = Wrapper( t.sayHello )\n\nw.runit()\n\n",
"Why not use this:\n self.method(*self.args)\n\ninstead of this:\n if self.args:\n self.method.im_func(self.method, *self.args)\n\n else:\n self.method.im_func(self.method)\n\n",
"In your code you were calling self.method.im_func(self.method) - you shouldn't have been passing the method as argument but the object from which that method came. I.e. should have been self.method.im_func(self.method.im_self, *self.args)\n"
] | [
11,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0002976840_python.txt |
Q:
Live UI update of model changes when the model contains plain data structures only
Please share your opinions on the following topic:
I have a model - a structure of the objects. Such as:
Event, containing participants
Current task
Assignee of each task
The model is going to be pickled on the server and transferred over the network to the client GUI application. Because of the pickle I'd want to keep the model classes as simple as possible (i.e. just simple classes with data fields only, no any single method inside). As a result I can't create signals (such as OnUpdate) on the model objects.
Sometimes the server will send model updates. Such as "Task text changed". When the update is applied, I need it to be reflected in the UI. In the case of task text change it shall be change of the label in the UI. I'd want only related controls to be changed, so the whole UI update is not the best solution.
On the other hand, I would not like to traverse the whole model in search of changes - that would be too resource-intensive.
So, what is the best pattern of notifying the UI about changes on the plain data structures?
A:
You may be operating under a mis-conception: pickles don't include the code from classes that are pickled. You can add methods to your data structures, and it will not increase the size of your pickles.
This is a common misunderstanding about pickles. They don't include code.
A:
You may add a flag, e.g. self.isOnClientSide, and check it in every update handler, so that you can use different logic in either case.
def onUpdateFoo(self):
if self.isOnClientSide:
return self.onUpdateFooOnClient()
else:
return self.onUpdateFooOnServer()
Change this flag accordingly right after un-pickling.
| Live UI update of model changes when the model contains plain data structures only | Please consult me with your opinions on the following topic:
I have a model - a structure of the objects. Such as:
Event, containing participants
Current task
Assignee of each task
The model is going to be pickled on the server and transferred over the network to the client GUI application. Because of the pickle I'd want to keep the model classes as simple as possible (i.e. just simple classes with data fields only, no any single method inside). As a result I can't create signals (such as OnUpdate) on the model objects.
Sometimes the server will send model updates. Such as "Task text changed". When the update is applied, I need it to be reflected in the UI. In the case of task text change it shall be change of the label in the UI. I'd want only related controls to be changed, so the whole UI update is not the best solution.
On the other hand, I would not like to traverse the whole model in search of changes - that would be too resource-intensive.
So, what is the best pattern of notifying the UI about changes on the plain data structures?
| [
"You may be operating under a mis-conception: pickles don't include the code from classes that are pickled. You can add methods to your data structures, and it will not increase the size of your pickles.\nThis is a common misunderstanding about pickles. They don't include code.\n",
"You may add a flag, e.g. self.isOnClientSide, and check it in every update handler, so that you can use different logic in either case.\ndef onUpdateFoo(self):\n if self.isOnClientSide:\n return self.onUpdateFooOnClient()\n else:\n return self.onUpdateFooOnServer()\n\nChange this flag accordingly right after un-pickling.\n"
] | [
0,
0
] | [] | [] | [
"architecture",
"design_patterns",
"python"
] | stackoverflow_0001426272_architecture_design_patterns_python.txt |
Q:
add xml node to xml file with python
I wonder if it is better to add an element by opening the file, finding the right place, and adding a string which contains the XML code.
Or should I use some library... I have no idea. I know how I can get nodes and properties from XML through, for example, lxml, but what's the simplest and the best way to add them?
A:
You could use lxml.etree.Element to make the xml node(s), and use append or insert to attach them into xml document:
import lxml.etree

data='''\
<root>
<node1>
<node2 a1="x1"> ... </node2>
<node2 a1="x2"> ... </node2>
<node2 a1="x1"> ... </node2>
</node1>
</root>
'''
doc = lxml.etree.XML(data)
e=doc.find('node1')
child = lxml.etree.Element("node3",attrib={'a1':'x3'})
child.text='...'
e.insert(1,child)
print(lxml.etree.tostring(doc))
yields:
<root>
<node1>
<node2 a1="x1"> ... </node2>
<node3 a1="x3">...</node3><node2 a1="x2"> ... </node2>
<node2 a1="x1"> ... </node2>
</node1>
</root>
A:
The safest way to add nodes to an XML document is to load it into a DOM, add the nodes programmatically and write it out again. There are several Python XML libraries. I have used minidom, but I have no reason to recommend it specifically over the others.
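As an illustration of that load/modify/write approach, here is a minimal minidom version of the same insertion task (element and attribute names are reused from the lxml example above):

```python
from xml.dom import minidom

data = """<root>
  <node1>
    <node2 a1="x1"> ... </node2>
  </node1>
</root>"""

# Load the document into a DOM.
doc = minidom.parseString(data)
parent = doc.getElementsByTagName("node1")[0]

# Build the new element programmatically, then attach it.
child = doc.createElement("node3")
child.setAttribute("a1", "x3")
child.appendChild(doc.createTextNode("..."))
parent.appendChild(child)

# Write it out again (write doc.toxml() back to the file in a real script).
print(doc.toxml())
```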
| add xml node to xml file with python | I wonder if it is better add an element by opening file, search 'good place' and add string which contains xml code.
Or use some library... i have no idea. I know how can i get nodes and properties from xml through for example lxml but what's the simpliest and the best way to add?
| [
"You could use lxml.etree.Element to make the xml node(s), and use append or insert to attach them into xml document:\ndata='''\\\n<root>\n<node1>\n <node2 a1=\"x1\"> ... </node2>\n <node2 a1=\"x2\"> ... </node2>\n <node2 a1=\"x1\"> ... </node2>\n</node1>\n</root>\n'''\ndoc = lxml.etree.XML(data)\ne=doc.find('node1')\nchild = lxml.etree.Element(\"node3\",attrib={'a1':'x3'})\nchild.text='...'\ne.insert(1,child)\nprint(lxml.etree.tostring(doc))\n\nyields:\n<root>\n <node1>\n <node2 a1=\"x1\"> ... </node2>\n <node3 a1=\"x3\">...</node3><node2 a1=\"x2\"> ... </node2>\n <node2 a1=\"x1\"> ... </node2>\n </node1>\n </root>\n\n",
"The safest way to add nodes to an XML document is to load it into a DOM, add the nodes programmatically and write it out again. There are several Python XML libraries. I have used minidom, but I have no reason to recommend it specifically over the others. \n"
] | [
4,
1
] | [] | [] | [
"python",
"xml"
] | stackoverflow_0002977779_python_xml.txt |
Q:
Python: Mechanize and BeautifulSoup not working on a shared hosting computer
I am writing a small site decorator to make my local airport site work with standard HTML.
On my local computer, I use Python's mechanize and BeautifulSoup packages to scrape and parse the site contents, and everything seems to work just fine. I have installed these packages via apt-get.
On my shared hosting site (at DreamHost) I have downloaded the .tar.gz files, extracted the packages, renamed the directories (e.g., from BeautifulSoup-3.1.0.tar.gz to BeautifulSoup) and tried to run the command.
I've got a bizarre error with BeautifulSoup; I don't know if it's about an older version of Python on DreamHost, about directory names, or something else.
[sanjose]$ python
Python 2.4.4 (#2, Jan 24 2010, 11:50:13)
[GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from BeautifulSoup import BeautifulSoup
>>> import mechanize
>>> url='http://www.iaa.gov.il/Rashat/he-IL/Airports/BenGurion/informationForTravelers/OnlineFlights.aspx?flightsType=arr'
>>> br=mechanize.Browser()
>>> br.addheaders = [('User-agent', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)')]
>>> r=br.open(url)
>>> html=r.read()
>>> type(html)
<type 'str'>
I've done this to show that the input is indeed a string. Now let's run the command that works in my local computer:
>>> soup = BeautifulSoup.BeautifulSoup(html)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/home/adamatan/matan.name/natbug/BeautifulSoup/BeautifulSoup.py", line 1493, in __init__
BeautifulStoneSoup.__init__(self, *args, **kwargs)
File "/home/adamatan/matan.name/natbug/BeautifulSoup/BeautifulSoup.py", line 1224, in __init__
self._feed(isHTML=isHTML)
File "/home/adamatan/matan.name/natbug/BeautifulSoup/BeautifulSoup.py", line 1257, in _feed
self.builder.feed(markup)
File "/usr/lib/python2.4/HTMLParser.py", line 108, in feed
self.goahead(0)
File "/usr/lib/python2.4/HTMLParser.py", line 148, in goahead
k = self.parse_starttag(i)
File "/usr/lib/python2.4/HTMLParser.py", line 268, in parse_starttag
self.handle_starttag(tag, attrs)
File "/home/adamatan/matan.name/natbug/BeautifulSoup/BeautifulSoup.py", line 1011, in handle_starttag
self.soup.unknown_starttag(name, attrs)
File "/home/adamatan/matan.name/natbug/BeautifulSoup/BeautifulSoup.py", line 1408, in unknown_starttag
tag = Tag(self, name, attrs, self.currentTag, self.previous)
File "/home/adamatan/matan.name/natbug/BeautifulSoup/BeautifulSoup.py", line 525, in __init__
self.attrs = map(convert, self.attrs)
File "/home/adamatan/matan.name/natbug/BeautifulSoup/BeautifulSoup.py", line 524, in <lambda>
val))
File "/usr/lib/python2.4/sre.py", line 142, in sub
return _compile(pattern, 0).sub(repl, string, count)
TypeError: expected string or buffer
Any ideas?
Adam
A:
You're using BeautifulSoup version 3.1.0 which is for Python 3.x. Use a 3.0 version of BeautifulSoup for Python 2.x.
| Python: Mechanize and BeautifulSoup not working on a shared hosting computer | I am writing a small site decorator to make my local airport site work with standard HTML.
On my local computer, I use Python's mechanize and BeautifulSoup packages to scrape and parse the site contents, and everything seems to work just fine. I have installed these packages via apt-get.
On my shared hosting site (at DreamHost) I have downloaded the .tar.gz files, extracted the packages, renamed the directories (e.g., from BeautifulSoup-3.1.0.tar.gz to BeautifulSoup) and tried to run the command.
I've got a bizarre error with BeautifulSoup; I don't know if it's about an older version of Python on Dreamhost, about directory names, or other reason.
[sanjose]$ python
Python 2.4.4 (#2, Jan 24 2010, 11:50:13)
[GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from BeautifulSoup import BeautifulSoup
>>> import mechanize
>>> url='http://www.iaa.gov.il/Rashat/he-IL/Airports/BenGurion/informationForTravelers/OnlineFlights.aspx?flightsType=arr'
>>> br=mechanize.Browser()
>>> br.addheaders = [('User-agent', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)')]
>>> r=br.open(url)
>>> html=r.read()
>>> type(html)
<type 'str'>
I've done this to show that the input is indeed a string. Now let's run the command that works in my local computer:
>>> soup = BeautifulSoup.BeautifulSoup(html)
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/home/adamatan/matan.name/natbug/BeautifulSoup/BeautifulSoup.py", line 1493, in __init__
BeautifulStoneSoup.__init__(self, *args, **kwargs)
File "/home/adamatan/matan.name/natbug/BeautifulSoup/BeautifulSoup.py", line 1224, in __init__
self._feed(isHTML=isHTML)
File "/home/adamatan/matan.name/natbug/BeautifulSoup/BeautifulSoup.py", line 1257, in _feed
self.builder.feed(markup)
File "/usr/lib/python2.4/HTMLParser.py", line 108, in feed
self.goahead(0)
File "/usr/lib/python2.4/HTMLParser.py", line 148, in goahead
k = self.parse_starttag(i)
File "/usr/lib/python2.4/HTMLParser.py", line 268, in parse_starttag
self.handle_starttag(tag, attrs)
File "/home/adamatan/matan.name/natbug/BeautifulSoup/BeautifulSoup.py", line 1011, in handle_starttag
self.soup.unknown_starttag(name, attrs)
File "/home/adamatan/matan.name/natbug/BeautifulSoup/BeautifulSoup.py", line 1408, in unknown_starttag
tag = Tag(self, name, attrs, self.currentTag, self.previous)
File "/home/adamatan/matan.name/natbug/BeautifulSoup/BeautifulSoup.py", line 525, in __init__
self.attrs = map(convert, self.attrs)
File "/home/adamatan/matan.name/natbug/BeautifulSoup/BeautifulSoup.py", line 524, in <lambda>
val))
File "/usr/lib/python2.4/sre.py", line 142, in sub
return _compile(pattern, 0).sub(repl, string, count)
TypeError: expected string or buffer
Any ideas?
Adam
| [
"You're using BeautifulSoup version 3.1.0 which is for Python 3.x. Use a 3.0 version of BeautifulSoup for Python 2.x.\n"
] | [
3
] | [] | [] | [
"beautifulsoup",
"python",
"shared_hosting"
] | stackoverflow_0002978205_beautifulsoup_python_shared_hosting.txt |
Q:
Checkstyle for Python
Is there an application similar to Java's Checkstyle for Python?
By which I mean, a tool that analyzes Python code and can be run as part of continuous integration (e.g. CruiseControl or Hudson). After analyzing, it should produce an online accessible report which outlines any problems found in the code.
Thank you,
A:
There are actually a lot of tools:
as others have said
pylint : very very strict (imho too much), yet customizable
pep-8 : very good
pychecker
pyflakes: extremely fast, perfect when used in emacs with flymake.
To format your code according to pep8 I can suggest you PythonTidy
A:
You may look at pylint to see if it fits you needs.
A:
Besides pylint:
the pep-8 tool
A:
Besides pylint and pep-8: pychecker
| Checkstyle for Python | Is there an application similar to Java's Checkstyle for Python?
By which I mean, a tool that analyzes Python code and can be run as part of continuous integration (e.g. CruiseControl or Hudson). After analyzing, it should produce an online accessible report which outlines any problems found in the code.
Thank you,
| [
"There are actually a lot of tools:\nas other have said\n\npylint : very very strict (imho too much), yet customizable\npep-8 : very good\npychecker\npyflakes: extremely fast, perfect when used in emacs with flymake.\n\nTo format your code according to pep8 I can suggest you PythonTidy\n",
"You may look at pylint to see if it fits you needs.\n",
"Besides pylint:\nthe pep-8 tool\n",
"Besides pylint and pep-8: pychecker\n"
] | [
18,
6,
6,
1
] | [] | [] | [
"coding_style",
"java",
"python"
] | stackoverflow_0002977866_coding_style_java_python.txt |
Q:
Is there a more efficient way to organize random outcomes by size in Python?
I'm making a program that, in part, rolls four dice and subtracts the lowest dice from the outcome. The code I'm using is
die1 = random.randrange(6) + 1
die2 = random.randrange(6) + 1
die3 = random.randrange(6) + 1
die4 = random.randrange(6) + 1

if die1 <= die2 and die1 <= die3 and die1 <= die4:
    drop = die1
elif die2 <= die1 and die2 <= die3 and die2 <= die4:
    drop = die2
elif die3 <= die1 and die3 <= die2 and die3 <= die4:
    drop = die3
else:
    drop = die4

cha = die1 + die2 + die3 + die4 - drop
That's the best I could come up with from my so-far limited coding ability. Is there a better way to make it organize the four dice in order of size, then add together the three highest while ignoring the leftover? Or is the code I'm using the best way to do it?
A:
Put the dice in a list, sort the list using sorted and remove the smallest element using a slice:
>>> import random
>>> dice = [random.randint(1, 6) for x in range(4)]
>>> sum(sorted(dice)[1:])
13
Or an alternative that is simpler and will also be faster if you have lots of dice: use min to find the minimum die and subtract it from the sum of them all:
>>> sum(dice) - min(dice)
13
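Putting the answer together for the original four-dice use case, a small helper (the function name and default arguments are my own):

```python
import random

def roll_stat(num_dice=4, sides=6, keep=3):
    """Roll `num_dice` dice and return the sum of the `keep` highest."""
    rolls = [random.randint(1, sides) for _ in range(num_dice)]
    return sum(sorted(rolls, reverse=True)[:keep])

cha = roll_stat()  # 4d6, drop the lowest
```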
| Is there a more efficient way to organize random outcomes by size in Python? | I'm making a program that, in part, rolls four dice and subtracts the lowest dice from the outcome. The code I'm using is
die1 = random.randrange(6) + 1
die2 = random.randrange(6) + 1
die3 = random.randrange(6) + 1
die4 = random.randrange(6) + 1
if die1 <= die2 and die1 <= die3 and die1 <= die4:
drop = die1
elif die2 <= die1 and die2 <= die3 and die2 <= die4:
drop = die2
elif die3 <= die1 and die3 <= die2 and die3 <= die4:
drop = die3
else:
drop = die4
cha = die1 + die2 + die3 + die4 - drop
That's the best I could come up with from my so-far limited coding ability. Is there a better way to make it organize the four dice in order of size, then add together the three highest while ignoring the leftover? Or is the code I'm using the best way to do it?
| [
"Put the dice in a list, sort the list using sorted and remove the smallest element using a slice:\n>>> import random\n>>> dice = [random.randint(1, 6) for x in range(4)]\n>>> sum(sorted(dice)[1:])\n13\n\nOr an alternative that is simpler and will also be faster if you have lots of dice: use min to find the minimum die and subtract it from the sum of them all:\n>>> sum(dice) - min(dice)\n13\n\n"
] | [
8
] | [] | [] | [
"python",
"random"
] | stackoverflow_0002978317_python_random.txt |
Q:
Does Django Have a Way to Auto-Sort Model Fields?
So basically, I've got a rather large Django project going. It's a private web portal that allows users to manage various phone-related tasks.
Several pages of the portal provide a listing of Model objects to users, and list all of their attributes in a HTML table (so that users can visually look through a list of these items).
The problem I'm having is: I cannot find a Django-ish or pythonic way to handle the sorting of these Model objects by field name. As an example of what I'm talking about, here is one of my views which lists all Partyline Model objects:
def list_partylines(request):
    """
    List all `Partyline`s that we own.
    """
    # Figure out which sort term to use.
    sort_field = request.REQUEST.get('sortby', 'did').strip()
    if sort_field.startswith('-'):
        search = sort_field[1:]
        sort_toggle = ''
    else:
        search = sort_field
        sort_toggle = '-'

    # Check to see if the sort term is valid.
    if not (search in Partyline._meta.get_all_field_names()):
        sort_field = 'did'

    if is_user_type(request.user, ['admin']):
        partylines = Partyline.objects.all().order_by(sort_field)
    else:
        partylines = get_my_partylines(request.user, sort_field)

    variables = RequestContext(request, {
        'partylines': partylines,
        'sort_toggle': sort_toggle
    })
    return render_to_response('portal/partylines/list.html', variables)
The sorting code basically allows users to specify a /url/?sortby=model_field_name parameter which will then return a sorted listing of objects whenever users click on the HTML table name displayed on the page.
Since I have various views in various apps which all show a listing of Model objects, and require sorting, I'm wondering if there is a generic way to do this sorting so that I don't have to?
I'm sorry if this question is a bit unclear, I'm struggling to find the right way to phrase this question.
Thanks.
A:
The way that I'd look at doing this is through a custom QuerySet. In your model, you can define the class QuerySet and add your sorting there. In order to maintain all the logic in the model object, I'd also move the contents of get_my_partylines into the QuerySet, too.
## This class is used to replicate QuerySet methods into a manager.
## This way: Partyline.objects.for_user(foo) works the same as
## Partyline.objects.filter(date=today).for_user(foo)
class CustomQuerySetManager(models.Manager):
    def get_query_set(self):
        return self.model.QuerySet(self.model)

    def __getattr__(self, attr, *args):
        try:
            return getattr(self.__class__, attr, *args)
        except AttributeError:
            return getattr(self.get_query_set(), attr, *args)


class Partyline(models.Model):
    ## Define fields, blah blah.
    objects = CustomQuerySetManager()

    class QuerySet(QuerySet):
        def sort_for_request(self, request):
            sort_field = request.REQUEST.get('sortby', 'did').strip()
            reverse_order = False
            if sort_field.startswith('-'):
                search = sort_field[1:]
            else:
                search = sort_field
                reverse_order = True

            # Check to see if the sort term is valid.
            if not (search in Partyline._meta.get_all_field_names()):
                sort_field = 'did'

            partylines = self.all().order_by(sort_field)
            if reverse_order:
                # QuerySet.reverse() returns a new QuerySet; it does not
                # modify the existing one in place.
                partylines = partylines.reverse()
            return partylines

        def for_user(self, user):
            if is_user_type(user, ['admin']):
                return self.all()
            else:
                ## Code from get_my_partylines goes here.
                return self.all()  ## Temporary.
views.py:
def list_partylines(request):
    """
    List all `Partyline`s that we own.
    """
    partylines = Partyline.objects.for_user(request.user).sort_for_request(request)
A:
There's a great example of how this is done in a generic way in django.contrib.admin.views.main.ChangeList although that does much more than sorting you can browse it's code for some hints and ideas. You may also want to look at django.contrib.admin.options.ModelAdmin the changelist method in particular to get more context.
| Does Django Have a Way to Auto-Sort Model Fields? | So basically, I've got a rather large Django project going. It's a private web portal that allows users to manage various phone-related tasks.
Several pages of the portal provide a listing of Model objects to users, and list all of their attributes in a HTML table (so that users can visually look through a list of these items).
The problem I'm having is: I cannot find a Django-ish or pythonic way to handle the sorting of these Model objects by field name. As an example of what I'm talking about, here is one of my views which lists all Partyline Model objects:
def list_partylines(request):
"""
List all `Partyline`s that we own.
"""
# Figure out which sort term to use.
sort_field = request.REQUEST.get('sortby', 'did').strip()
if sort_field.startswith('-'):
search = sort_field[1:]
sort_toggle = ''
else:
search = sort_field
sort_toggle = '-'
# Check to see if the sort term is valid.
if not (search in Partyline._meta.get_all_field_names()):
sort_field = 'did'
if is_user_type(request.user, ['admin']):
partylines = Partyline.objects.all().order_by(sort_field)
else:
partylines = get_my_partylines(request.user, sort_field)
variables = RequestContext(request, {
'partylines': partylines,
'sort_toggle': sort_toggle
})
return render_to_response('portal/partylines/list.html', variables)
The sorting code basically allows users to specify a /url/?sortby=model_field_name parameter which will then return a sorted listing of objects whenever users click on the HTML table name displayed on the page.
Since I have various views in various apps which all show a listing of Model objects, and require sorting, I'm wondering if there is a generic way to do this sorting so that I don't have to?
I'm sorry if this question is a bit unclear, I'm struggling to find the right way to phrase this question.
Thanks.
| [
"The way that I'd look at doing this is through a custom QuerySet. In your model, you can define the class QuerySet and add your sorting there. In order to maintain all the logic in the model object, I'd also move the contents of get_my_partylines into the QuerySet, too.\n## This class is used to replicate QuerySet methods into a manager.\n## This way: Partyline.objects.for_user(foo) works the same as\n## Partyline.objects.filter(date=today).for_user(foo)\nclass CustomQuerySetManager(models.Manager):\n def get_query_set(self):\n return self.model.QuerySet(self.model)\n def __getattr__(self, attr, *args):\n try:\n return getattr(self.__class__, attr, *args)\n except AttributeError:\n return getattr(self.get_query_set(), attr, *args)\n\n\nclass Partyline(models.Model):\n ## Define fields, blah blah.\n objects = CustomQuerySetManager()\n class QuerySet(QuerySet):\n def sort_for_request(self, request):\n sort_field = request.REQUEST.get('sortby', 'did').strip()\n reverse_order = False\n if sort_field.startswith('-'):\n search = sort_field[1:]\n else:\n search = sort_field\n reverse_order = True\n\n # Check to see if the sort term is valid.\n if not (search in Partyline._meta.get_all_field_names()):\n sort_field = 'did'\n\n partylines = self.all().order_by(sort_field)\n if reverse_order:\n partylines.reverse()\n return partylines\n def for_user(self, user):\n if is_user_type(request.user, ['admin']):\n return self.all()\n else:\n ## Code from get_my_partylines goes here.\n return self.all() ## Temporary.\n\nviews.py:\ndef list_partylines(request):\n \"\"\"\n List all `Partyline`s that we own.\n \"\"\"\n partylines = Partylines.objects.for_user(request.user).sort_for_request(request)\n\n",
"There's a great example of how this is done in a generic way in django.contrib.admin.views.main.ChangeList although that does much more than sorting you can browse it's code for some hints and ideas. You may also want to look at django.contrib.admin.options.ModelAdmin the changelist method in particular to get more context.\n"
] | [
2,
0
] | [] | [] | [
"django",
"django_models",
"django_templates",
"python"
] | stackoverflow_0002977845_django_django_models_django_templates_python.txt |
Q:
django : ImportError No module named myapp.views.hometest
I have Fedora 11, with Django set up using mod_wsgi 2.5 and Apache 2.2. I can run "python manage.py runserver" locally and it works fine, but I get this error when I test from a remote browser.
Thanks for any suggestion and help!
A:
I just had this problem. It went away when I added sys.path.append('/path/to/project') to my .wsgi file.
A:
Is the application containing your Django project in your $PYTHONPATH (when Python is invoked in a server context)? For example, if your Django project is at /home/wwwuser/web/myproj, then /home/wwwuser/web should be in your $PYTHONPATH. You should set this in the script that loads the project when invoked from the web server.
A:
Just a guess, but unless you've explicitly made sure that your app is on PYTHONPATH, you should be specifying views in urls.py as myproject.myapp.views.functionname.
Otherwise:
check if you're setting PYTHONPATH, or what to. Your project directory should be in there.
if you enable the django admin (by uncommenting the lines that are there by default in urls.py), does that work?
A:
All required env variables should be set in django.wsgi. You could compare the env declared in django.wsgi with the env when executing ./manage.py runserver and make sure they are the same.
Furthermore, if there is another myapp package which could be found through PYTHONPATH before /usr/local/django/myapp, ImportError may be raised.
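For reference, a minimal sketch of a .wsgi file that sets both sys.path and the settings module explicitly - all paths and module names below are placeholders, not taken from the question:

```python
import os
import sys

# Make the project's parent directory importable so `myproject.myapp` resolves,
# and the project directory itself so bare `myapp` imports also work.
sys.path.append('/home/user/web')            # placeholder path
sys.path.append('/home/user/web/myproject')  # placeholder path

os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'

# The WSGI handler import differs by Django version; this is the classic form:
# import django.core.handlers.wsgi
# application = django.core.handlers.wsgi.WSGIHandler()
```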
| django : ImportError No module named myapp.views.hometest | I have fecora 11, set django with mod_wsgi2.5 and apache2.2. And I can run "python manage.py runserver" at local. It works fine. I got error when i test from remote browser.
Thanks for any suggestion and help!
| [
"I just had this problem. It went away when I added sys.path.append('/path/to/project') to my .wsgi file.\n",
"Is the application containing your Django project in your $PYTHONPATH (when Python is invoked in a server context)? For example, if your Django project is at /home/wwwuser/web/myproj, then /home/wwwuser/web should be in your $PYTHONPATH. You should set this in the script that loads the project when invoked from the web server.\n",
"Just a guess, but unless you've explicitly made sure that your app is on PYTHONPATH, you should be specifying views in urls.py as myproject.myapp.views.functionname.\nOtherwise:\n\ncheck if you're setting PYTHONPATH, or what to. Your project directory should be in there.\nif you enable the django admin (by uncommenting the lines that are there by default in urls.py), does that work?\n\n",
"\nAll required env variables should be set in django.wsgi. You could compare the env declared in django.wsgi and the env of executing ./manage runserver and make sure they are same.\nFurthermore, if there is another myapp package which could be found through PYTHONPATH before /usr/local/django/myapp, ImportError may be raised.\n\n"
] | [
4,
2,
1,
1
] | [] | [] | [
"django",
"mod_wsgi",
"python"
] | stackoverflow_0001359449_django_mod_wsgi_python.txt |
Q:
High-concurrency counters without sharding
This question concerns two implementations of counters which are intended to scale without sharding (with a tradeoff that they might under-count in some situations):
http://appengine-cookbook.appspot.com/recipe/high-concurrency-counters-without-sharding/ (the code in the comments)
http://blog.notdot.net/2010/04/High-concurrency-counters-without-sharding
My questions:
With respect to #1: Running memcache.decr() in a deferred, transactional task seems like overkill. If memcache.decr() is done outside the transaction, I think the worst-case is the transaction fails and we miss counting whatever we decremented. Am I overlooking some other problem that could occur by doing this?
What are the significant tradeoffs between the two implementations?
Here are the tradeoffs I see:
#2 does not require datastore transactions.
To get the counter's value, #2 requires a datastore fetch while with #1 typically only needs to do a memcache.get() and memcache.add().
When incrementing a counter, both call memcache.incr(). Periodically, #2 adds a task to the task queue while #1 transactionally performs a datastore get and put. #1 also always performs memcache.add() (to test whether it is time to persist the counter to the datastore).
Conclusions
(without actually running any performance tests):
#1 should typically be faster at retrieving a counter (#1 memcache vs #2 datastore). Though #1 has to perform an extra memcache.add() too.
However, #2 should be faster when updating counters (#1 datastore get+put vs #2 enqueue a task).
On the other hand, with #1 you have to be a bit more careful with the update interval since the task queue quota is almost 100x smaller than either the datastore or memcache APIs.
A:
Going to datastore is likely to be more expensive than going through memcache. Else memcache wouldn't be all that useful in the first place :-)
I'd recommend the first option.
If you have a reasonable request rate, you can actually implement it even simpler:
1) update the value in memcache
2) if the returned updated value is evenly divisible by N
2.1) add N to the datastore counter
2.2) decrement memcache by N
This assumes you can set a long enough timeout on your memcache to live between successive events, but if events are so sparse that your memcache times out, chances are you wouldn't need a "high concurrency" counter :-)
For larger sites, relying on a single memcache to do things like count total page hits may get you in trouble; in that case, you really do want to shard your memcaches, and update a random counter instance; the aggregation of counters will happen by the database update.
When using memcache, though, beware that some client APIs will assume that a one second timeout means the value isn't there. If the TCP SYN packet to the memcache instance gets dropped, this means that your request will erroneously assume the data isn't there. (Similar problems can happen with UDP for memcache)
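A rough, illustrative sketch of that flush-every-N scheme, with plain dicts standing in for memcache and the datastore (all names are invented for the example; a real implementation would use memcache.incr() and a transactional datastore update):

```python
FLUSH_EVERY = 10  # "N" in the steps above

memcache = {}    # stand-in for the memcache value
datastore = {}   # stand-in for the persistent datastore counter

def incr_counter(name):
    # 1) update the value in memcache
    memcache[name] = memcache.get(name, 0) + 1
    # 2) if the returned updated value is evenly divisible by N
    if memcache[name] % FLUSH_EVERY == 0:
        # 2.1) add N to the datastore counter
        datastore[name] = datastore.get(name, 0) + FLUSH_EVERY
        # 2.2) decrement memcache by N
        memcache[name] -= FLUSH_EVERY

for _ in range(25):
    incr_counter("hits")
# datastore["hits"] == 20, memcache["hits"] == 5 (5 increments not yet flushed)
```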
| High-concurrency counters without sharding | This question concerns two implementations of counters which are intended to scale without sharding (with a tradeoff that they might under-count in some situations):
http://appengine-cookbook.appspot.com/recipe/high-concurrency-counters-without-sharding/ (the code in the comments)
http://blog.notdot.net/2010/04/High-concurrency-counters-without-sharding
My questions:
With respect to #1: Running memcache.decr() in a deferred, transactional task seems like overkill. If memcache.decr() is done outside the transaction, I think the worst-case is the transaction fails and we miss counting whatever we decremented. Am I overlooking some other problem that could occur by doing this?
What are the significiant tradeoffs between the two implementations?
Here are the tradeoffs I see:
2 does not require datastore transactions.
To get the counter's value, #2 requires a datastore fetch while with #1 typically only needs to do a memcache.get() and memcache.add().
When incrementing a counter, both call memcache.incr(). Periodically, #2 adds a task to the task queue while #1 transactionally performs a datastore get and put. #1 also always performs memcache.add() (to test whether it is time to persist the counter to the datastore).
Conclusions
(without actually running any performance tests):
1 should typically be faster at retrieving a counter (#1 memcache vs #2 datastore). Though #1 has to perform an extra memcache.add() too.
However, #2 should be faster when updating counters (#1 datastore get+put vs #2 enqueue a task).
On the other hand, with #1 you have to be a bit more careful with the update interval since the task queue quota is almost 100x smaller than either the datastore or memcahce APIs.
| [
"Going to datastore is likely to be more expensive than going through memcache. Else memcache wouldn't be all that useful in the first place :-)\nI'd recommend the first option.\nIf you have a reasonable request rate, you can actually implement it even simpler:\n1) update the value in memcache\n2) if the returned updated value is evenly divisible by N\n2.1) add N to the datastore counter\n2.2) decrement memcache by N\n\nThis assumes you can set a long enough timeout on your memcache to live between successive events, but if events are so sparse that your memcache times out, chances are you wouldn't need a \"high concurrency\" counter :-)\nFor larger sites, relying on a single memcache to do things like count total page hits may get you in trouble; in that case, you really do want to shard your memcaches, and update a random counter instance; the aggregation of counters will happen by the database update.\nWhen using memcache, though, beware that some client APIs will assume that a one second timeout means the value isn't there. If the TCP SYN packet to the memcache instance gets dropped, this means that your request will erroneously assume the data isn't there. (Similar problems can happen with UDP for memcache)\n"
] | [
1
] | [
"Memcache gets flushed, you lose your counter. OUCH. Using a mysql database or a NOSQL solution will resolve that problem with a possible performance hit. (Redis, Tokyotyrant, MongoDB etc...) may not have that performance hit.\nKeep in mind, you may want to do 2 actions:\n\nkeep a memcache counter just for the high performance reasons.\nkeep a log, and then get more accurate metrics from that.\n\n"
] | [
-2
] | [
"counter",
"google_app_engine",
"python"
] | stackoverflow_0002769934_counter_google_app_engine_python.txt |
Q:
Pass parameter one time, but use more times
I'm trying to do this:
commands = { 'py': 'python %s', 'md': 'markdown "%s" > "%s.html"; gnome-open "%s.html"', }
commands['md'] % 'file.md'
But as you can see, commands['md'] uses the parameter 3 times, while commands['py'] uses it just once. How can I repeat the parameter without changing the last line (i.e., passing the parameter only one time)?
A:
Note: The accepted answer, while it does work for both older and newer versions of Python, is discouraged in newer versions of Python.
Since str.format() is quite new, a lot of Python code still uses the % operator. However, because this old style of formatting will eventually be removed from the language, str.format() should generally be used.
For this reason if you're using Python 2.6 or newer you should use str.format instead of the old % operator:
>>> commands = {
... 'py': 'python {0}',
... 'md': 'markdown "{0}" > "{0}.html"; gnome-open "{0}.html"',
... }
>>> commands['md'].format('file.md')
'markdown "file.md" > "file.md.html"; gnome-open "file.md.html"'
A:
If you are not using 2.6 you can mod the string with a dictionary instead:
commands = { 'py': 'python %(file)s', 'md': 'markdown "%(file)s" > "%(file)s.html"; gnome-open "%(file)s.html"', }
commands['md'] % { 'file': 'file.md' }
The %()s syntax works with any of the normal % formatter types and accepts the usual other options: http://docs.python.org/library/stdtypes.html#string-formatting-operations
A:
If you're not using 2.6 or want to use those %s symbols here's another way:
>>> commands = {'py': 'python %s',
... 'md': 'markdown "%s" > "%s.html"; gnome-open "%s.html"'
... }
>>> commands['md'] % tuple(['file.md'] * 3)
'markdown "file.md" > "file.md.html"; gnome-open "file.md.html"'
| Pass parameter one time, but use more times | I'm trying to do this:
commands = { 'py': 'python %s', 'md': 'markdown "%s" > "%s.html"; gnome-open "%s.html"', }
commands['md'] % 'file.md'
But as you see, commands['md'] uses the parameter 3 times, while commands['py'] uses it just once. How can I repeat the parameter without changing the last line (so, passing the parameter only one time)?
| [
"Note: The accepted answer, while it does work for both older and newer versions of Python, is discouraged in newer versions of Python.\n\nSince str.format() is quite new, a lot of Python code still uses the % operator. However, because this old style of formatting will eventually be removed from the language, str.format() should generally be used.\n\nFor this reason if you're using Python 2.6 or newer you should use str.format instead of the old % operator:\n>>> commands = {\n... 'py': 'python {0}',\n... 'md': 'markdown \"{0}\" > \"{0}.html\"; gnome-open \"{0}.html\"',\n... }\n>>> commands['md'].format('file.md')\n'markdown \"file.md\" > \"file.md.html\"; gnome-open \"file.md.html\"'\n\n",
"If you are not using 2.6 you can mod the string with a dictionary instead:\ncommands = { 'py': 'python %(file)s', 'md': 'markdown \"%(file)s\" > \"%(file)s.html\"; gnome-open \"%(file)s.html\"', }\n\ncommands['md'] % { 'file': 'file.md' }\n\nThe %()s syntax works with any of the normal % formatter types and accepts the usual other options: http://docs.python.org/library/stdtypes.html#string-formatting-operations\n",
"If you're not using 2.6 or want to use those %s symbols here's another way:\n>>> commands = {'py': 'python %s',\n... 'md': 'markdown \"%s\" > \"%s.html\"; gnome-open \"%s.html\"'\n... }\n>>> commands['md'] % tuple(['file.md'] * 3)\n\n'markdown \"file.md\" > \"file.md.html\"; gnome-open \"file.md.html\"'\n"
] | [
11,
3,
1
] | [] | [] | [
"parameters",
"python",
"repeat",
"string_formatting"
] | stackoverflow_0002978362_parameters_python_repeat_string_formatting.txt |
Q:
Which should I use and why? Does it matter? SafeUnicode or django.utils.safestring.mark_safe()?
Suppose I've got a custom form label with some HTML on it like so:
SafeUnicode('<span class="superscript">™</span>')
Why would Django 1.2 have a function mark_safe if this exist? What are the differences if any?
Thanks for the help!
A:
mark_safe is a factory function which encapsulates a bit of type-checking logic in order to return, as appropriate, either a SafeUnicode or a SafeString (or possibly some other subclass of SafeData should you have defined any such subclasses). The source is easily short enough to quote...:
def mark_safe(s):
    """
    Explicitly mark a string as safe for (HTML) output purposes. The returned
    object can be used everywhere a string or unicode object is appropriate.

    Can be called multiple times on a single string.
    """
    if isinstance(s, SafeData):
        return s
    if isinstance(s, str) or (isinstance(s, Promise) and s._delegate_str):
        return SafeString(s)
    if isinstance(s, (unicode, Promise)):
        return SafeUnicode(s)
    return SafeString(str(s))
Just using SafeUnicode(s) instead of mark_safe(s) will be minutely faster, but could get you in trouble if you're potentially dealing with a type and value that don't happily support being passed to the SafeUnicode initializer (e.g., a byte string with non-ascii codes, a non-string, a Promise with a string delegate, ...). If you are 100% certain that you know what you're doing, nothing stops you from going for the nanoseconds-saving approach;-).
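The "Can be called multiple times" guarantee follows directly from the first isinstance check. As a rough, self-contained illustration of that factory pattern (a toy stand-in invented here, not Django's actual SafeData hierarchy):

```python
class SafeData(str):
    """Toy marker class: a str subclass recording 'already marked safe'."""
    pass

def mark_safe(s):
    # Already marked: return the same object, so repeated calls are no-ops.
    if isinstance(s, SafeData):
        return s
    return SafeData(s)

once = mark_safe("<b>tm</b>")
twice = mark_safe(once)
print(isinstance(once, SafeData), twice is once)  # True True
```

The idempotence is the practical reason to prefer the factory over calling the subclass constructor directly.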
By the way, some questions about open-source code (no matter how well documented, and Django's docs are really impressive) are often best answered by first having a look at the code (and then asking if the code's too complex or subtle to follow with assurance).
| Which should I use and why? Does it matter? SafeUnicode or django.utils.safestring.mark_safe()? | Suppose I've got a custom form label with some HTML on it like so:
SafeUnicode('<span class="superscript">™</span>')
Why would Django 1.2 have a function mark_safe if this exist? What are the differences if any?
Thanks for the help!
| [
"mark_safe is a factory function which encapsulate a bit of type-checking logic in order to return, as appropriate, either a SafeUnicode or a SafeString (or possibly some other subclass of SafeData should you have defined any such subclasses). The source is easily short enough to quote...:\n89 def mark_safe(s):\n90 \"\"\"\n91 Explicitly mark a string as safe for (HTML) output purposes. The returned\n92 object can be used everywhere a string or unicode object is appropriate.\n93 \n94 Can be called multiple times on a single string.\n95 \"\"\"\n96 if isinstance(s, SafeData):\n97 return s\n98 if isinstance(s, str) or (isinstance(s, Promise) and s._delegate_str):\n99 return SafeString(s)\n100 if isinstance(s, (unicode, Promise)):\n101 return SafeUnicode(s)\n102 return SafeString(str(s))\n\nJust using SafeUnicode(s) instead of make_safe(s) will be minutely faster, but could get you in trouble if you're potentially dealing with a type and value that don't happily support being passed to the SafeUnicode initializer (e.g., a byte string with non-ascii codes, a non-string, a Promise with a string delegate, ...). If you are 100% certain that you know what you're doing, nothing stops you from going for the nanoseconds-saving approach;-).\nBy the way, some questions about open-source code (no matter how well documented, and Django's docs are really impressive) are often best answered by first having a look at the code (and then asking if the code's too complex or subtle to follow with assurance).\n"
] | [
5
] | [] | [] | [
"django",
"python",
"string"
] | stackoverflow_0002978530_django_python_string.txt |
Q:
Get the inputs from Excel and use those inputs in python script
How to get the inputs from excel and use those inputs in python.
A:
Take a look at xlrd
This is the best reference I found for learning how to use it: http://www.dev-explorer.com/articles/excel-spreadsheets-and-python
A:
Not sure if this is exactly what you're talking about, but:
If you have a very simple excel file (i.e. basically just one table filled with string-values, nothing fancy), and all you want to do is basic processing, then I'd suggest just converting it to a csv (comma-separated value file). This can be done by "saving as..." in excel and selecting csv.
This is just a file with the same data as the excel, except represented by lines separated with commas:
cell A:1, cell A:2, cell A:3
cell B:1, cell B:2, cell b:3
This is then very easy to parse using standard python functions (i.e., readlines to get each line of the file, then it's just a list that you can split on ",").
This is of course only helpful in some situations, like when you get a log from a program and want to quickly run a python script which handles it.
Note: As was pointed out in the comments, splitting the string on "," is actually not very good, since you run into all sorts of problems. Better to use the csv module (which another answer here teaches how to use).
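For reference, reading such a file with the csv module (rather than splitting on ",") looks roughly like this; io.StringIO stands in for a real open file here:

```python
import csv
import io

# Stand-in for a file saved from Excel as CSV; in practice you would
# pass open("data.csv", newline="") instead of this StringIO object.
data = io.StringIO('cell A:1,cell A:2,cell A:3\ncell B:1,"cell, B:2",cell b:3\n')

rows = list(csv.reader(data))
print(rows[0])  # ['cell A:1', 'cell A:2', 'cell A:3']
print(rows[1])  # the quoted comma survives: ['cell B:1', 'cell, B:2', 'cell b:3']
```

The second row shows why naive splitting fails: a comma inside a quoted field is data, not a delimiter.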
A:
import win32com.client

Excel = win32com.client.Dispatch("Excel.Application")
Excel.Workbooks.Open(file_path)  # file_path: full path to the workbook
Cells = Excel.ActiveWorkbook.ActiveSheet.Cells
Cells(row, column).Value = input_value
output = Cells(row, column).Value
A:
If you can save as a csv file with headers:
Attrib1, Attrib2, Attrib3
value1.1, value1.2, value1.3
value2,1,...
Then I would highly recommend looking at the built-in csv module
With that you can do things like:
csvFile = csv.DictReader(open("csvFile.csv", "r"))
for row in csvFile:
print row['Attrib1'], row['Attrib2']
| Get the inputs from Excel and use those inputs in python script | How to get the inputs from excel and use those inputs in python.
| [
"Take a look at xlrd\nThis is the best reference I found for learning how to use it: http://www.dev-explorer.com/articles/excel-spreadsheets-and-python\n",
"Not sure if this is exactly what you're talking about, but:\nIf you have a very simple excel file (i.e. basically just one table filled with string-values, nothing fancy), and all you want to do is basic processing, then I'd suggest just converting it to a csv (comma-separated value file). This can be done by \"saving as...\" in excel and selecting csv.\nThis is just a file with the same data as the excel, except represented by lines separated with commas:\ncell A:1, cell A:2, cell A:3\ncell B:1, cell B:2, cell b:3\nThis is then very easy to parse using standard python functions (i.e., readlines to get each line of the file, then it's just a list that you can split on \",\").\nThis is of course only helpful in some situations, like when you get a log from a program and want to quickly run a python script which handles it.\nNote: As was pointed out in the comments, splitting the string on \",\" is actually not very good, since you run into all sorts of problems. Better to use the csv module (which another answer here teaches how to use).\n",
"import win32com.client\n\nExcel = win32com.client.Dispatch(\"Excel.Application\")\nExcel.Workbooks.Open(file_path)\nCells = Excel.ActiveWorkbook.ActiveSheet.Cells\nCells(row, column).Value = input_value\noutput = Cells(row, column).Value\n\n",
"If you can save as a csv file with headers: \nAttrib1, Attrib2, Attrib3\nvalue1.1, value1.2, value1.3\nvalue2,1,...\n\nThen I would highly recommend looking at the built-in csv module \nWith that you can do things like:\ncsvFile = csv.DictReader(open(\"csvFile.csv\", \"r\"))\nfor row in csvFile:\n print row['Attrib1'], row['Attrib2']\n\n"
] | [
6,
2,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0001459788_python.txt |
Q:
Using Elixir, how can I get the table object of a self-referential relationship to perform inserts on?
I'm using Elixir with SQLite and I'd like to perform multiple inserts as per the docs:
http://www.sqlalchemy.org/docs/05/sqlexpression.html#executing-multiple-statements
However, my ManyToMany relationship is self-referential and I can't figure out where to get the insert() object from. Can anyone help?
A:
It might be easier if you just stick with SQLAlchemy's built-in Declarative style instead of using Elixir, as much of what Elixir does is now doable there. Then you can follow the example here: Many to Many
Then look very closely at the code where a post is added and then keywords related to that post are added. You get multiple inserts done for you into the association table - the one that maintains the many-to-many relationship:
>>> post.keywords.append(Keyword('wendy'))
>>> post.keywords.append(Keyword('firstpost'))
| Using Elixir, how can I get the table object of a self-referential relationship to perform inserts on? | I'm using Elixir with SQLite and I'd like to perform multiple inserts as per the docs:
http://www.sqlalchemy.org/docs/05/sqlexpression.html#executing-multiple-statements
However, my ManyToMany relationship is self-referential and I can't figure out where to get the insert() object from. Can anyone help?
| [
"It might be easy if you just stick with SQL Alchemy's built in Declarative style instead of using Elixir as much of what it does is now doable in there. Then you can follow the example here: Many to Many\nThen look very closely at the code where a post is added and then keywords related to that post are added. You get multiple inserts done for you into the relator table - the one that maintains the many to many relationship:\n>>> post.keywords.append(Keyword('wendy'))\n>>> post.keywords.append(Keyword('firstpost'))\n\n"
] | [
0
] | [] | [] | [
"python",
"python_elixir",
"sqlalchemy"
] | stackoverflow_0002978797_python_python_elixir_sqlalchemy.txt |
Q:
Parsing complicated query parameters
My Python server receives jobs that contain a list of the items to act against, rather like a search query term; an example input:
(Customer:24 OR Customer:25 OR (Group:NW NOT Customer:26))
So when a job is submitted, I have to parse this recipient pattern and resolve all those customers that match, and create the job with that input.
To complicate matters, customers can join and leave groups at any time, and the job should be updated live when this happens.
So as groups change membership, I have to notify all currently-running jobs about it (and they do their thing).
What is the best way to parse, apply, and store (in my RDBMS) this kind of list of constraints?
Parsing: eval(), a hand-written FSM eating characters, yacc/bison, or something else?
Applying: how would you store these constraints, and evaluate them?
Storing: in a database, a row per term with an evaluation-order and a NOT/AND/OR op field, or as a blob?
A:
I suggest pyparsing (http://pyparsing.wikispaces.com/) which lets you describe a grammar neatly and gives you a tree filled with data. Then, hopefully, your syntax is close enough to SQL so that you can trivially form a "where" clause from the parsing results.
You may pickle and store the parsed tree, or the unparsed requests, or ready-made SQL clauses. This depends on how often will you fetch and reuse them, and whether you need to inspect the database by other means and see the queries. I see no point in storing the queries in a non-blob form unless you want to run interesting selects against them — and if you do, you probably need an XML database or something else that supports trees easily.
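Whichever parser you choose, the result is typically a small tree that you then evaluate against each customer. Here is a hypothetical sketch of that "applying" step, using plain nested tuples as the parsed form; the tuple shape, field names, and the reading of "A NOT B" as "A and not B" are all assumptions made for illustration:

```python
# An already-parsed form of: (Customer:24 OR Customer:25 OR (Group:NW NOT Customer:26))
query = ('OR',
         ('TERM', 'customer', 24),
         ('TERM', 'customer', 25),
         ('NOT', ('TERM', 'group', 'NW'), ('TERM', 'customer', 26)))

def matches(node, record):
    # record maps field names to lists of values for one customer
    op = node[0]
    if op == 'TERM':
        _, field, value = node
        return value in record.get(field, ())
    if op == 'OR':
        return any(matches(child, record) for child in node[1:])
    if op == 'AND':
        return all(matches(child, record) for child in node[1:])
    if op == 'NOT':  # assumed meaning: left AND NOT right
        return matches(node[1], record) and not matches(node[2], record)
    raise ValueError('unknown operator: %r' % (op,))

print(matches(query, {'customer': [27], 'group': ['NW']}))  # True
print(matches(query, {'customer': [26], 'group': ['NW']}))  # False
```

Re-evaluating the same tree whenever group membership changes covers the "live update" requirement, and the tuple form pickles cleanly for storage.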
A:
Consider using SQL instead of inventing yet another mini language:
(
cust.id = 24
or cust.id = 25
or (cust.id = cust_group.cust_id and cust_group.id = 'NW' and cust.id != 26)
) // or something similar
SQL injection worries? You'd need to parse it (not too difficult if your expressions are suitably limited) and check it for plausibility whatever language it was written in.
| Parsing complicated query parameters | My Python server receives jobs that contain a list of the items to act against, rather like a search query term; an example input:
(Customer:24 OR Customer:25 OR (Group:NW NOT Customer:26))
So when a job is submitted, I have to parse this recipient pattern and resolve all those customers that match, and create the job with that input.
To complicate matters, customers can join and leave groups at any time, and the job should be updated live when this happens.
So as groups change membership, I have to notify all currently-running jobs about it (and they do their thing).
What is the best way to parse, apply, and store (in my RDBMS) this kind of list of constraints?
Parsing: eval(), a hand-written FSM eating characters, yacc/bison, or something else?
Applying: how would you store these constraints, and evaluate them?
Storing: in a database, a row per term with an evaluation-order and a NOT/AND/OR op field, or as a blob?
| [
"I suggest pyparsing (http://pyparsing.wikispaces.com/) which lets you describe a grammar neatly and gives you a tree filled with data. Then, hopefully, your syntax is close enough to SQL so that you can trivially form a \"where\" clause from the parsing results.\nYou may pickle and store the parsed tree, or the unparsed requests, or ready-made SQL clauses. This depends on how often will you fetch and reuse them, and whether you need to inspect the database by other means and see the queries. I see no point in storing the queries in a non-blob form unless you want to run interesting selects against them — and if you do, you probably need an XML database or something else that supports trees easily.\n",
"Consider using SQL instead of inventing yet another mini language:\n(\ncust.id = 24\nor cust.id = 25\nor (cust.id = cust_group.cust_id and cust_group.id = 'NW' and cust.id != 26)\n) // or something similar\n\nSQL injection worries? You'd need to parse it (not too difficult if your expressions are suitably limited) and check it for plausibility whatever language it was written in.\n"
] | [
1,
0
] | [] | [] | [
"database",
"parsing",
"python"
] | stackoverflow_0002918828_database_parsing_python.txt |
Q:
Proper way to define "remaining time off" for a Django User
I've implemented a UserProfile model (as the Django 1.2 docs say is the proper way to save additional data about a User) which has a 'remaining_vacation_hours' field.
In our current system, when a user fills out a Time Off Request, the remaining hours available should be checked to see that they have enough vacation to use, or otherwise be warned that they are asking for more than they have.
Of course, vacation hours are annually replenished, so it would be appropriate for the system to check if the user would have additional vacation accrued for the dates they're asking off.
Simply creating a get_remaining_vacation_hours() method would suffice, because other calculations or business logic that might need to be added in the future could be added to or called from that method.
My question is, does it sound correct that the get_remaining_vacation_hours() method be added to the UserProfile model?
It seems to make sense, but I wanted to verify with the community that I wasn't overlooking a better practice for this type of thing. Any ideas or suggestions are welcome.
A:
If you do not want to modify/inherit from the original User model I'd say it's totally ok if the method is added to your UserProfile!
A:
First, I would suggest the solution of making a different method (as you've already suggested), but on a different class and pass the user instance as a parameter. That way, your user isn't characterized by something that might not apply to everyone at that level of abstraction. Consider, for example, potential employees, auditors, partners, contractors, or other types of users.
If your system will never have users as such (or at least very few), then I echo lazerscience's answer. In my experience, even well-seasoned programmers with great design experience tend to vary on what kind of information and behavior should go into the user profile. Trust your gut. If it "feels" like it might not be a source of pain or confusion then go for it. See also: Jeff Atwood's advice in matters such as these.
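To make the trade-off concrete, here is a hypothetical sketch of the second option: the calculation lives on a separate policy object that takes the profile as a parameter. All names and the flat monthly-accrual rule are illustrative assumptions, not Django code:

```python
class UserProfile:
    """Toy stand-in for the Django profile model."""
    def __init__(self, remaining_vacation_hours):
        self.remaining_vacation_hours = remaining_vacation_hours

class VacationPolicy:
    """Business logic kept off the profile; the user's profile is passed in."""
    def __init__(self, monthly_accrual_hours=8):
        self.monthly_accrual_hours = monthly_accrual_hours

    def remaining_hours(self, profile, months_until_request=0):
        # Current balance plus whatever would accrue before the requested dates.
        return (profile.remaining_vacation_hours
                + months_until_request * self.monthly_accrual_hours)

policy = VacationPolicy()
profile = UserProfile(remaining_vacation_hours=16)
print(policy.remaining_hours(profile, months_until_request=2))  # 32
```

Moving the same method body onto UserProfile itself gives you the first option; the calculation is identical either way, only its home changes.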
| Proper way to define "remaining time off" for a Django User | I've implemented a UserProfile model (as the Django 1.2 docs say is the proper way to save additional data about a User) which has a 'remaining_vacation_hours' field.
In our current system, when a user fills out a Time Off Request, the remaining hours available should be checked to see that they have enough vacation to use, or otherwise be warned that they are asking for more than they have.
Of course, vacation hours are annually replenished, so it would be appropriate for the system to check if the user would have additional vacation accrued for the dates they're asking off.
Simply creating a get_remaining_vacation_hours() method would suffice, because other calculations or business logic that might need to be added in the future could be added to or called from that method.
My question is, does it sound correct that the get_remaining_vacation_hours() method be added to the UserProfile model?
It seems to make sense, but I wanted to verify with the community that I wasn't overlooking a better practice for this type of thing. Any ideas or suggestions are welcome.
| [
"If you do not want to modify/inherit from the original User model I'd say it's totally ok if the method is added to your UserProfile!\n",
"First, I would suggest the solution of making a different method (as you've already suggested), but on a different class and pass the user instance as a parameter. That way, your user isn't characterized by something that might not apply to everyone at that level of abstraction. Consider, for example, potential employees, auditors, partners, contractors, or other types of users. \nIf your system will never have users as such (or at least very few), then I echo lazerscience's answer. In my experience, even well-seasoned programmers with great design experience tend to vary on what kind of information and behavior should go into the user profile. Trust your gut. If it \"feels\" like it might not be a source of pain or confusion then go for it. See also: Jeff Atwood's advice in matters such as these.\n"
] | [
2,
2
] | [] | [] | [
"django",
"python",
"user_profile"
] | stackoverflow_0002977824_django_python_user_profile.txt |
Q:
WTForms error:TypeError: formdata should be a multidict-type wrapper
from wtforms import Form, BooleanField, TextField, validators, PasswordField
class LoginForm(Form):
username = TextField('Username', [validators.Length(min=4, max=25)])
password = PasswordField('Password')
When I use LoginForm with webapp (GAE) like this:
def post(self):
form=LoginForm(self.request)
but it shows this error:
Traceback (most recent call last):
File "D:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 513, in __call__
handler.post(*groups)
File "D:\zjm_code\forum_blog_gae\main.py", line 189, in post
form=LoginForm(self.request)
File "D:\zjm_code\forum_blog_gae\wtforms\form.py", line 161, in __call__
return type.__call__(cls, *args, **kwargs)
File "D:\zjm_code\forum_blog_gae\wtforms\form.py", line 214, in __init__
self.process(formdata, obj, **kwargs)
File "D:\zjm_code\forum_blog_gae\wtforms\form.py", line 85, in process
raise TypeError("formdata should be a multidict-type wrapper that supports the 'getlist' method")
TypeError: formdata should be a multidict-type wrapper that supports the 'getlist' method
How can I make this work?
thanks
A:
You are supposed to pass in self.request.form (the actual form fields, not the entire request)
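The error message is literal: WTForms only needs the formdata argument to behave like a multidict, i.e. expose getlist() plus iteration and membership, which a raw request object does not. A minimal hypothetical wrapper illustrating that contract (MultiDictStub is invented for illustration; real frameworks supply an equivalent object for you as the request's form data):

```python
class MultiDictStub:
    """Minimal object satisfying the 'multidict-type wrapper' contract."""
    def __init__(self, data):
        # data maps each field name to the list of submitted values
        self.data = data

    def getlist(self, name):
        return list(self.data.get(name, []))

    def __iter__(self):
        return iter(self.data)

    def __contains__(self, name):
        return name in self.data

formdata = MultiDictStub({'username': ['alice'], 'password': ['s3cret']})
print(formdata.getlist('username'))  # ['alice']
print('password' in formdata)        # True
```

Passing an object shaped like this (rather than the whole request) is exactly what the TypeError is asking for.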
| WTForms error:TypeError: formdata should be a multidict-type wrapper | from wtforms import Form, BooleanField, TextField, validators,PasswordField
class LoginForm(Form):
username = TextField('Username', [validators.Length(min=4, max=25)])
password = PasswordField('Password')
When I use LoginForm with webapp (GAE) like this:
def post(self):
form=LoginForm(self.request)
but it shows this error:
Traceback (most recent call last):
File "D:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 513, in __call__
handler.post(*groups)
File "D:\zjm_code\forum_blog_gae\main.py", line 189, in post
form=LoginForm(self.request)
File "D:\zjm_code\forum_blog_gae\wtforms\form.py", line 161, in __call__
return type.__call__(cls, *args, **kwargs)
File "D:\zjm_code\forum_blog_gae\wtforms\form.py", line 214, in __init__
self.process(formdata, obj, **kwargs)
File "D:\zjm_code\forum_blog_gae\wtforms\form.py", line 85, in process
raise TypeError("formdata should be a multidict-type wrapper that supports the 'getlist' method")
TypeError: formdata should be a multidict-type wrapper that supports the 'getlist' method
How can I make this work?
thanks
| [
"You are supposed to pass in self.request.form (the actual form fields, not the entire request)\n"
] | [
8
] | [] | [] | [
"python",
"wtforms"
] | stackoverflow_0002978986_python_wtforms.txt |
Q:
Why doesn't NetBeans support Python and Django?
I wonder why Sun doesn't support Python and Django in NetBeans.
I ask because I am choosing between learning Ruby/Rails and Python/Django.
Does this mean that I should use Ruby/Rails, since then support comes out of the box?
It seems that other applications favor support for Ruby over Python.
A:
Netbeans now suports python please take look at http://wiki.netbeans.org/Python and DJANGO http://wiki.netbeans.org/Python70Roadmap
A:
Why let one particular IDE determine what programming language you learn?
I use python a lot, and Geany does what I need rather well. Other folks like heavier weight editors. Here's a list to get you started:
What IDE to use for Python?
| Why doesn't NetBeans support Python and Django? | I wonder why Sun doesn't support Python and Django in NetBeans.
I ask because I am choosing between learning Ruby/Rails and Python/Django.
Does this mean that I should use Ruby/Rails, since then support comes out of the box?
It seems that other applications favor support for Ruby over Python.
| [
"Netbeans now suports python please take look at http://wiki.netbeans.org/Python and DJANGO http://wiki.netbeans.org/Python70Roadmap\n",
"Why let one particular IDE determine what programming language you learn?\nI use python a lot, and Geany does what I need rather well. Other folks like heavier weight editors. Here's a list to get you started:\nWhat IDE to use for Python?\n"
] | [
6,
3
] | [] | [] | [
"ide",
"netbeans",
"python",
"ruby"
] | stackoverflow_0002979053_ide_netbeans_python_ruby.txt |
Q:
pyplot: really slow creating heatmaps
I have a loop that executes its body about 200 times. In each iteration, it does a sophisticated calculation, and then, for debugging, I wish to produce a heatmap of an NxM matrix. But generating this heatmap is unbearably slow and significantly slows down an already slow algorithm.
My code is along the lines:
import numpy
import matplotlib.pyplot as plt
for i in range(200):
matrix = complex_calculation()
plt.set_cmap("gray")
plt.imshow(matrix)
plt.savefig("frame{0}.png".format(i))
The matrix, from numpy, is not huge --- 300 x 600 of doubles. Even if I do not save the figure and instead update an on-screen plot, it's even slower.
Surely I must be abusing pyplot. (Matlab can do this, no problem.) How do I speed this up?
A:
Try putting plt.clf() in the loop to clear the current figure:
for i in range(200):
matrix = complex_calculation()
plt.set_cmap("gray")
plt.imshow(matrix)
plt.savefig("frame{0}.png".format(i))
plt.clf()
If you don't do this, the loop slows down as the machine struggles to allocate more and more memory for the figure.
A:
I think this is a bit faster:
import matplotlib.pyplot as plt
from matplotlib import cm
fig = plt.figure()
ax = fig.add_axes([0.1,0.1,0.8,0.8])
for i in range(200):
matrix = complex_calculation()
ax.imshow(matrix, cmap=cm.gray)
fig.savefig("frame{0}.png".format(i))
plt.imshow calls gca which calls gcf which checks to see if there is a figure; if not, it creates one. By manually instantiating the figure first, you do not need to do all that.
| pyplot: really slow creating heatmaps | I have a loop that executes its body about 200 times. In each iteration, it does a sophisticated calculation, and then, for debugging, I wish to produce a heatmap of an NxM matrix. But generating this heatmap is unbearably slow and significantly slows down an already slow algorithm.
My code is along the lines:
import numpy
import matplotlib.pyplot as plt
for i in range(200):
matrix = complex_calculation()
plt.set_cmap("gray")
plt.imshow(matrix)
plt.savefig("frame{0}.png".format(i))
The matrix, from numpy, is not huge --- 300 x 600 of doubles. Even if I do not save the figure and instead update an on-screen plot, it's even slower.
Surely I must be abusing pyplot. (Matlab can do this, no problem.) How do I speed this up?
| [
"Try putting plt.clf() in the loop to clear the current figure:\nfor i in range(200):\n matrix = complex_calculation()\n plt.set_cmap(\"gray\")\n plt.imshow(matrix)\n plt.savefig(\"frame{0}.png\".format(i))\n plt.clf()\n\nIf you don't do this, the loop slows down as the machine struggles to allocate more and more memory for the figure.\n",
"I think this is a bit faster:\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\nfig = plt.figure()\nax = fig.add_axes([0.1,0.1,0.8,0.8])\nfor i in range(200):\n matrix = complex_calculation()\n ax.imshow(matrix, cmap=cm.gray)\n fig.savefig(\"frame{0}.png\".format(i))\n\nplt.imshow calls gca which calls gcf which checks to see if there is a figure; if not, it creates one. By manually instantiating the figure first, you do not need to do all that.\n"
] | [
5,
3
] | [] | [] | [
"matplotlib",
"python"
] | stackoverflow_0002971653_matplotlib_python.txt |
Q:
Is there any thorough, broad documentation of Twisted that is better than the official site?
I've been looking at twisted for a while now. It looks interesting - it seems like a good way to leverage a lot of power when writing servers. Unfortunately, in spite of writing a few web servers using twisted.web (from reading other people's source and an extremely dated O'Reilly book) I've never really felt like I had reached an affinity with twisted... a level of understanding that actually gave me some of the power it seems like it has.
I think I need some good documentation to arrive at a better level of understanding - I simply don't have time to pore over the source, and other threads on SO have mentioned twisted's official documentation, which is patchy at best, absent at worst, and occasionally very out of date.
Is there anything else out there that is more thorough, more forgiving, and more useful, or am I stuck with another classic, boring STFU and RTFM even though TFM is not helpful?
Update
In response to JP Calderone's comment that I'm just having a bitch, to some extent I guess I am, but I think the breadth of the question is valid considering the breadth and value of Twisted and the lack of obvious, thorough documentation. I have a few things in mind that I wanted to investigate, but I've been getting OK results just hacking things together and asking for specifics when a deeper, broader understanding is what I'm looking for is, in my mind, not helpful.
The contrast that immediately springs to mind is Django... I can read over the (very thorough) Django documentation and not necessarily know how to do everything it can do immediately, but I can get a really good overview of how I might do everything I needed to do, and know exactly where to look when the time comes.
A:
I'm going to repeat what some of the answerers here have said (they're all good answers) in the hopes of providing an answer that is somewhat comprehensive.
While the included documentation is spotty in places, the core documentation contains several helpful and brief introductions to the basic concepts in Twisted. Especially, see Using Deferreds, Writing Clients and Writing Servers.
Also, the API documentation - especially the documentation in interface modules - is increasingly thorough and coherent with each subsequent release.
If you're interested in a higher-level description of Twisted's goals and design so you know how to approach some of this other documentation, I co-authored a paper presented at USENIX 2003 with Itamar Turner-Trauring.
Twisted's FAQ is also a bit meandering, but may help you with many stumbling blocks that people hit when working their way through introductory material.
The O'Reilly book about Twisted has some great examples which may further elucidate core concepts like Deferreds and the Reactor.
Jean-Paul Calderone's "Twisted Web In 60 Seconds" tutorials are a good introduction to the somewhat higher-level twisted.web, of course, but you will also see lots of useful patterns repeated throughout which may be useful to you in whatever application you're writing.
I have written a pair of articles on building-blocks used within Twisted, to deal with the filesystem and to load plugins.
Last but certainly not least, Dave Peticolas's modestly titled "Twisted Intro" is a very comprehensive description, with diagrams and anecdotes, on the introductory material that so many people have difficulty with.
Please also note that all new functionality comes with new API (i.e. reference) documentation; we hope that this will make it more reasonable for people with technical writing skills to write documentation without having to struggle through even understanding what the method names mean.
A:
The Twisted Intro by Dave Peticolas is an amazing overview of Twisted from the ground up. It starts simple and then starts getting deeper and deeper while explaining everything along the way.
I've been using Twisted for years and found this intro to fill in all those gaps I was missing and shed light on the whole thing. Definitely worth your time to check it out!
A:
Check Twisted Web in 60 seconds by Jean-Paul Calderone!
But, honestly, the Twisted's official documentation is not perfect but I'll not call it disgusting. There's a lot of valuable info in it.
A:
Take a look at this previous post...
Python twisted: where to start
A:
There's the O'Reilly book Twisted Network Programming Essentials.
I have not read it, but the ToC looks nice enough.
| Is there any thorough, broad documentation of Twisted that is better than the official site? | I've been looking at twisted for a while now. It looks interesting - it seems like a good way to leverage a lot of power when writing servers. Unfortunately, in spite of writing a few web servers using twisted.web (from reading other people's source and an extremely dated O'Reilly book) I've never really felt like I had reached an affinity with twisted... a level of understanding that actually gave me some of the power it seems like it has.
I think I need some good documentation to arrive at a better level of understanding - I simply don't have time to pore over the source, and other threads on SO have mentioned twisted's official documentation, which is patchy at best, absent at worst, and occasionally very out of date.
Is there anything else out there that is more thorough, more forgiving, and more useful, or am I stuck with another classic, boring STFU and RTFM even though TFM is not helpful?
Update
In response to JP Calderone's comment that I'm just having a bitch, to some extent I guess I am, but I think the breadth of the question is valid considering the breadth and value of Twisted and the lack of obvious, thorough documentation. I have a few things in mind that I wanted to investigate, but I've been getting OK results just hacking things together, and asking for specifics when a deeper, broader understanding is what I'm looking for is, in my mind, not helpful.
The contrast that immediately springs to mind is Django... I can read over the (very thorough) Django documentation and not necessarily know how to do everything it can do immediately, but I can get a really good overview of how I might do everything I needed to do, and know exactly where to look when the time comes.
| [
"I'm going to repeat what some of the answerers here have said (they're all good answers) in the hopes of providing an answer that is somewhat comprehensive.\n\nWhile the included documentation is spotty in places, the core documentation contains several helpful and brief introductions to the basic concepts in Twisted. Especially, see Using Deferreds, Writing Clients and Writing Servers.\nAlso, the API documentation - especially the documentation in interface modules - is increasingly thorough and coherent with each subsequent release.\nIf you're interested in a higher-level description of Twisted's goals and design so you know how to approach some of this other documentation, I co-authored a paper presented at USENIX 2003 with Itamar Turner-Trauring.\nTwisted's FAQ is also a bit meandering, but may help you with many stumbling blocks that people hit when working their way through introductory material.\nThe O'Reilly book about Twisted has some great examples which may further elucidate core concepts like Deferreds and the Reactor.\nJean-Paul Calderone's \"Twisted Web In 60 Seconds\" tutorials are a good introduction to the somewhat higher-level twisted.web, of course, but you will also see lots of useful patterns repeated throughout which may be useful to you in whatever application you're writing.\nI have written a pair of articles on building-blocks used within Twisted, to deal with the filesystem and to load plugins.\nLast but certainly not least, Dave Peticolas's modestly titled \"Twisted Intro\" is a very comprehensive description, with diagrams and anecdotes, on the introductory material that so many people have difficulty with.\n\nPlease also note that all new functionality comes with new API (i.e. reference) documentation; we hope that this will make it more reasonable for people with technical writing skills to write documentation without having to struggle through even understanding what the method names mean.\n",
"The Twisted Intro by Dave Peticolas is an amazing overview of Twisted from the ground up. It starts simple and then starts getting deeper and deeper while explaining everything along the way. \nI've been using Twisted for years and found this intro to fill in all those gaps I was missing and shed light on the whole thing. Definitely worth your time to check it out! \n",
"Check Twisted Web in 60 seconds by Jean-Paul Calderone!\nBut, honestly, the Twisted's official documentation is not perfect but I'll not call it disgusting. There's a lot of valuable info in it.\n",
"Take a look at this previous post...\nPython twisted: where to start\n",
"There's the O'Reilly book Twisted Network Programming Essentials.\nI have not read it, but the ToC looks nice enough.\n"
] | [
16,
7,
2,
2,
1
] | [] | [] | [
"python",
"twisted"
] | stackoverflow_0002972703_python_twisted.txt |
Q:
Map vs list comprehension in Python
When should you use map/filter instead of a list comprehension or generator expression?
A:
You might want to take a look at the responses to this question:
Python List Comprehension Vs. Map
Also, here's a relevant essay from Guido, creator and BDFL of Python:
http://www.artima.com/weblogs/viewpost.jsp?thread=98196
Personally, I prefer list comprehensions and generator expressions because their meaning is more obvious when reading the code.
A:
List comprehensions and generator expressions are generally considered more pythonic. When writing python code it is best to use list comprehensions and generator expressions simply because it's the way python programmers tend to do things.
Map and filter both return list objects just like list comprehensions. Generator expressions return a generator. With a generator, computation happens as needed instead of computing and storing the results. This can lead to lower memory usage if the input sizes are large. Also, keep in mind that generators are not indexable. They must be read from sequentially.
Below are some examples of how memory usage would differ when using different methods transforming a sequence of numbers and summing them using list comprehension, generator expressions and map.
k=1000
def transform(input):
return input + 1
"""
1. range(k) allocates a k-element list [0...k-1]
2. Iterate over each element in that list and compute the transform
3. Store the results in a list
4. Pass the list to sum
Memory: Allocates 2 lists of size k
"""
print sum([transform(i) for i in range(k)])
"""
1. Create an xrange object
2. Pass transform and xrange object to map
3. Map returns a list of results [1...k]
4. Pass list to sum
Memory: Creates a constant size object and creates a list of size k
"""
print sum(map(transform, xrange(k)))
"""
1. Create an xrange object
2. Create a generator object
3. Pass generator object to sum
Memory: Allocates 2 objects of constant size
"""
print sum(transform(i) for i in xrange(k))
"""
Create a generator object and operate on it directly
"""
g = (transform(i) for i in xrange(k))
print dir(g)
print g.next()
print g.next()
print g.next()
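Note that the code above is Python 2 (print statements, xrange, g.next()). In Python 3, map() and filter() return lazy iterators rather than lists, so the memory trade-off described in this answer changes: map behaves like a generator expression. A quick Python 3 sketch of the same comparison:

```python
# Python 3 version of the examples above. map() is a lazy iterator here,
# so it no longer builds an intermediate list of size k.
def transform(value):
    return value + 1

k = 1000

total_listcomp = sum([transform(i) for i in range(k)])  # still builds a list
total_map = sum(map(transform, range(k)))               # lazy: no list built
total_genexp = sum(transform(i) for i in range(k))      # lazy: no list built

print(total_listcomp, total_map, total_genexp)  # all three print 500500
```

In Python 3, then, the practical difference between map and a generator expression is mostly readability, not memory.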
| Map vs list comprehension in Python | When should you use map/filter instead of a list comprehension or generator expression?
| [
"You might want to take a look at the responses to this question:\nPython List Comprehension Vs. Map\nAlso, here's a relevant essay from Guido, creator and BDFL of Python:\nhttp://www.artima.com/weblogs/viewpost.jsp?thread=98196\nPersonally, I prefer list comprehensions and generator expressions because their meaning is more obvious when reading the code.\n",
"List comprehensions and generator expressions are generally considered more pythonic. When writing python code it is best to use list comprehensions and generator expressions simply because it's the way python programmers tend to do things.\nMap and filter both return list objects just like list comprehensions. Generator expressions return a generator. With a generator, computation happens as needed instead of computing and storing the results. This can lead to lower memory usage if the input sizes are large. Also, keep in mind that generators are not indexable. They must be read from sequentially.\nBelow are some examples of how memory usage would differ when using different methods transforming a sequence of numbers and summing them using list comprehension, generator expressions and map.\nk=1000\n\ndef transform(input):\n return input + 1\n\n\"\"\"\n 1. range(k) allocates a k element list [0...k]\n 2. Iterate over each element in that list and compute the transform\n 3. Store the results in a list\n 4. Pass the list to sum\n\nMemory: Allocates enough 2 lists of size k\n\"\"\"\nprint sum([transform(i) for i in range(k)])\n\n\"\"\"\n 1. Create an xrange object\n 2. Pass transform and xrange object to map\n 3. Map returns a list of results [1...k+1]\n 4. Pass list to sum\n\nMemory: Creates a constant size object and creates a list of size k\n\"\"\"\nprint sum(map(transform, xrange(k)))\n\n\"\"\"\n 1. Create an xrange object\n 2. Create a generator object\n 3. Pass generator object to sum\n\nMemory: Allocates 2 objects of constant size\n\"\"\"\nprint sum(transform(i) for i in xrange(k))\n\n\"\"\"\nCreate a generator object and operate on it directly\n\"\"\"\ng = (transform(i) for i in xrange(k))\nprint dir(g)\nprint g.next()\nprint g.next()\nprint g.next()\n\n"
] | [
7,
1
] | [] | [] | [
"python"
] | stackoverflow_0002979290_python.txt |
Q:
Unit testing authorization in a Pylons app fails; cookies aren't being correctly set or recorded
I'm having an issue running unit tests for authorization in a Pylons app. It appears as though certain cookies set in the test case may not be correctly written or parsed. Cookies work fine when hitting the app with a browser.
Here is my test case inside a paste-generated TestController:
def test_good_login(self):
r = self.app.post('/dologin', params={'login': self.user['username'], 'password': self.password})
r = r.follow() # Should only be one redirect to root
assert 'http://localhost/' == r.request.url
assert 'Dashboard' in r
This is supposed to test that a login of an existing account forwards the user to the dashboard page. Instead, what happens is that the user is redirected back to the login. The first POST works, sets the user in the session and returns cookies. Although those cookies are sent in the follow request, they don't seem to be correctly parsed.
I start by setting a breakpoint at the beginning of the above method and see what the login response returns:
> nosetests --pdb --pdb-failure -s foo.tests.functional.test_account:TestMainController.test_good_login
Running setup_config() from foo.websetup
> /Users/istevens/dev/foo/foo/tests/functional/test_account.py(33)test_good_login()
-> r = self.app.post('/dologin', params={'login': self.user['username'], 'password': self.password})
(Pdb) n
> /Users/istevens/dev/foo/foo/tests/functional/test_account.py(34)test_good_login()
-> r = r.follow() # Should only be one redirect to root
(Pdb) p r.cookies_set
{'auth_tkt': '"4c898eb72f7ad38551eb11e1936303374bd871934bd871833d19ad8a79000000!"'}
(Pdb) p r.request.environ['REMOTE_USER']
'4bd871833d19ad8a79000000'
(Pdb) p r.headers['Location']
'http://localhost/?__logins=0'
A session appears to be created and a cookie sent back. The browser is redirected to the root, not the login, which also indicates a successful login. If I step past the follow(), I get:
> /Users/istevens/dev/foo/foo/tests/functional/test_account.py(35)test_good_login()
-> assert 'http://localhost/' == r.request.url
(Pdb) p r.request.headers
{'Host': 'localhost:80', 'Cookie': 'auth_tkt=""\\"4c898eb72f7ad38551eb11e1936303374bd871934bd871833d19ad8a79000000!\\"""; '}
(Pdb) p r.request.environ['REMOTE_USER']
*** KeyError: KeyError('REMOTE_USER',)
(Pdb) p r.request.environ['HTTP_COOKIE']
'auth_tkt=""\\"4c898eb72f7ad38551eb11e1936303374bd871934bd871833d19ad8a79000000!\\"""; '
(Pdb) p r.request.cookies
{'auth_tkt': ''}
(Pdb) p r
<302 Found text/html location: http://localhost/login?__logins=1&came_from=http%3A%2F%2Flocalhost%2F body='302 Found...y. '/149>
This indicates to me that the cookie was passed in on the request, although with dubious escaping. The environ appears to be without the session created on the prior request. The cookie has been copied to the environ from the headers, but the cookies in the request seems incorrectly set. Lastly, the user is redirected to the login page, indicating that the user isn't logged in.
Authorization in the app is done via repoze.who and repoze.who.plugins.ldap with repoze.who_friendlyform performing the challenge. I'm using the stock tests.TestController created by paste:
class TestController(TestCase):
def __init__(self, *args, **kwargs):
if pylons.test.pylonsapp:
wsgiapp = pylons.test.pylonsapp
else:
wsgiapp = loadapp('config:%s' % config['__file__'])
self.app = TestApp(wsgiapp)
url._push_object(URLGenerator(config['routes.map'], environ))
TestCase.__init__(self, *args, **kwargs)
That's a webtest.TestApp, by the way.
The encoding of the cookie is done in webtest.TestApp using Cookie:
>>> from Cookie import _quote
>>> _quote('"84533cf9f661f97239208fb844a09a6d4bd8552d4bd8550c3d19ad8339000000!"')
'"\\"84533cf9f661f97239208fb844a09a6d4bd8552d4bd8550c3d19ad8339000000!\\""'
I trust that that's correct.
My guess is that something on the response side is incorrectly parsing the cookie data into cookies in the server-side request. But what? Any ideas?
A:
This issue disappeared after downgrading WebTest from 1.2.1 to 1.2.
A:
The issue continually appeared for me regardless of the version of WebTest. However, after much mucking around I noticed that when the cookie was first set it was using 127.0.0.1 as the REMOTE_ADDR value but on the second request it changed to 0.0.0.0.
If I did the GET request and set the REMOTE_ADDR to 127.0.0.1, all was well!
response = response.goto(url('home'), extra_environ=dict(REMOTE_ADDR='127.0.0.1'))
| Unit testing authorization in a Pylons app fails; cookies aren't being correctly set or recorded | I'm having an issue running unit tests for authorization in a Pylons app. It appears as though certain cookies set in the test case may not be correctly written or parsed. Cookies work fine when hitting the app with a browser.
Here is my test case inside a paste-generated TestController:
def test_good_login(self):
r = self.app.post('/dologin', params={'login': self.user['username'], 'password': self.password})
r = r.follow() # Should only be one redirect to root
assert 'http://localhost/' == r.request.url
assert 'Dashboard' in r
This is supposed to test that a login of an existing account forwards the user to the dashboard page. Instead, what happens is that the user is redirected back to the login. The first POST works, sets the user in the session and returns cookies. Although those cookies are sent in the follow request, they don't seem to be correctly parsed.
I start by setting a breakpoint at the beginning of the above method and see what the login response returns:
> nosetests --pdb --pdb-failure -s foo.tests.functional.test_account:TestMainController.test_good_login
Running setup_config() from foo.websetup
> /Users/istevens/dev/foo/foo/tests/functional/test_account.py(33)test_good_login()
-> r = self.app.post('/dologin', params={'login': self.user['username'], 'password': self.password})
(Pdb) n
> /Users/istevens/dev/foo/foo/tests/functional/test_account.py(34)test_good_login()
-> r = r.follow() # Should only be one redirect to root
(Pdb) p r.cookies_set
{'auth_tkt': '"4c898eb72f7ad38551eb11e1936303374bd871934bd871833d19ad8a79000000!"'}
(Pdb) p r.request.environ['REMOTE_USER']
'4bd871833d19ad8a79000000'
(Pdb) p r.headers['Location']
'http://localhost/?__logins=0'
A session appears to be created and a cookie sent back. The browser is redirected to the root, not the login, which also indicates a successful login. If I step past the follow(), I get:
> /Users/istevens/dev/foo/foo/tests/functional/test_account.py(35)test_good_login()
-> assert 'http://localhost/' == r.request.url
(Pdb) p r.request.headers
{'Host': 'localhost:80', 'Cookie': 'auth_tkt=""\\"4c898eb72f7ad38551eb11e1936303374bd871934bd871833d19ad8a79000000!\\"""; '}
(Pdb) p r.request.environ['REMOTE_USER']
*** KeyError: KeyError('REMOTE_USER',)
(Pdb) p r.request.environ['HTTP_COOKIE']
'auth_tkt=""\\"4c898eb72f7ad38551eb11e1936303374bd871934bd871833d19ad8a79000000!\\"""; '
(Pdb) p r.request.cookies
{'auth_tkt': ''}
(Pdb) p r
<302 Found text/html location: http://localhost/login?__logins=1&came_from=http%3A%2F%2Flocalhost%2F body='302 Found...y. '/149>
This indicates to me that the cookie was passed in on the request, although with dubious escaping. The environ appears to be without the session created on the prior request. The cookie has been copied to the environ from the headers, but the cookies in the request seems incorrectly set. Lastly, the user is redirected to the login page, indicating that the user isn't logged in.
Authorization in the app is done via repoze.who and repoze.who.plugins.ldap with repoze.who_friendlyform performing the challenge. I'm using the stock tests.TestController created by paste:
class TestController(TestCase):
def __init__(self, *args, **kwargs):
if pylons.test.pylonsapp:
wsgiapp = pylons.test.pylonsapp
else:
wsgiapp = loadapp('config:%s' % config['__file__'])
self.app = TestApp(wsgiapp)
url._push_object(URLGenerator(config['routes.map'], environ))
TestCase.__init__(self, *args, **kwargs)
That's a webtest.TestApp, by the way.
The encoding of the cookie is done in webtest.TestApp using Cookie:
>>> from Cookie import _quote
>>> _quote('"84533cf9f661f97239208fb844a09a6d4bd8552d4bd8550c3d19ad8339000000!"')
'"\\"84533cf9f661f97239208fb844a09a6d4bd8552d4bd8550c3d19ad8339000000!\\""'
I trust that that's correct.
My guess is that something on the response side is incorrectly parsing the cookie data into cookies in the server-side request. But what? Any ideas?
| [
"This issue disappeared after downgrading WebTest from 1.2.1 to 1.2.\n",
"The issue continually appeared for me regardless of the version of WebTest. However, after much mucking around I noticed that when the cookie was first set it was using 127.0.0.1 as the REMOTE_ADDR value but on the second request it changed to 0.0.0.0.\nIf I did the get request and set the REMOTE_ADDR to 127.0.0.1 all was well!\nresponse = response.goto(url('home'), extra_environ=dict(REMOTE_ADDR='127.0.0.1'))\n\n"
] | [
5,
2
] | [] | [] | [
"nose",
"nosetests",
"pylons",
"python",
"webtest"
] | stackoverflow_0002731907_nose_nosetests_pylons_python_webtest.txt |
Q:
Easy way to convert dict data to URL parameters
(1)
a={'b':'bbbb','c':'ccc',....}
(2)
self.redirect('/tribes/view?b=' + a['b'] + '&c=' + a['c'])
So I want to get
b=' + a['b'] + '&c=' + a['c'] ...
from dict a.
Is there any easy way to do this?
Thanks
A:
from urllib import urlencode
urlencode({'b':'bbbb', 'c':'ccc'})
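A side note: that import is Python 2. In Python 3, urlencode lives in urllib.parse. A minimal sketch (the /tribes/view path is just the asker's example):

```python
from urllib.parse import urlencode  # Python 3 location (Python 2: urllib.urlencode)

a = {'b': 'bbbb', 'c': 'ccc'}
query = urlencode(a)  # also percent-escapes values that need it
print(query)  # b=bbbb&c=ccc (dicts preserve insertion order in Python 3.7+)
print('/tribes/view?' + query)
```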
| Easy way to convert dict data to URL parameters | (1)
a={'b':'bbbb','c':'ccc',....}
(2)
self.redirect('/tribes/view?b=' + a['b'] + '&c=' + a['c'])
So I want to get
b=' + a['b'] + '&c=' + a['c'] ...
from dict a.
Is there any easy way to do this?
Thanks
| [
"from urllib import urlencode\nurlencode({'b':'bbbb', 'c':'ccc'})\n\n"
] | [
7
] | [] | [] | [
"google_app_engine",
"parameters",
"python",
"url"
] | stackoverflow_0002979586_google_app_engine_parameters_python_url.txt |
Q:
How to make the username and email unique using WTForms
WTForms is a forms validation and rendering library for Python web development,
but I can't find how to make the username and email unique.
Thanks
A:
Read the whole page in your link and you'll find the header "Custom Validators" which shows a quick ... custom validator.
The documentation has more on the topic.
| How to make the username and email unique using WTForms | WTForms is a forms validation and rendering library for Python web development,
but I can't find how to make the username and email unique.
Thanks
| [
"Read the whole page in your link and you'll find the header \"Custom Validators\" which shows a quick ... custom validator.\nThe documentation has more on the topic.\n"
] | [
2
] | [] | [] | [
"google_app_engine",
"python",
"unique",
"web_applications",
"wtforms"
] | stackoverflow_0002979655_google_app_engine_python_unique_web_applications_wtforms.txt |
Q:
ImageChops.duplicate - python
I am trying to use the function ImageChops.duplicate from the PIL module and I get an error I don't understand:
this is the code
import PIL
import Image
import ImageChops
import os
PathDemo4a='C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/demo4a'
PathDemo4b='C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/demo4b'
PathDemo4c='C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/demo4c'
PathBlackBoard='C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/BlackBoard.bmp'
Slides=os.listdir(PathDemo4a)
for slide in Slides:
#BB=Image.open(PathBlackBoard)
BB=ImageChops.duplicate(PathBlackBoard) #BB=BlackBoard
and this is the error;
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
ImageChops.duplicate('c:/1.BMP')
File "C:\Python26\lib\site-packages\PIL\ImageChops.py", line 57, in duplicate
return image.copy()
AttributeError: 'str' object has no attribute 'copy'
Any help would be much appreciated.
Ariel
A:
You need to pass a Image object into the duplicate function rather than a string. Something like:
img = Image.open(PathBlackBoard)
BB = ImageChops.duplicate(img)
A:
I think you should pass an actual image object to duplicate and not a string. So your code will probably become something like this for one image:
path = '...'
img = Image.open(path)
dup = ImageChops.duplicate(img)
| ImageChops.duplicate - python | I am trying to use the function ImageChops.duplicate from the PIL module and I get an error I don't understand:
this is the code
import PIL
import Image
import ImageChops
import os
PathDemo4a='C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/demo4a'
PathDemo4b='C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/demo4b'
PathDemo4c='C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/demo4c'
PathBlackBoard='C:/Documents and Settings/Ariel/My Documents/My Dropbox/lecture/BlackBoard.bmp'
Slides=os.listdir(PathDemo4a)
for slide in Slides:
#BB=Image.open(PathBlackBoard)
BB=ImageChops.duplicate(PathBlackBoard) #BB=BlackBoard
and this is the error;
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
ImageChops.duplicate('c:/1.BMP')
File "C:\Python26\lib\site-packages\PIL\ImageChops.py", line 57, in duplicate
return image.copy()
AttributeError: 'str' object has no attribute 'copy'
Any help would be much appreciated.
Ariel
| [
"You need to pass a Image object into the duplicate function rather than a string. Something like:\nimg = Image.open(PathBlackBoard)\nBB = ImageChops.duplicate(img) \n\n",
"I think you should pass an actual image object to duplicate and not a string. So your code will probably become something like this for one image:\npath = '...'\nimg = Image.open(path)\ndup = ImageChops.duplicate(img)\n\n"
] | [
3,
2
] | [] | [] | [
"python",
"python_imaging_library"
] | stackoverflow_0002979621_python_python_imaging_library.txt |
Q:
Parsing a context-free grammar in Python
What tools are available in Python to assist in parsing a context-free grammar?
Of course it is possible to roll my own, but I am looking for a generic tool that can generate a parser for a given CFG.
A:
I warmly recommend PLY - it's a Lex/Yacc clone in Python that uses the language's introspection facilities in a sophisticated manner to allow for a very natural specification of the grammar. Yacc, if you recall, is the very embodiment of CFGs in an understandable DSL that defines how one parses them.
I used it to implement my parser for ANSI C and the interaction with PLY was almost effortless.
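As a hedged illustration (this is not PLY code) of what a grammar rule like expr : expr PLUS term describes, here is a minimal hand-rolled recursive-descent parser for a toy arithmetic CFG. A tool like PLY builds the equivalent machinery for you (an LALR table-driven parser) from declarative rules, which is what makes it worth using over rolling your own for non-trivial grammars:

```python
# Minimal recursive-descent parser/evaluator for the toy CFG
#   expr : term (('+'|'-') term)*
#   term : NUMBER
import re

TOKEN = re.compile(r'\s*(?:(\d+)|(\S))')

def tokenize(text):
    for number, op in TOKEN.findall(text):
        yield ('NUMBER', int(number)) if number else ('OP', op)

def parse_expr(text):
    tokens = list(tokenize(text)) + [('EOF', None)]
    pos = 0

    def term():
        nonlocal pos
        kind, value = tokens[pos]
        if kind != 'NUMBER':
            raise SyntaxError('expected a number at token %d' % pos)
        pos += 1
        return value

    value = term()
    while tokens[pos] in (('OP', '+'), ('OP', '-')):
        op = tokens[pos][1]
        pos += 1
        rhs = term()
        value = value + rhs if op == '+' else value - rhs
    if tokens[pos][0] != 'EOF':
        raise SyntaxError('unexpected trailing input')
    return value

print(parse_expr('1 + 2 + 3'))  # 6
print(parse_expr('10 - 4'))     # 6
```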
| Parsing a context-free grammar in Python | What tools are available in Python to assist in parsing a context-free grammar?
Of course it is possible to roll my own, but I am looking for a generic tool that can generate a parser for a given CFG.
| [
"I warmly recommend PLY - it's a Lex/Yacc clone in Python that uses the language's introspection facilities in a sophisticated manner to allow for a very natural specification of the grammar. Yacc, if you recall, is the very embodiment of CFGs in an understandable DSL that defines how one parses them.\nI used it to implement my parser for ANSI C and the interaction with PLY was almost effortless. \n"
] | [
9
] | [] | [] | [
"context_free_grammar",
"python",
"regex"
] | stackoverflow_0002979703_context_free_grammar_python_regex.txt |
Q:
Coloring close points
I have a dense set of points in the plane. I want them colored so that points that are close to each other have the same color, and a different color if they're far away. For simplicity assume that there are, say, 5 different colors to choose from. Turns out I've not the slightest idea how to do that ..
I'm using Tkinter with Python, by the way
A:
If you can use whatever colors you want, you can use the fact that colors are (almost) continuous. Color the points according to their x,y coordinates; as a side effect, close points will have somewhat similar colors.
You can use something like
point.color(R,G,B) = ( point.normalized_x, 0.5, 1-point.normalized.y )
where normalized_x is (x - min_x) / (max_x - min_x), so it gives 0 for the point with the minimal x value and 1 for the point with the maximal x value.
If you really need to use only a small number of colors and have close points share the exact same color, then you'll have to do some clustering on your data (k-means being a simple and widely used algorithm). After clustering, you just assign each point a color according to its cluster's id. Python has some good implementations, including scipy's clustering.
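If scipy isn't available, the clustering step itself is small enough to sketch in pure Python. This is a minimal, illustrative k-means (with deterministic farthest-point initialization instead of the usual random seeding), not scipy's implementation:

```python
# Minimal pure-Python k-means for assigning one of k color indices to 2D
# points. Illustrative only; scipy's cluster module is the practical choice.
import math

def kmeans(points, k, iterations=20):
    # Deterministic farthest-point initialization.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points,
                           key=lambda p: min(math.dist(p, c) for c in centers)))
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: math.dist(p, centers[i]))].append(p)
        centers = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    # Each point's label is the index of its nearest final center.
    return [min(range(k), key=lambda i: math.dist(p, centers[i])) for p in points]

# Two obvious blobs -> two labels:
points = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
labels = kmeans(points, k=2)
print(labels)  # [0, 0, 0, 1, 1, 1]
```

Each label then indexes into your Tkinter palette, e.g. palette = ['red', 'green', 'blue', 'orange', 'purple'] and fill=palette[label].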
A:
I'd start with identifying the concentrations of the spots in the plane. Find the centers of those agglomerations and assign them each unique color. Then for other spots you could simply calculate the color using the linear principle. For example, if one center is red and the other is yellow, a point somewhere in the middle would become orange.
I'd probably use some exponential function instead of a linear principle. This will keep the point groups more or less of the same color only giving a noticeable color change to far away points, or, to be more exact, to far away and somewhere in between points.
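The linear-interpolation idea above (a point halfway between a red center and a yellow center becomes orange) can be sketched directly, assuming the centers are RGB tuples:

```python
def lerp_color(c1, c2, t):
    """Linearly interpolate between two RGB colors, t in [0, 1]."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

red, yellow = (255, 0, 0), (255, 255, 0)
print(lerp_color(red, yellow, 0.5))  # (255, 128, 0) -- orange
```

For the exponential variant, you would pass a transformed parameter such as t**2 or 1 - math.exp(-d) (d being the distance to the nearer center) instead of the raw linear t.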
A:
One approach is to go through your points and partition them into sets with a "center". Since you have 5 colours, you'll have 5 sets. You compare the distance of the new point from each of the centers and then put it in the same group as the closest one.
Each set corresponds to a different colour so you can just plot it after this partitioning is done.
A:
The problem domain is the well-trodden cluster analysis and the Cluster suite with PyCluster is a good start.
| Coloring close points | I have a dense set of points in the plane. I want them colored so that points that are close to each other have the same color, and a different color if they're far away. For simplicity assume that there are, say, 5 different colors to choose from. Turns out I've not the slightest idea how to do that ..
I'm using Tkinter with Python, by the way
| [
"If you can use whatever color you want, you can use that fact that colors are (almost) continuous. color the points according to their x,y coordinates, so you'll get as a side effect that close points will have a somewhat similar color.\nYou can use something like\npoint.color(R,G,B) = ( point.normalized_x, 0.5, 1-point.normalized.y )\n\nwhere normalized_x is (x-min_x / (max_x-min_x)), so it would give 0 for the point with minimal x value, and 1 for point with maximal x value.\nIf you really need to use only a small number of colors and have close point have the exact same color, then you'll have to do some clustering on your data (K-means being a simple and widely used algorithm). After clustering, you just assign each point a color according to its cluster's id. Python has some good implementations, including scipy's clustering. \n",
"I'd start with identifying the concentrations of the spots in the plane. Find the centers of those agglomerations and assign them each unique color. Then for other spots you could simply calculate the color using the linear principle. For example, if one center is red and the other is yellow, a point somewhere in the middle would become orange.\nI'd probably use some exponential function instead of a linear principle. This will keep the point groups more or less of the same color only giving a noticeable color change to far away points, or, to be more exact, to far away and somewhere in between points.\n",
"One approach is to go through your points and partition them into sets with a \"center\". Since you have 5 colours, you'll have 5 sets. You compare the distance of the new point from each of the centers and then put it in the same group as the closest one.\nEach set corresponds to a different colour so you can just plot it after this partitioning is done.\n",
"The problem domain is the well-trodden cluster analysis and the Cluster suite with PyCluster is a good start.\n"
] | [
2,
0,
0,
0
] | [] | [] | [
"geometry",
"python",
"tkinter"
] | stackoverflow_0002979697_geometry_python_tkinter.txt |