Q:
How to aggregate timeseries in Python?
I have two different timeseries with partially overlapping timestamps:
import scikits.timeseries as ts
from datetime import datetime
a = ts.time_series([1,2,3], dates=[datetime(2010,10,20), datetime(2010,10,21), datetime(2010,10,23)], freq='D')
b = ts.time_series([4,5,6], dates=[datetime(2010,10,20), datetime(2010,10,22), datetime(2010,10,23)], freq='D')
which represent the following data:
Day:  20.  21.  22.  23.
a:     1    2    -    3
b:     4    -    5    6
I would like to calculate a weighted average on every day with coefficients a(0.3) and b(0.7), while ignoring missing values:
Day 20.: (0.3 * 1 + 0.7 * 4) / (0.3 + 0.7) = 3.1 / 1.  = 3.1
Day 21.: (0.3 * 2          ) / (0.3      ) = 0.6 / 0.3 = 2
Day 22.: (          0.7 * 5) / (      0.7) = 3.5 / 0.7 = 5
Day 23.: (0.3 * 3 + 0.7 * 6) / (0.3 + 0.7) = 5.1 / 1.  = 5.1
when I first try to align these timeseries:
a1, b1 = ts.aligned(a, b)
I get correctly masked timeseries:
timeseries([1 2 -- 3],
           dates = [20-Oct-2010 ... 23-Oct-2010],
           freq  = D)

timeseries([4 -- 5 6],
           dates = [20-Oct-2010 ... 23-Oct-2010],
           freq  = D)
but when I do a1 * 0.3 + b1 * 0.7, it ignores values that are present in only one of the timeseries:
timeseries([3.1 -- -- 5.1],
           dates = [20-Oct-2010 ... 23-Oct-2010],
           freq  = D)
What should I do to get the expected result?
timeseries([3.1 2. 5. 5.1],
           dates = [20-Oct-2010 ... 23-Oct-2010],
           freq  = D)
EDIT: The answer should also be applicable to more than two initial timeseries with different weights and different patterns of missing values.
So if we have four timeseries with weights T1(0.1), T2(0.2), T3(0.3) and T4(0.4), their weights at a given timestamp will be:
            |  T1  |  T2  |  T3  |  T4  |
weight      |  0.1 |  0.2 |  0.3 |  0.4 |
-----------------------------------------
all present | 10%  | 20%  | 30%  | 40%  |
T1 missing  |      | 22%  | 33%  | 45%  |
T1,T2 miss. |      |      | 43%  | 57%  |
T4 missing  | 17%  | 33%  | 50%  |      |
etc.
A:
I have tried and found this:
import numpy as np

aWgt = 0.3
bWgt = 0.7

print (np.where(a1.mask, 0., a1.data * aWgt) +
       np.where(b1.mask, 0., b1.data * bWgt)) / (np.where(a1.mask, 0., aWgt) +
                                                 np.where(b1.mask, 0., bWgt))
# array([ 3.1,  2. ,  5. ,  5.1])
This approach is also applicable to the edited question with more than two initial timeseries. But hopefully someone will find something better.
EDIT: And this is my function:
def weightedAvg(weightedTimeseries):
    # Use lists (not generators) so np.sum can apply axis=0, and avoid naming
    # the loop variable "ts", which would shadow the imported module.
    sumA = np.sum([np.where(series.mask, 0., series.data * weight)
                   for series, weight in weightedTimeseries], axis=0)
    sumB = np.sum([np.where(series.mask, 0., weight)
                   for series, weight in weightedTimeseries], axis=0)
    return np.divide(sumA, sumB)

weightedAvg(((a1, 0.3), (b1, 0.7)))
# array([ 3.1,  2. ,  5. ,  5.1])
Works for any number of timeseries ;-)
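With modern NumPy, the same ignore-missing weighted average can be sketched directly on masked arrays, since np.ma.average renormalizes the weights over the unmasked entries (this assumes the series have already been aligned to a common date range, with NaN marking missing days):

```python
import numpy as np

# Two series aligned on the same 4 days; NaN marks a missing value.
data = np.ma.masked_invalid([[1.0, 2.0, np.nan, 3.0],
                             [4.0, np.nan, 5.0, 6.0]])
weights = [0.3, 0.7]

# np.ma.average skips masked entries and renormalizes the weights per day.
result = np.ma.average(data, axis=0, weights=weights)
print(result)  # -> 3.1, 2.0, 5.0, 5.1 (one value per day)
```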
Q:
python location of the running program on windows
I have a Python application that needs to know which directory it was started from while it runs. How can I find the running application's path on Windows? For example, when I change the current directory, the reported path changes to the new directory.
Is there a way to know where the Python application was run from without saving it at the beginning via os.path.abspath(os.path.dirname(__file__))? For example, how can I know where the application lives after os.chdir("c:/")?
import os
print os.path.abspath(os.path.dirname(__file__))
os.chdir("c:/")
print os.path.abspath(os.path.dirname(__file__))
A:
it is contained in the __file__ variable.
But if you want to know the current working directory then you should use os.getcwd().
>>> os.getcwd()
'C:\\Program Files\\Python31'
>>> os.chdir(r'C:\\')
>>> os.getcwd()
'C:\\'
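To see why the __file__-based path should be resolved early, note that os.path.abspath() resolves against the current working directory, so the same relative path means something different after os.chdir() ("data.txt" here is just an illustrative relative path):

```python
import os

p1 = os.path.abspath("data.txt")              # anchored at the current directory
os.chdir("c:/" if os.name == "nt" else "/")   # change the working directory
p2 = os.path.abspath("data.txt")              # now anchored at the filesystem root

print(p1)
print(p2)
```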
A:
import os
print os.path.abspath(os.path.dirname(__file__))
edit : little late !!! :)
edit2 : in C# you can use the AppDomain.CurrentDomain.BaseDirectory property, so using something like Python for .NET may help: http://pythonnet.sourceforge.net/readme.html
Q:
pyinotify asyncnotifier thread question
I'm confused about how AsyncNotifier works. What exactly is threaded in the notifier? Is just the watcher threaded, or does each of the callbacks to the handler functions run on its own thread?
The documentation says essentially nothing about the specifics of the class.
A:
The AsyncNotifier doesn't use threading, it uses the asynchronous socket handler loop.
If you're talking about the ThreadedNotifier, then each callback seems to be called in the same thread per notifier.
This means that even if you have several EventHandlers registered with some WatchManager, they will all issue callbacks from the same thread.
I can't find where this is explicitly documented, but seems implicit from the generated documentation for the ThreadedNotifier.loop() method, where it says:
Events are read only once time every min(read_freq, timeout) seconds at best and only if the size of events to read is >= threshold.
...which I took to mean it operates as a fairly simple loop in a single thread, issuing callbacks from that loop.
I have experimented by simply printing the result of threading.current_thread() in the callbacks, and it verifies this.
(You could always file an issue to request more specific documentation if you think that's warranted.)
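As a toy illustration of the experiment described above (not using pyinotify itself), you can record threading.current_thread() inside each callback and confirm that a single notifier-style loop issues every callback from one thread:

```python
import threading

calls = []

def callback(event):
    # Record which thread invoked this callback.
    calls.append(threading.current_thread().name)

def notifier_loop(events):
    # A ThreadedNotifier-style loop: one worker thread issues all callbacks.
    for event in events:
        callback(event)

worker = threading.Thread(target=notifier_loop, args=(range(3),), name="notifier")
worker.start()
worker.join()

print(set(calls))  # every callback ran on the single "notifier" thread
```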
Q:
MapReduce on more than one datastore kind in Google App Engine
I just watched Batch data processing with App Engine session of Google I/O 2010, read some parts of MapReduce article from Google Research and now I am thinking to use MapReduce on Google App Engine to implement a recommender system in Python.
I prefer using appengine-mapreduce instead of Task Queue API because the former offers easy iteration over all instances of some kind, automatic batching, automatic task chaining, etc. The problem is: my recommender system needs to calculate correlation between instances of two different Models, i.e., instances of two distinct kinds.
Example:
I have these two Models: User and Item. Each one has a list of tags as an attribute. Below are the functions to calculate correlation between users and items. Note that calculateCorrelation should be called for every combination of users and items:
def calculateCorrelation(user, item):
    return calculateCorrelationAverage(user.tags, item.tags)

def calculateCorrelationAverage(tags1, tags2):
    correlationSum = 0.0
    for (tag1, tag2) in allCombinations(tags1, tags2):
        correlationSum += correlation(tag1, tag2)
    return correlationSum / (len(tags1) + len(tags2))

def allCombinations(list1, list2):
    combinations = []
    for x in list1:
        for y in list2:
            combinations.append((x, y))
    return combinations
But calculateCorrelation is not a valid Mapper in appengine-mapreduce, and maybe this function is not even compatible with the MapReduce computation concept. Yet I need to be sure... it would be really great to keep those appengine-mapreduce advantages like automatic batching and task chaining.
Is there any solution for that?
Should I define my own InputReader? Is a new InputReader that reads all instances of two different kinds compatible with the current appengine-mapreduce implementation?
Or should I try the following?
Combine all keys of all entities of these two kinds, two by two, into instances of a new Model (possibly using MapReduce)
Iterate using mappers over instances of this new Model
For each instance, use keys inside it to get the two entities of different kinds and calculate the correlation between them.
A:
Following Nick Johnson's suggestion, I wrote my own InputReader. This reader fetches entities of two different kinds and yields tuples with all combinations of these entities. Here it is:
import math

# InputReader and DatastoreInputReader come from the appengine-mapreduce library.

class TwoKindsInputReader(InputReader):
    _APP_PARAM = "_app"
    _KIND1_PARAM = "kind1"
    _KIND2_PARAM = "kind2"
    MAPPER_PARAMS = "mapper_params"

    def __init__(self, reader1, reader2):
        self._reader1 = reader1
        self._reader2 = reader2

    def __iter__(self):
        for u in self._reader1:
            for e in self._reader2:
                yield (u, e)

    @classmethod
    def from_json(cls, input_shard_state):
        reader1 = DatastoreInputReader.from_json(input_shard_state[cls._KIND1_PARAM])
        reader2 = DatastoreInputReader.from_json(input_shard_state[cls._KIND2_PARAM])
        return cls(reader1, reader2)

    def to_json(self):
        json_dict = {}
        json_dict[self._KIND1_PARAM] = self._reader1.to_json()
        json_dict[self._KIND2_PARAM] = self._reader2.to_json()
        return json_dict

    @classmethod
    def split_input(cls, mapper_spec):
        params = mapper_spec.params
        app = params.get(cls._APP_PARAM)
        kind1 = params.get(cls._KIND1_PARAM)
        kind2 = params.get(cls._KIND2_PARAM)
        shard_count = mapper_spec.shard_count
        shard_count_sqrt = int(math.sqrt(shard_count))
        splitted1 = DatastoreInputReader._split_input_from_params(app, kind1, params, shard_count_sqrt)
        splitted2 = DatastoreInputReader._split_input_from_params(app, kind2, params, shard_count_sqrt)
        inputs = []
        for u in splitted1:
            for e in splitted2:
                inputs.append(TwoKindsInputReader(u, e))
        #mapper_spec.shard_count = len(inputs)  # uncomment in case of "Incorrect number of shard states" (handlers.py, line 408)
        return inputs

    @classmethod
    def validate(cls, mapper_spec):
        return True  # TODO
This code should be used when you need to process all combinations of entities of two kinds. You can also generalize this for more than two kinds.
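For instance, the pairwise __iter__ above generalizes to N readers with itertools.product (a sketch; NKindsInputReader is a hypothetical name, and it assumes each reader can be iterated more than once, e.g. lists or re-runnable queries):

```python
import itertools

class NKindsInputReader(object):
    def __init__(self, readers):
        self._readers = readers

    def __iter__(self):
        # Yield every combination of one entity from each reader.
        for combo in itertools.product(*self._readers):
            yield combo

combos = list(NKindsInputReader([[1, 2], ['a', 'b'], [True]]))
print(combos)
# [(1, 'a', True), (1, 'b', True), (2, 'a', True), (2, 'b', True)]
```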
Here it is a valid the mapreduce.yaml for TwoKindsInputReader:
mapreduce:
- name: recommendationMapReduce
  mapper:
    input_reader: customInputReaders.TwoKindsInputReader
    handler: recommendation.calculateCorrelationHandler
    params:
    - name: kind1
      default: kinds.User
    - name: kind2
      default: kinds.Item
    - name: shard_count
      default: 16
A:
It's difficult to know what to recommend without more details of what you're actually calculating. One simple option is to simply fetch the related entity inside the map call - there's nothing preventing you from doing datastore operations there.
This will result in a lot of small calls, though. Writing a custom InputReader, as you suggest, will allow you to fetch both sets of entities in parallel, which will significantly improve performance.
If you give more details as to how you need to join these entities, we may be able to provide more concrete suggestions.
Q:
How to convert a special float into a fraction object
I have this function inside another function:
def _sum(k):
return sum([(-1) ** v * fractions.Fraction(str(bin_coeff(k, v))) * fractions.Fraction((n + v) ** m, k + 1) for v in xrange(k + 1)])
When i call fractions.Fraction on bin_coeff it reports me this error:
ValueError: Invalid literal for Fraction: '1.05204948186e+12'
How can I convert a float in that form into a Fraction object?
Is there a better solution than:
fractions.Fraction(*bin_coeff(k, v).as_integer_ratio())
Thank you,
rubik
P.S. bin_coeff always returns a float
A:
I cannot reproduce your error in py3k, but you could pass your float straight to from_float class method:
>>> fractions.Fraction.from_float(1.05204948186e+12)
Fraction(1052049481860, 1)
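In older Pythons where the Fraction constructor rejects scientific notation, both of these equivalent spellings sidestep string parsing entirely:

```python
from fractions import Fraction

x = 1.05204948186e+12

f1 = Fraction.from_float(x)           # exact value of the binary float
f2 = Fraction(*x.as_integer_ratio())  # same exact ratio, spelled differently

print(f1)  # 1052049481860
```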
A:
If you're curious, this is due (as you might expect) to the Fraction regex in fractions.py:
_RATIONAL_FORMAT = re.compile(r"""
    \A\s*                      # optional whitespace at the start, then
    (?P<sign>[-+]?)            # an optional sign, then
    (?=\d|\.\d)                # lookahead for digit or .digit
    (?P<num>\d*)               # numerator (possibly empty)
    (?:                        # followed by an optional
       /(?P<denom>\d+)         # / and denominator
    |                          # or
       \.(?P<decimal>\d*)      # decimal point and fractional part
    )?
    \s*\Z                      # and optional whitespace to finish
""", re.VERBOSE)
which doesn't match floats in scientific notation. This was changed in Python 2.7 (the following is from 3.1 because I don't have 2.7 installed):
_RATIONAL_FORMAT = re.compile(r"""
    \A\s*                       # optional whitespace at the start, then
    (?P<sign>[-+]?)             # an optional sign, then
    (?=\d|\.\d)                 # lookahead for digit or .digit
    (?P<num>\d*)                # numerator (possibly empty)
    (?:                         # followed by
       (?:/(?P<denom>\d+))?     # an optional denominator
    |                           # or
       (?:\.(?P<decimal>\d*))?  # an optional fractional part
       (?:E(?P<exp>[-+]?\d+))?  # and optional exponent
    )
    \s*\Z                       # and optional whitespace to finish
""", re.VERBOSE | re.IGNORECASE)
Q:
Since Python doesn't have a switch statement, what should I use?
Possible Duplicate:
Replacements for switch statement in python?
I'm making a little console based application in Python and I wanted to use a Switch statement to handle the users choice of a menu selection.
What do you vets suggest I use. Thanks!
A:
There are two choices: the first is the standard if ... elif ... chain; the other is a dictionary mapping selections to callables (of which functions are a subset). Which one is the better idea depends on exactly what you're doing.
elif chain
selection = get_input()
if selection == 'option1':
    handle_option1()
elif selection == 'option2':
    handle_option2()
elif selection == 'option3':
    some = code + that
    [does(something) for something in range(0, 3)]
else:
    I_dont_understand_you()
dictionary:
# Somewhere in your program setup...
def handle_option3():
    some = code + that
    [does(something) for something in range(0, 3)]

seldict = {
    'option1': handle_option1,
    'option2': handle_option2,
    'option3': handle_option3,
}

# later on
selection = get_input()
callable = seldict.get(selection)
if callable is None:
    I_dont_understand_you()
else:
    callable()
A:
Use a dictionary to map input to functions.
switchdict = { "inputA":AHandler, "inputB":BHandler}
Where the handlers can be any callable. Then you use it like this:
switchdict[input]()
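A minimal self-contained version of this pattern (all names here are illustrative):

```python
def add(a, b):
    return a + b

def sub(a, b):
    return a - b

# Map menu inputs to handler functions.
dispatch = {'+': add, '-': sub}

def calculate(op, a, b):
    handler = dispatch.get(op)   # .get() avoids a KeyError on unknown input
    if handler is None:
        raise ValueError("unknown operation: %r" % (op,))
    return handler(a, b)

print(calculate('+', 2, 3))  # 5
print(calculate('-', 7, 4))  # 3
```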
A:
Dispatch tables, or rather dictionaries.
You map keys aka. values of the menu selection to functions performing said choices:
def AddRecordHandler():
    print("added")

def DeleteRecordHandler():
    print("deleted")

def CreateDatabaseHandler():
    print("done")

def FlushToDiskHandler():
    print("i feel flushed")

def SearchHandler():
    print("not found")

def CleanupAndQuit():
    print("byez")

menuchoices = {'a': AddRecordHandler, 'd': DeleteRecordHandler,
               'c': CreateDatabaseHandler, 'f': FlushToDiskHandler,
               's': SearchHandler, 'q': CleanupAndQuit}

# Look the handler up first: the handlers themselves return None, and an
# unknown key would raise a KeyError before any check could run.
handler = menuchoices.get(raw_input())
if handler is None:
    print("Something went wrong! Call the police!")
    menuchoices['q']()
else:
    handler()
Remember to validate your input! :)
Q:
Finding a strings in a text using regular expressions with Python
I have a text in which only <b> and </b> tags have been used, for example <b>abcd efg-123</b>. How can I extract the string between these tags? I also need to extract the 3 words before and after the <b>abcd efg-123</b> chunk.
How can I do that? What would be a suitable regular expression for this?
A:
this will get what's in between the tags,
>>> s="1 2 3<b>abcd efg-123</b>one two three"
>>> for i in s.split("</b>"):
...     if "<b>" in i:
...         print i.split("<b>")[-1]
...
abcd efg-123
A:
This is actually a very dumb version and doesn't allow nested tags.
re.search(r"(\w+)\s+(\w+)\s+(\w+)\s+<b>([^<]+)</b>\s+(\w+)\s+(\w+)\s+(\w+)", text)
See Python documentation.
A:
This handles tags inside the <b> element, unless they are <b> tags of course.
import re
sometext = 'blah blah 1 2 3<b>abcd efg-123</b>word word2 word3 blah blah'
result = re.findall(
    r'(((?:(?:^|\s)+\w+){3}\s*)'             # Match 3 words before
    r'<b>([^<]*|<[^/]|</[^b]|</b[^>])</b>'   # Match <b>...</b>
    r'(\s*(?:\w+(?:\s+|$)){3}))', sometext)  # Match 3 words after

result == [(' 1 2 3<b>abcd efg-123</b>word word2 word3 ',
            ' 1 2 3',
            'abcd efg-123',
            'word word2 word3 ')]
This should work, and perform well, but if it gets any more advanced then this you should consider using a html parser.
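A simpler sketch of the same idea, assuming words are whitespace-separated and the <b> element directly abuts its neighboring words (the sample string is illustrative):

```python
import re

text = "lorem ipsum one two three<b>abcd efg-123</b>four five six dolor"

pattern = re.compile(
    r'((?:\w+\s+){2}\w+)\s*'    # 3 words before the tag
    r'<b>([^<]+)</b>'           # the text between <b> and </b>
    r'\s*((?:\w+\s+){2}\w+)')   # 3 words after the tag

match = pattern.search(text)
print(match.groups())
# ('one two three', 'abcd efg-123', 'four five six')
```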
A:
You should not use regexes for HTML parsing. That way madness lies.
The above-linked article actually provides a regex for your problem -- but don't use it.
Q:
Name isn't found in my Python application
keepProgramRunning = True

while keepProgramRunning:
    print "Welcome to the Calculator!"
    print "Please choose what you'd like to do:"
    print "0: Addition"
    print "1: Subtraction"
    print "2: Multiplication"
    print "3: Division"

    #Capture the menu choice.
    choice = raw_input()

    #Capture the numbers you want to work with.
    numberA = raw_input("Enter your first number: ")
    numberB = raw_input("Enter your second number: ")

    if choice == "0":
        print "Your result is:"
        print Addition(numberA, numberB)
    elif choice == "1":
        print "Your result is:"
        print Subtraction(numberA, numberB)
    elif choice == "2":
        print "Your result is:"
        print Multiplication(numberA, numberB)
    elif choice == "3":
        print "Your result is:"
        print Division(numberA, numberB)
    else:
        print "Please choose a valid option."

def Addition(a, b):
    return a + b

def Subtraction(a, b):
    return a - b

def Multiplication(a, b):
    return a * b

def Division(a, b):
    return a / b
Here's the error:
Traceback (most recent call last):
  File "C:\Users\Sergio.Tapia\Documents\NetBeansProjects\Tutorials\src\tutorials.py", line 23, in <module>
    print Addition(numberA, numberB)
NameError: name 'Addition' is not defined
Thanks for the help!
Ps. I realize the loop will never end, I haven't added the menu option yet. :P
A:
You need to define your functions before calling them.
When the interpreter reads the line where Addition() is called it hasn't yet reached the line where Addition() will be defined. It therefore throws an Exception.
A:
Reorder your code, so that the functions will be defined before they're used:
def Addition(a, b):
    return a + b

def Subtraction(a, b):
    return a - b

def Multiplication(a, b):
    return a * b

def Division(a, b):
    return a / b

keepProgramRunning = True

while keepProgramRunning:
    print "Welcome to the Calculator!"
    print "Please choose what you'd like to do:"
    print "0: Addition"
    print "1: Subtraction"
    print "2: Multiplication"
    print "3: Division"

    #Capture the menu choice.
    choice = raw_input()

    #Capture the numbers you want to work with.
    numberA = raw_input("Enter your first number: ")
    numberB = raw_input("Enter your second number: ")

    if choice == "0":
        print "Your result is:"
        print Addition(numberA, numberB)
    elif choice == "1":
        print "Your result is:"
        print Subtraction(numberA, numberB)
    elif choice == "2":
        print "Your result is:"
        print Multiplication(numberA, numberB)
    elif choice == "3":
        print "Your result is:"
        print Division(numberA, numberB)
    else:
        print "Please choose a valid option."
alternatively, you can use main() function to keep it above everything:
def main():
    keepProgramRunning = True
    while keepProgramRunning:
        print "Welcome to the Calculator!"
        print "Please choose what you'd like to do:"
        print "0: Addition"
        print "1: Subtraction"
        print "2: Multiplication"
        print "3: Division"

        #Capture the menu choice.
        choice = raw_input()

        #Capture the numbers you want to work with.
        numberA = raw_input("Enter your first number: ")
        numberB = raw_input("Enter your second number: ")

        if choice == "0":
            print "Your result is:"
            print Addition(numberA, numberB)
        elif choice == "1":
            print "Your result is:"
            print Subtraction(numberA, numberB)
        elif choice == "2":
            print "Your result is:"
            print Multiplication(numberA, numberB)
        elif choice == "3":
            print "Your result is:"
            print Division(numberA, numberB)
        else:
            print "Please choose a valid option."

def Addition(a, b):
    return a + b

def Subtraction(a, b):
    return a - b

def Multiplication(a, b):
    return a * b

def Division(a, b):
    return a / b

if __name__ == '__main__':
    main()
A:
For that to work, you need to have some definition of Addition available by the time execution needs it. One way is to put your addition definitions higher in the file.
Another way is just to use the operator directly:
# was: print Addition(numberA, numberB)
print numberA + numberB
A third way is to use the functions in the operator module:
import operator
# ...
print operator.add(numberA, numberB)
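One caveat that applies to all of these: raw_input() returns strings, so + (and operator.add) on the raw inputs concatenates rather than adds. A small sketch of the fix, with hard-coded strings standing in for user input:

```python
def addition(a, b):
    return a + b

# raw_input() (Python 2) / input() (Python 3) return strings,
# so "+" concatenates them instead of adding:
print(addition("2", "3"))   # 23

# Convert the operands to numbers before doing arithmetic:
number_a = float("2")
number_b = float("3")
print(addition(number_a, number_b))  # 5.0
```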
A:
You need to define your functions before calling them. Function definitions are executable statements in Python and because of your infinite loop, they don't get a chance to get defined.
You should move the four definitions to above the loop and this error will disappear.
On a more stylistic note, you should structure your module in a way that's importable rather than just runnable. The __name__ == "__main__" trick which Python programs use is the canonical way and this article by the founder of the language offers some insights into how to construct it properly.
Q:
python: combine several variables into a dict?
All,
A class:
class foo():
    def __init__(self):
        self.avar1 = 0
        self.bvar2 = 1
        self.cvar3 = 3

    def debug_info(self):
        print "avar1:", self.avar1
        print "bvar2:", self.bvar2
        print "cvar3:", self.cvar3
My question: it is too tedious to write debug_info() by hand when the class has many self.* variables. I'd like to collect them into a dict and print them in one go for debugging and monitoring, but I don't want to lose the
foo.avar1 = 0
style of access inside the class, because it is easy to read, use, and modify. Is there a better way to produce the debug_info output than walking the class's __dict__, or does someone know a way to make this class simpler?
Thanks!
A:
def debug_info(self):
    for (key, value) in self.__dict__.items():
        print(key, '=', value)
A:
Every Python object already has such a dictionary: it is self.__dict__. This prints reasonably well like any other Python dict, but you can control the format using the data pretty printer (pprint) in the Python standard library, or by looping through the dictionary and printing the items however you want, as shown below:
class foo():
def __init__(self):
self.avar1 = 0
self.bvar2 = 1
self.cvar3 = 3
def debug_info(self):
for k in self.__dict__:
print k + ':' + str(self.__dict__[k])
>>> foo().debug_info()
bvar2:1
cvar3:3
avar1:0
A:
Modules, Classes and Instances all have __dict__ so you can reference it on the instance you wish to display the debug for:
>>> f = foo()
>>> f.__dict__
{'bvar2': 1, 'cvar3': 3, 'avar1': 0}
A:
While using __dict__ will work in most cases, it won't always work - specifically for objects that have a __slots__ variable.
Instead, try the following:
vars = [var for var in dir(obj) if not callable(getattr(obj, var)) and not var.startswith('__')]
A:
Others have answered your question about printing. If you have a lot of variables to set in your __init__ method, you can also use __dict__ to make that a little shorter. If you have a lot of variables, this will at least keep you from having to type self all the time. I think the standard way is more readable in most cases, but your mileage may vary.
def __init__(self):
vars = dict(avar1=0, bvar2=1, cvar3=3)
self.__dict__.update(vars)
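Putting the two ideas together, here is a Python 3 sketch (class and attribute names are illustrative) that seeds the attributes from a dict and reads them back out of __dict__:

```python
class Foo:
    def __init__(self):
        # Seed several attributes at once instead of one self.x = ... per line.
        self.__dict__.update(dict(avar1=0, bvar2=1, cvar3=3))

    def debug_info(self):
        # Every instance already carries its attributes in __dict__.
        return sorted("%s:%s" % (k, v) for k, v in self.__dict__.items())

f = Foo()
info = f.debug_info()
```

The attributes remain plain attributes, so the `f.avar1` access style is unchanged.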
| python several variables combine into a dict? | All,
A class:
class foo():
def __init__(self):
self.avar1 = 0
self.bvar2 = 1
self.cvar3 = 3
def debug_info(self):
        print "avar1:", self.avar1
        print "bvar2:", self.bvar2
        print "cvar3:", self.cvar3
My question: it is too complex to write debug_info() if I have a lot of self vars, so I want to migrate them into a dict and print them all in one go for debugging and monitoring purposes. However, I do not want to lose the format
foo.avar1 = 0
for accessing the vars inside the class, because it is easy to read, use, and modify. Do you have a better idea than visiting the class __dict__ for the debug_info output? Or does someone know a better way to make this class simpler?
Thanks!
| [
"def debug_info ( self ):\n for ( key, value ) in self.__dict__.items():\n print( key, '=', value )\n\n",
"Every Python object already has such a dictionary it is the self.__dict__ This prints reasonably well like and other Python dict, but you could control the format using the Data pretty printer in t... | [
3,
2,
1,
1,
1
] | [] | [] | [
"dictionary",
"python",
"variables"
] | stackoverflow_0003977408_dictionary_python_variables.txt |
Q:
SQLite equivalent of Python's "'%s %s' % (first_string, second_string)"
As the title says, what is the equivalent of Python's '%s %s' % (first_string, second_string) in SQLite? I know I can do concatenation like first_string || " " || second_string, but it looks very ugly.
A:
I can understand not liking first_string || ' ' || second_string, but that's the equivalent. Standard SQL (which SQLite speaks in this area) just isn't the world's prettiest string manipulation language. You could try getting the results of the query back into some other language (e.g., Python which you appear to like) and doing the concatenation there; it's usually best to not do "presentation" in the database layer (and definitely not a good idea to use the result of concatenation as something to search against; that makes it impossible to optimize with indices!)
A:
Note you can create your own SQL level functions. For example you could have this Python function:
def format_data(one, two):
return "%s %s" % (one, two)
Use pysqlite's create_function or APSW's createscalarfunction to tell SQLite about it. Then you can do queries like these:
SELECT format_data(col1, col2) FROM table WHERE condition;
SELECT * from TABLE where col1 = format_data('prefix', col2);
Consequently you can put the formatting logic in your nice readable Python code while keeping the SQL simple but clearly showing the intent.
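The sqlite3 module in the standard library exposes the same hook as pysqlite's create_function; a runnable sketch against an in-memory database (the table and column names are made up for the demo):

```python
import sqlite3

def format_data(one, two):
    return "%s %s" % (one, two)

con = sqlite3.connect(":memory:")
# Register the Python function under a SQL name, taking two arguments.
con.create_function("format_data", 2, format_data)
con.execute("CREATE TABLE stocks (symbol TEXT, market TEXT)")
con.execute("INSERT INTO stocks VALUES ('IBM', 'NYSE')")
row = con.execute("SELECT format_data(symbol, market) FROM stocks").fetchone()
combined = row[0]
```

The formatting logic lives in Python, and the SQL stays readable.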
A:
Are you sure you're not looking for parameter substitution?
Directly from the sqlite3 module documentation:
Instead, use the DB-API’s parameter substitution. Put ? as a placeholder wherever you want to use a value, and then provide a tuple of values as the second argument to the cursor’s execute() method. (Other database modules may use a different placeholder, such as %s or :1.)
For example:
# Never do this -- insecure!
symbol = 'IBM'
c.execute("... where symbol = '%s'" % symbol)
# Do this instead
t = (symbol,)
c.execute('select * from stocks where symbol=?', t)
# Larger example
for t in [('2006-03-28', 'BUY', 'IBM', 1000, 45.00),
('2006-04-05', 'BUY', 'MSOFT', 1000, 72.00),
('2006-04-06', 'SELL', 'IBM', 500, 53.00),
]:
c.execute('insert into stocks values (?,?,?,?,?)', t)
A:
There isn't one.
A:
I am not entirely sure what you are looking for, but it might be the group_concat aggregate function.
| SQLite equivalent of Python's "'%s %s' % (first_string, second_string)" | As the title says, what is the equivalent of Python's '%s %s' % (first_string, second_string) in SQLite? I know I can do concatenation like first_string || " " || second_string, but it looks very ugly.
| [
"I can understand not liking first_string || ' ' || second_string, but that's the equivalent. Standard SQL (which SQLite speaks in this area) just isn't the world's prettiest string manipulation language. You could try getting the results of the query back into some other language (e.g., Python which you appear to ... | [
2,
2,
1,
0,
0
] | [] | [] | [
"python",
"sqlite",
"string"
] | stackoverflow_0003976313_python_sqlite_string.txt |
Q:
Regular Expressions - testing if a String contains another String
Suppose you have some this String (one line)
10.254.254.28 - - [06/Aug/2007:00:12:20 -0700] "GET
/keyser/22300/ HTTP/1.0" 302 528 "-"
"Mozilla/5.0 (X11; U; Linux i686
(x86_64); en-US; rv:1.8.1.4)
Gecko/20070515 Firefox/2.0.0.4"
and you want to extract the part between the GET and HTTP (i.e., some url) but only if it contains the word 'puzzle'. How would you do that using regular expressions in Python?
Here's my solution so far.
match = re.search(r'GET (.*puzzle.*) HTTP', my_string)
It works but I have something in mind that I have to change the first/second/both .* to .*? in order for them to be non-greedy. Does it actually matter in this case?
A:
No need for regex
>>> s
'10.254.254.28 - - [06/Aug/2007:00:12:20 -0700] "GET /keyser/22300/ HTTP/1.0" 302 528 "-" "Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.8.1.4) Gecko/20070515 Firefox/2.0.0.4"'
>>> s.split("HTTP")[0]
'10.254.254.28 - - [06/Aug/2007:00:12:20 -0700] "GET /keyser/22300/ '
>>> if "puzzle" in s.split("HTTP")[0].split("GET")[-1]:
... print "found puzzle"
...
A:
It does matter. The User-Agent can contain anything. Use non-greedy for both of them.
A:
>>> s = '10.254.254.28 - - [06/Aug/2007:00:12:20 -0700] "GET /keyser/22300/ HTTP/1.0" 302 528 "-" "Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.8.1.4) Gecko/20070515 Firefox/2.0.0.4"'
>>> s.split()[6]
'/keyser/22300/'
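Greediness only matters if " HTTP" can occur more than once after the URL; real User-Agent strings can contain it, as this illustrative log line (not from the question) shows:

```python
import re

# Hypothetical log line whose User-Agent also contains " HTTP".
line = ('"GET /keyser/22300/puzzle/ HTTP/1.0" 302 528 "-" '
        '"SomeAgent (compatible; HTTP fetcher)"')

# The greedy version runs past the request to the *last* " HTTP".
greedy = re.search(r'GET (.*puzzle.*) HTTP', line).group(1)
# The non-greedy version stops at the first " HTTP" after "puzzle".
lazy = re.search(r'GET (.*?puzzle.*?) HTTP', line).group(1)
```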
| Regular Expressions - testing if a String contains another String | Suppose you have some this String (one line)
10.254.254.28 - - [06/Aug/2007:00:12:20 -0700] "GET
/keyser/22300/ HTTP/1.0" 302 528 "-"
"Mozilla/5.0 (X11; U; Linux i686
(x86_64); en-US; rv:1.8.1.4)
Gecko/20070515 Firefox/2.0.0.4"
and you want to extract the part between the GET and HTTP (i.e., some url) but only if it contains the word 'puzzle'. How would you do that using regular expressions in Python?
Here's my solution so far.
match = re.search(r'GET (.*puzzle.*) HTTP', my_string)
It works but I have something in mind that I have to change the first/second/both .* to .*? in order for them to be non-greedy. Does it actually matter in this case?
| [
"No need regex\n>>> s\n'10.254.254.28 - - [06/Aug/2007:00:12:20 -0700] \"GET /keyser/22300/ HTTP/1.0\" 302 528 \"-\" \"Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.8.1.4) Gecko/20070515 Firefox/2.0.0.4\"'\n\n>>> s.split(\"HTTP\")[0]\n'10.254.254.28 - - [06/Aug/2007:00:12:20 -0700] \"GET /keyser/22300/ '\n\... | [
5,
2,
1
] | [] | [] | [
"python",
"regex",
"string"
] | stackoverflow_0003978549_python_regex_string.txt |
Q:
How to schedule an event in python without multithreading?
Is it possible to schedule an event in python without multithreading?
I am trying to obtain something like scheduling a function to execute every x seconds.
A:
Maybe sched?
A:
You could use a combination of signal.alarm and a signal handler for SIGALRM like so to repeat the function every 5 seconds.
import signal
def handler(sig, frame):
print ("I am done this time")
signal.alarm(5) #Schedule this to happen again.
signal.signal(signal.SIGALRM, handler)
signal.alarm(5)
The other option is to use the sched module that comes along with Python but I don't know whether it uses threads or not.
A:
Sched is probably the way to go for this, as @eumiro points out. However, if you don't want to do that, then you could do this:
import time
while 1:
#call your event
time.sleep(x) #wait for x many seconds before calling the script again
A:
You could use celery:
Celery is an open source asynchronous
task queue/job queue based on
distributed message passing. It is
focused on real-time operation, but
supports scheduling as well.
The execution units, called tasks, are
executed concurrently on one or more
worker nodes. Tasks can execute
asynchronously (in the background) or
synchronously (wait until ready).
and a code example:
You probably want to see some code by
now, so here’s an example task adding
two numbers:
from celery.decorators import task
@task
def add(x, y):
return x + y
You can execute the task in the
background, or wait for it to finish:
>>> result = add.delay(4, 4)
>>> result.wait() # wait for and return the result
8
This is of more general use than the problem you describe requires, though.
A:
Without threading it seldom makes sense to call a function periodically, because your main thread is blocked while it waits: it simply does nothing.
However if you really want to do so:
import time
for x in range(3):
print('Loop start')
time.sleep(2)
print('Calling some function...')
Is this what you really want?
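For what it's worth, the sched module mentioned in the earlier answers is single-threaded: run() blocks and executes each event in the calling thread, and a repeating event simply re-registers itself. A minimal runnable sketch (the 0.01-second delay is only to keep the demo fast):

```python
import sched
import time

ticks = []
scheduler = sched.scheduler(time.time, time.sleep)

def tick(count):
    ticks.append(count)
    if count < 3:
        # Re-schedule ourselves; this is the "every x seconds" loop.
        scheduler.enter(0.01, 1, tick, (count + 1,))

scheduler.enter(0.01, 1, tick, (1,))
scheduler.run()  # blocks until the event queue is empty
```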
| How to schedule an event in python without multithreading? | Is it possible to schedule an event in python without multithreading?
I am trying to obtain something like scheduling a function to execute every x seconds.
| [
"Maybe sched?\n",
"You could use a combination of signal.alarm and a signal handler for SIGALRM like so to repeat the function every 5 seconds.\nimport signal\n\ndef handler(sig, frame):\n print (\"I am done this time\")\n signal.alarm(5) #Schedule this to happen again.\n\nsignal.signal(signal.SIGALRM, handle... | [
4,
3,
2,
2,
1
] | [] | [] | [
"python"
] | stackoverflow_0003978974_python.txt |
Q:
How to break out of double while loop in python?
Newbie python here. How can I break out of the second while loop if a user selects "Q" for "Quit?"
If I hit "m," it goes to the main menu, and there I can quit by hitting the "Q" key.
while loop == 1:
choice = main_menu()
if choice == "1":
os.system("clear")
while loop == 1:
choice = app_menu()
if choice == "1":
source = '%s/%s/external' % (app_help_path,app_version_10)
target = '%s/%s' % (target_app_help_path,app_version_10)
elif choice == "2":
source = '%s/%s/external' % (app_help_path,app_version_8)
target = '%s/%s' % (target_app_help_path,app_version_8)
elif choice.lower() == "m":
break
loop = 0
elif choice.lower() == "q":
break
loop = 0
sendfiles(source, target)
# Internal files
elif choice == "q":
loop = 0
App menu method:
def app_menu():
print "Select APP version"
print "-------------------"
print "1) 11"
print "2) 10"
print "3) 8"
print "m) Main Menu"
print "q) Quit"
print
return raw_input("Select an option: ")
A:
You nearly have it; you just need to swap these two lines.
elif choice.lower() == "m":
break
loop = 0
elif choice.lower() == "m":
loop = 0
break
You break out of the nested loop before setting loop. :)
A:
Change
break
loop = 0
to
loop = 0
break
in your elif blocks.
A:
Use an exception.
class Quit( Exception ): pass

running = True
while running:
    choice = main_menu()
    if choice == "1":
        os.system("clear")
        try:
            while True:
                choice = app_menu()
                if choice == "1":
                    pass  # set source and target for this version
                elif choice == "2":
                    pass  # set source and target for this version
                elif choice.lower() == "m":
                    break
                    # No statement after break is ever executed.
                elif choice.lower() == "q":
                    raise Quit
                sendfiles(source, target)
        except Quit:
            running = False
    elif choice == "q":
        running = False
A:
Use two distinct variables for both loops, eg loop1 and loop2.
When you first press m in the inner loop you just break outside, and then you can handle q separately.
By the way you shouldn't need the inner variable to keep looping, just go with an infinite loop until key 'm' is pressed. Then you break out from inner loop while keeping first one.
A:
Rename your top loop to something like mainloop, and set mainloop = 0 when q is received.
while mainloop == 1:
choice = main_menu()
if choice == "1":
os.system("clear")
while loop == 1:
choice = app_menu()
if choice == "1":
source = '%s/%s/external' % (app_help_path,app_version_10)
target = '%s/%s' % (target_app_help_path,app_version_10)
elif choice == "2":
source = '%s/%s/external' % (app_help_path,app_version_8)
target = '%s/%s' % (target_app_help_path,app_version_8)
elif choice.lower() == "m":
loop = 0
break
elif choice.lower() == "q":
                mainloop = 0
break
sendfiles(source, target)
# Internal files
elif choice == "q":
mainloop = 0
A:
You could put this into a function and return:
import os.path
def do_whatever():
while True:
choice = main_menu()
if choice == "1":
os.system("clear")
while True:
choice = app_menu()
if choice in ("1", "2"):
app_version = app_version_10 if choice == "1" else app_version_8
source = os.path.join(app_help_path, app_version, "external")
target = os.path.join(target_app_help_path, app_version)
sendfiles(source, target)
elif choice.lower() == "m":
break
elif choice.lower() == "q":
return
Admittedly, I don't quite get when you want to break the inner loop and when you want to quit both loops, but this will give you the idea.
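One more option worth knowing: when the nested loops live in their own function, return exits both levels at once, with no flags and no exceptions. A generic sketch (the grid search is just an illustration, not the menu code above):

```python
def find_value(grid, target):
    for row_index, row in enumerate(grid):
        for col_index, value in enumerate(row):
            if value == target:
                # return unwinds both loops in a single step
                return row_index, col_index
    return None

position = find_value([[1, 2], [3, 4]], 3)
```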
| How to break out of double while loop in python? | Newbie python here. How can I break out of the second while loop if a user selects "Q" for "Quit?"
If I hit "m," it goes to the main menu, and there I can quit by hitting the "Q" key.
while loop == 1:
choice = main_menu()
if choice == "1":
os.system("clear")
while loop == 1:
choice = app_menu()
if choice == "1":
source = '%s/%s/external' % (app_help_path,app_version_10)
target = '%s/%s' % (target_app_help_path,app_version_10)
elif choice == "2":
source = '%s/%s/external' % (app_help_path,app_version_8)
target = '%s/%s' % (target_app_help_path,app_version_8)
elif choice.lower() == "m":
break
loop = 0
elif choice.lower() == "q":
break
loop = 0
sendfiles(source, target)
# Internal files
elif choice == "q":
loop = 0
App menu method:
def app_menu():
print "Select APP version"
print "-------------------"
print "1) 11"
print "2) 10"
print "3) 8"
print "m) Main Menu"
print "q) Quit"
print
return raw_input("Select an option: ")
| [
"You nearly have it; you just need to swap these two lines. \nelif choice.lower() == \"m\":\n break\n loop = 0\n\nelif choice.lower() == \"m\":\n loop = 0\n break\n\nYou break out of the nested loop before setting loop. :)\n",
"Change\nbreak\nloop = 0\n\nto\nloop = 0\nbreak\n\nin your elif blocks.\n... | [
5,
2,
2,
1,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0003978890_python.txt |
Q:
How to use python in windows to open javascript, have it interpreted by WScript, and pass it the command line arguments
I have a format holding paths to files and command line arguments to pass to those files when they are opened in Windows.
For example I might have a path to a javascript file and a list of command line arguments to pass it, in such a case I want to open the javascript file in the same way you might with os.startfile and pass it the command line arguments - since the arguments are saved as a string I would like to pass it as a string but I can also pass it as a list if need be.
I am not quite sure what I should be using for this since a .js is not an executable, and thus will raise errors in Popen while startfile only takes verbs as its second command.
This problem can be extended to an arbitrary number of file extensions that need to be opened, and passed command line arguments, but will be interpreted by a true executable when opening.
A:
If Windows has registered the .js extension to open with wscript, you can do this, by leaving that decision up to the Windows shell.
You can just use os.system() to do the same thing as you would do when you type it at the command prompt, for example:
import os
os.system('example.js arg1 arg2')
You can also use the start command:
os.system('start example.js arg1 arg2')
If you need more power, for example to get results, you can use subprocess.Popen(), but make sure to use shell=True (so that the shell can call the right application):
from subprocess import Popen
p = Popen('example.js arg1 arg2', shell=True)
# you can also do pass the filename and arguments separately:
# p = Popen(['example.js', 'arg1', 'arg2'], shell=True)
stdoutdata, stderrdata = p.communicate()
(Although this would probably require cscript instead of wscript)
If Windows doesn't have any default application to open the file with (or if it's not the one you want), well, you're on your own of course...
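The subprocess approach can be sanity-checked without Windows by substituting any interpreter for wscript; here sys.executable stands in for it (on Windows you would run something like cscript //nologo example.js arg1 arg2 instead, so treat the exact command line as an assumption):

```python
import subprocess
import sys

# A stand-in "script": echo back the command-line arguments it received.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(' '.join(sys.argv[1:]))",
     "arg1", "arg2"],
    capture_output=True,
    text=True,
)
echoed = result.stdout.strip()
```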
| How to use python in windows to open javascript, have it interpreted by WScript, and pass it the command line arguments | I have a format holding paths to files and command line arguments to pass to those files when they are opened in Windows.
For example I might have a path to a javascript file and a list of command line arguments to pass it, in such a case I want to open the javascript file in the same way you might with os.startfile and pass it the command line arguments - since the arguments are saved as a string I would like to pass it as a string but I can also pass it as a list if need be.
I am not quite sure what I should be using for this since a .js is not an executable, and thus will raise errors in Popen while startfile only takes verbs as its second command.
This problem can be extended to an arbitrary number of file extensions that need to be opened, and passed command line arguments, but will be interpreted by a true executable when opening.
| [
"If windows has registered the .js extension to open with wscript, you can do this, by leaving that decision up to the windows shell.\nYou can just use os.system() to do the same thing as you would do when you type it at the command prompt, for example:\nimport os\nos.system('example.js arg1 arg2')\n\nYou can also ... | [
2
] | [] | [] | [
"popen",
"process",
"python",
"windows"
] | stackoverflow_0003978542_popen_process_python_windows.txt |
Q:
How should I comment partial Python functions?
say I have the following code:
def func(x, y = 1, z = 2):
""" A comment about this function """
return x + y + z
another_func = partial(func, z = 4)
What would be the correct or Pythonic way of documenting the another_func function?
A:
See partial() description on http://docs.python.org/library/functools.html#functools.partial
Like this:
another_func.__doc__ = "My documentation"
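A short check of the whole round trip: the partial keeps func's behavior with z pinned, and carries its own docstring (the wording of the docstring here is just an example):

```python
from functools import partial

def func(x, y=1, z=2):
    """A comment about this function"""
    return x + y + z

another_func = partial(func, z=4)
# partial objects accept attribute assignment, so the doc can be set directly.
another_func.__doc__ = "func with z fixed to 4; see func for details."

value = another_func(1)  # x=1, y defaults to 1, z is pinned to 4
```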
| How should I comment partial Python functions? | say I have the following code:
def func(x, y = 1, z = 2):
""" A comment about this function """
return x + y + z
another_func = partial(func, z = 4)
What would be the correct or Pythonic way of documenting the another_func function?
| [
"See partial() description on http://docs.python.org/library/functools.html#functools.partial\nLike this:\nanother_func.__doc__ = \"My documentation\"\n\n"
] | [
6
] | [] | [] | [
"comments",
"function",
"partial",
"python"
] | stackoverflow_0003979417_comments_function_partial_python.txt |
Q:
What's wrong with my Python SOAPpy webservice call?
I am playing around trying to call a simple SOAP webservice using the following code in the Python interpreter:
from SOAPpy import WSDL
wsdl = "http://www.webservicex.net/whois.asmx?wsdl"
proxy = WSDL.Proxy(wsdl)
proxy.soapproxy.config.dumpSOAPOut=1
proxy.soapproxy.config.dumpSOAPIn=1
proxy.GetWhoIS(HostName="google.com")
(Yep, I'm new to Python, doing the diveintopython thing...)
The call to the GetWhoIS method fails - otherwise I wouldn't be asking here, I guess.
Here's my outgoing SOAP:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:xsi="http://www.w3.org/1999/XMLSchema-instance"
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsd="http://www.w3.org/1999/XMLSchema">
<SOAP-ENV:Body>
<GetWhoIS SOAP-ENC:root="1">
<HostName xsi:type="xsd:string">google.com</HostName>
</GetWhoIS>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
And here's the incoming response.
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope
xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<soap:Body>
<soap:Fault>
<faultcode>soap:Server</faultcode>
<faultstring>
System.Web.Services.Protocols.SoapException:
Server was unable to process request. --->
System.ArgumentNullException: Value cannot be null.
at whois.whois.GetWhoIS(String HostName)
--- End of inner exception stack trace ---
</faultstring>
<detail />
</soap:Fault>
</soap:Body>
</soap:Envelope>
(manually formatted for easier reading)
Can anyone tell me what am I doing wrong?
Ideally both in terms of use of SOAPpy, and why the SOAP message is incorrect.
Thanks!
A:
Your call seems all right to me; I think this could be a SOAPpy problem or a misconfigured server (although I have not checked this thoroughly).
This document also suggests incompatibilities between soappy and webservicex.net:
http://users.jyu.fi/~mweber/teaching/ITKS545/exercises/ex5.pdf
How would I work around this in this specific case?
import urllib
url_handle = urllib.urlopen( "http://www.webservicex.net/whois.asmx/GetWhoIS?HostName=%s" \
% ("www.google.com") )
print url_handle.read()
A:
As mentioned by @ChristopheD, SOAPpy seems to be buggy for certain configurations of WDSL.
I tried using suds (sudo easy_install suds on Ubuntu) instead, worked first time.
from suds.client import Client
client = Client('http://www.webservicex.net/whois.asmx?wsdl')
client.service.run_GetWhoIS(HostName="google.com")
Job's a good 'un.
A:
For some reason the client is sending the request using an outdated form that is almost never used anymore ("SOAP Section 5 encoding"). You can tell based on this:
SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
But based on the WSDL, the service only accepts regular SOAP messages. So, most likely something is wrong in the WSDL parsing part of the SOAP library you are using.
A:
Please check my answer to another question here. .net requires the soap action have the name space prepended.
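Whichever client you use, the fault that comes back is plain namespaced XML, so the standard library's ElementTree can pull out the faultstring for debugging. A sketch against a trimmed copy of the response above:

```python
import xml.etree.ElementTree as ET

fault_xml = """<soap:Envelope
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:Server</faultcode>
      <faultstring>Value cannot be null.</faultstring>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>"""

# Map the soap prefix so find() can resolve the namespaced elements.
ns = {"soap": "http://schemas.xmlsoap.org/soap/envelope/"}
root = ET.fromstring(fault_xml)
message = root.find(".//soap:Fault/faultstring", ns).text
```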
| What's wrong with my Python SOAPpy webservice call? | I am playing around trying to call a simple SOAP webservice using the following code in the Python interpreter:
from SOAPpy import WSDL
wsdl = "http://www.webservicex.net/whois.asmx?wsdl"
proxy = WSDL.Proxy(wsdl)
proxy.soapproxy.config.dumpSOAPOut=1
proxy.soapproxy.config.dumpSOAPIn=1
proxy.GetWhoIS(HostName="google.com")
(Yep, I'm new to Python, doing the diveintopython thing...)
The call to the GetWhoIS method fails - otherwise I wouldn't be asking here, I guess.
Here's my outgoing SOAP:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:xsi="http://www.w3.org/1999/XMLSchema-instance"
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsd="http://www.w3.org/1999/XMLSchema">
<SOAP-ENV:Body>
<GetWhoIS SOAP-ENC:root="1">
<HostName xsi:type="xsd:string">google.com</HostName>
</GetWhoIS>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
And here's the incoming response.
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope
xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<soap:Body>
<soap:Fault>
<faultcode>soap:Server</faultcode>
<faultstring>
System.Web.Services.Protocols.SoapException:
Server was unable to process request. --->
System.ArgumentNullException: Value cannot be null.
at whois.whois.GetWhoIS(String HostName)
--- End of inner exception stack trace ---
</faultstring>
<detail />
</soap:Fault>
</soap:Body>
</soap:Envelope>
(manually formatted for easier reading)
Can anyone tell me what am I doing wrong?
Ideally both in terms of use of SOAPpy, and why the SOAP message is incorrect.
Thanks!
| [
"Your call seems all right to me, i think this could be a soappy problem or misconfigured server (although i have not checked this thoroughly).\nThis document also suggests incompatibilities between soappy and webservicex.net:\nhttp://users.jyu.fi/~mweber/teaching/ITKS545/exercises/ex5.pdf\nHow i would work around ... | [
3,
2,
1,
0
] | [] | [] | [
"python",
"soappy",
"wsdl"
] | stackoverflow_0000679302_python_soappy_wsdl.txt |
Q:
How can I use Python to get the contents inside of this span tag?
I'm trying to scrape the information from Google Translate as a learning exercise and I can't figure out how to reach the content of this span tag.
<span title="Hello" onmouseover="this.style.backgroundColor='#ebeff9'"
onmouseout="this.style.backgroundColor='#fff'">
Hallo
</span>
How would I use Python to reach into the contents. Since the 'title' parameter of this span is dynamic, I guess I can target that as a point of entry?
For example trying to translate:
Hi, welcome to my house. Would you like a glass of tea or maybe some biscuits?
results in the following html output:
<span title="Hi, welcome to my house."
onmouseover="this.style.backgroundColor='#ebeff9'"
onmouseout="this.style.backgroundColor='#fff'">
Hallo, mein Haus begrüßen zu dürfen.
</span>
A:
Checkout BeautifulSoup
A:
# -*- coding: utf-8 -*-
def gettext(html):
    for sp in html.split("</span>"):
if "<span" in sp:
return sp.rsplit(">")[-1].strip()
myhtml="""
<span title="Hello" onmouseover="this.style.backgroundColor='#ebeff9'"
onmouseout="this.style.backgroundColor='#fff'">
Hallo
</span>
"""
print gettext(myhtml)
myhtml="""
<span title="Hi, welcome to my house."
onmouseover="this.style.backgroundColor='#ebeff9'"
onmouseout="this.style.backgroundColor='#fff'">
Hallo, mein Haus begrüßen zu dürfen.
</span>
"""
print gettext(myhtml)
output
$ python mytranslate.py
Hallo
Hallo, mein Haus begrüßen zu dürfen.
A:
Python ships with a few XML and HTML parsers.
Element Tree Parser
This is supposed to be the most pythonic method of parsing XML files.
xml.etree.ElementTree
DOM XML Parsers
xml.dom
xml.dom.minidom
SAX XML Parser
xml.sax
Expat XML Parser
xml.parsers.expat
Simple HTML and XHTML Parser
HTMLParser
Third Party Parsers
If you don't like any of the parsers included with python.
lxml
BeautifulSoup
I would suggest that you look at the parsers that come with Python first, then look at third party parsers if you don't find any of the included modules acceptable.
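Among the included modules, HTMLParser (html.parser in Python 3) is already enough for this particular job; a minimal sketch that collects the text inside every span:

```python
from html.parser import HTMLParser

class SpanText(HTMLParser):
    """Collect the character data that appears inside <span> tags."""

    def __init__(self):
        super().__init__()
        self.in_span = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "span":
            self.in_span = True

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_span = False

    def handle_data(self, data):
        if self.in_span:
            self.chunks.append(data)

parser = SpanText()
parser.feed('<span title="Hello" onmouseover="...">\n Hallo \n</span>')
text = "".join(parser.chunks).strip()
```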
| How can I use Python to get the contents inside of this span tag? | I'm trying to scrape the information from Google Translate as a learning exercise and I can't figure out how to reach the content of this span tag.
<span title="Hello" onmouseover="this.style.backgroundColor='#ebeff9'"
onmouseout="this.style.backgroundColor='#fff'">
Hallo
</span>
How would I use Python to reach into the contents. Since the 'title' parameter of this span is dynamic, I guess I can target that as a point of entry?
For example trying to translate:
Hi, welcome to my house. Would you like a glass of tea or maybe some biscuits?
results in the following html output:
<span title="Hi, welcome to my house."
onmouseover="this.style.backgroundColor='#ebeff9'"
onmouseout="this.style.backgroundColor='#fff'">
Hallo, mein Haus begrüßen zu dürfen.
</span>
| [
"Checkout BeautifulSoup\n",
"# -*- coding: utf-8 -*-\ndef gettext(html):\n for sp in myhtml.split(\"</span>\"):\n if \"<span\" in sp:\n return sp.rsplit(\">\")[-1].strip()\n\nmyhtml=\"\"\"\n<span title=\"Hello\" onmouseover=\"this.style.backgroundColor='#ebeff9'\"\n onmouseout=\"this.style.... | [
3,
0,
0
] | [] | [] | [
"html_parsing",
"python"
] | stackoverflow_0003979962_html_parsing_python.txt |
Q:
Widgets disappear after tkMessageBox in Tkinter
Every time I use this code in my applications:
tkMessageBox.showinfo("Test", "Info goes here!")
a message box pops up (like it is supposed to), but after I click OK, the box disappears along with most of the other widgets on the window. How do I prevent the other widgets from disappearing?
Here Is My Code:
from Tkinter import *
import tkMessageBox
root = Tk()
root.minsize(600,600)
root.maxsize(600,600)
p1 = Label(root, bg='blue')
p1.place(width=600, height=600)
b1 = Button(p1, text="Test Button")
b1.place(x="30", y="50")
tkMessageBox.showinfo("Test", "Info")
root.mainloop()
A:
Ok, there are a few things going wrong here. First, your label has no string or image associated with it. Therefore, its width and height will be very small. Because you use pack, the containing widget (the root window) will "shrink to fit" around this widget and any other widgets you pack in the root window.
Second, you use place for the button which means its size will not affect the size of the parent. Not only that, but you place the button inside the very tiny label. Thus, the only thing controlling the size of the parent is the label so the main window ends up being very small.
You have another problem is that you're showing the dialog before entering the event loop. I'm a bit surprised that it even works, but Tkinter sometimes does unusual things under the covers. You should enter the event loop before calling the dialog.
Try this variation of your code as a starting point:
from Tkinter import *
import tkMessageBox
def showInfo():
tkMessageBox.showinfo("Test","Info")
root = Tk()
p1 = Label(root, bg='blue', text="hello")
p1.pack()
b1 = Button(root, text="Test Button", command=showInfo)
b1.pack()
root.mainloop()
| Widgets disappear after tkMessageBox in Tkinter | Every time I use this code in my applications:
tkMessageBox.showinfo("Test", "Info goes here!")
a message box pops up (like it is supposed to), but after I click OK, the box disappears along with most of the other widgets on the window. How do I prevent the other widgets from disappearing?
Here Is My Code:
from Tkinter import *
import tkMessageBox
root = Tk()
root.minsize(600,600)
root.maxsize(600,600)
p1 = Label(root, bg='blue')
p1.place(width=600, height=600)
b1 = Button(p1, text="Test Button")
b1.place(x="30", y="50")
tkMessageBox.showinfo("Test", "Info")
root.mainloop()
| [
"Ok, there are a few things going wrong here. First, your label has no string or image associated with it. Therefore, it's width and height will be very small. Because you use pack, the containing widget (the root window) will \"shrink to fit\" around this widget and any other widgets you pack in the root window.\n... | [
1
] | [] | [] | [
"python",
"tkinter",
"tkmessagebox",
"widget",
"windows"
] | stackoverflow_0003974512_python_tkinter_tkmessagebox_widget_windows.txt |
Q:
Negative lookbehind in Python regular expressions
I am trying to parse a list of data out of a file using python - however I don't want to extract any data that is commented out. An example of the way the data is structured is:
#commented out block
uncommented block
# commented block
I am trying to only retrieve the middle item, so am trying to exclude the items with hashes at the start. The issue is that some hashes are directly next to the commented items, and some aren't, and the expression I currently have only works if items have been commented in the first example above -
(?<!#)(commented)
I tried adding \s+ to the negative lookbehind but then I get a complaint that the expression does not have an obvious maximum length. Is there any way to do what I'm attempting to do?
Thanks in advance,
Dan
A:
Why using regex? String methods would do just fine:
>>> s = """#commented out block
uncommented block
# commented block
""".splitlines()
>>> for line in s:
not line.lstrip().startswith('#')
False
True
False
A:
As SilentGhost indicated, a regular expression isn't the best solution to this problem, but I thought I'd address the negative look behind.
You thought of doing this:
(?<!#\s+)(commented)
This doesn't work, because the look behind needs a finite length. You could do something like this:
(?<!#)(\s+commented)
This would match the lines you want, but of course, you'd have to strip the whitespace off the comment group. Again, string manipulation is better for what you're doing, but I wanted to show how negative look behind could work since you were asking.
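For completeness, both points can be verified quickly (a sketch using only the standard re module):

```python
import re

# The variable-width form is rejected at compile time by Python's re module:
try:
    re.compile(r"(?<!#\s+)commented")
except re.error as exc:
    print("rejected:", exc)

# The fixed-width workaround compiles, and a commented line is not matched,
# because the lookbehind sees the '#' immediately before the whitespace:
pattern = re.compile(r"(?<!#)(\s+commented)")
print(pattern.search("# commented block"))   # no match
print(pattern.search("a commented block"))   # matches " commented"
```

As the other answers say, though, stripping and startswith remain the simpler route for this particular task.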
A:
>>> s = """#commented out block
... uncommented block
... # commented block
... """
>>> for i in s.splitlines():
... if not i.lstrip().startswith("#"):
... print i
...
uncommented block
| Negative lookbehind in Python regular expressions | I am trying to parse a list of data out of a file using python - however I don't want to extract any data that is commented out. An example of the way the data is structured is:
#commented out block
uncommented block
# commented block
I am trying to retrieve only the middle item, so I am trying to exclude the items with hashes at the start. The issue is that some hashes are directly next to the commented items and some aren't, and the expression I currently have only works if items are commented as in the first example above -
(?<!#)(commented)
I tried adding \s+ to the negative lookbehind but then I get a complaint that the expression does not have an obvious maximum length. Is there any way to do what I'm attempting to do?
Thanks in advance,
Dan
| [
"Why use a regex? String methods would do just fine:\n>>> s = \"\"\"#commented out block\nuncommented block\n# commented block\n\"\"\".splitlines()\n>>> for line in s:\n not line.lstrip().startswith('#')\n\n\nFalse\nTrue\nFalse\n\n",
"As SilentGhost indicated, a regular expression isn't the best solution to ... | [
6,
4,
0
] | [] | [] | [
"python",
"regex",
"string"
] | stackoverflow_0003980213_python_regex_string.txt |
Q:
higher level Python GUI toolkit, e.g. pass dict for TreeView/Grid
Started my first Python pet project using PyGTK. Though it is a really powerful GUI toolkit and looks excellent, I have some pet peeves. So I thought about transitioning to something else, as it's not yet too extensive. Had a look around on SO and python documentation, but didn't get a good overview.
What's nice about PyGTK:
Glade files
self.signal_autoconnect({...})
self.get_widget() as __getattr__
This is bugging me however:
manual gobject.idle_add(lambda: ... and False)
no standard functionality to save application/window states
TreeView needs array building
widget.get_selection().get_selected(), model.get_value(iter, liststore_index)
TreeView: Because this is the main interface element, it's the most distracting. Basically my application builds a list of dictionaries to be displayed name=column+row=>value. To display it using GTK there needs to be a manual conversion process, ordering, typecasts. This seems a lot of overhead, and I wished for something more object-oriented here. PyGtk has many abstractions atop gtk+ but still seems rather low-levelish. I'd prefer to pass my dict as-is and have columns pre-defined somehow. (GtkBuilder can predefine TreeView columns, but this doesn't solve the data representation overhead.)
When I get a mouse click on my TreeView list, I also have to convert everything back into my application data structures. And it's also irksome that PyGTK doesn't wrap gtk+ calls with gobject.idle itself, if run from a non-main thread. Right now there is a lot of GUI code that I believe shouldn't be necessary, or could be rationalized away.
So, are there maybe additional wrappers on top of PyGTK? Or which other toolkit supports simpler interfaces for displaying a Grid / TreeView? I've read a lot about wxPython being everyone's favourite, but it's less mature on Linux. And PyQT seems to be at mostly the same abstraction level as PyGTK. I haven't used TkInter much, so I don't know whether it has simpler interfaces, but it looks unattractive anyway. As does PyFLTK. PyJamas sounds fascinating, but is already too far out (this is a desktop application).
.
So, GUI toolkit with dict -> Grid display. Which would you pick?
.
Just as exhibit, this is my current TreeView mapping function. Sort of works, but I would rather have something standard:
#-- fill a treeview
#
# Adds treeviewcolumns/cellrenderers and liststore from a data dictionary.
# Its datamap and the table contents can be supplied in one or two steps.
# When new data gets applied, the columns aren't recreated.
#
# The columns are created according to the datamap, which describes cell
# mapping and layout. Columns can have multiple cellrenderers, but usually
# there is a direct mapping to a data source key from entries.
#
# datamap = [ # title width dict-key type, renderer, attrs
# ["Name", 150, ["titlerow", str, "text", {} ] ],
# [False, 0, ["interndat", int, None, {} ] ],
# ["Desc", 200, ["descriptn", str, "text", {} ], ["icon",str,"pixbuf",{}] ],
#
# An according entries list then would contain a dictionary for each row:
# entries = [ {"titlerow":"first", "interndat":123}, {"titlerow":"..."}, ]
# Keys not mentioned in the datamap get ignored, and defaults are applied
# for missing cols. All values must already be in the correct type however.
#
@staticmethod
def columns(widget, datamap=[], entries=[], pix_entry=False):
# create treeviewcolumns?
if (not widget.get_column(0)):
# loop through titles
datapos = 0
for n_col,desc in enumerate(datamap):
# check for title
if (type(desc[0]) != str):
datapos += 1 # if there is none, this is just an undisplayed data column
continue
# new tvcolumn
col = gtk.TreeViewColumn(desc[0]) # title
col.set_resizable(True)
# width
if (desc[1] > 0):
col.set_sizing(gtk.TREE_VIEW_COLUMN_FIXED)
col.set_fixed_width(desc[1])
# loop through cells
for var in xrange(2, len(desc)):
cell = desc[var]
# cell renderer
if (cell[2] == "pixbuf"):
rend = gtk.CellRendererPixbuf() # img cell
if (cell[1] == str):
cell[3]["stock_id"] = datapos # for stock icons
expand = False
else:
pix_entry = datapos
cell[3]["pixbuf"] = datapos
else:
rend = gtk.CellRendererText() # text cell
cell[3]["text"] = datapos
col.set_sort_column_id(datapos) # only on textual cells
# attach cell to column
col.pack_end(rend, expand=cell[3].get("expand",True))
# apply attributes
for attr,val in cell[3].iteritems():
col.add_attribute(rend, attr, val)
# next
datapos += 1
# add column to treeview
widget.append_column(col)
# finalize widget
widget.set_search_column(2) #??
widget.set_reorderable(True)
# add data?
if (entries):
#- expand datamap
vartypes = [] #(str, str, bool, str, int, int, gtk.gdk.Pixbuf, str, int)
rowmap = [] #["title", "desc", "bookmarked", "name", "count", "max", "img", ...]
if (not rowmap):
for desc in datamap:
for var in xrange(2, len(desc)):
vartypes.append(desc[var][3]) # content types
rowmap.append(desc[var][0]) # dict{} column keys in entries[] list
# create gtk array storage
ls = gtk.ListStore(*vartypes) # could be a TreeStore, too
# prepare for missing values, and special variable types
defaults = {
str: "",
unicode: u"",
bool: False,
int: 0,
gtk.gdk.Pixbuf: gtk.gdk.pixbuf_new_from_data("\0\0\0\0",gtk.gdk.COLORSPACE_RGB,True,8,1,1,4)
}
if gtk.gdk.Pixbuf in vartypes:
pix_entry = vartypes.index(gtk.gdk.Pixbuf)
# sort data into gtk liststore array
for row in entries:
# generate ordered list from dictionary, using rowmap association
row = [ row.get( skey , defaults[vartypes[i]] ) for i,skey in enumerate(rowmap) ]
# autotransform string -> gtk image object
if (pix_entry and type(row[pix_entry]) == str):
row[pix_entry] = gtk.gdk.pixbuf_new_from_file(row[pix_entry])
# add
ls.append(row) # had to be adapted for real TreeStore (would require additional input for grouping/level/parents)
# apply array to widget
widget.set_model(ls)
return ls
pass
A:
Try Kiwi, maybe? Especially with its ObjectList.
Update: I think Kiwi development has moved to PyGTKHelpers.
A:
I hadn't come across Kiwi before. Thanks, Johannes Sasongko.
Here are some more toolkits that I keep bookmarked. Some of these are wrappers around other toolkits (GTK, wxWidgets) while others stand alone:
AVC
Dabo
pyFLTK
pyglet
PyGTK
PyGUI
uxPython
Wax
wxPython
wxpita
WxWrappers
(I've included a few that were already mentioned for the sake of others who come across this post. I would have posted this as a comment, but it's a bit too long.)
A:
See also pygtkhelpers' ObjectList
A:
I would suggest taking a look at wxPython. I found it really easy to pick up and very powerful too although I'd have to admit I've not done a lot with Treeviews myself.
wxWidgets calls the equivalent control a wxTreeCtrl
[Edit] The wxDataViewTreeCtrl might actually be of more use in your case.
| higher level Python GUI toolkit, e.g. pass dict for TreeView/Grid | Started my first Python pet project using PyGTK. Though it is a really powerful GUI toolkit and looks excellent, I have some pet peeves. So I thought about transitioning to something else, as it's not yet too extensive. Had a look around on SO and python documentation, but didn't get a good overview.
What's nice about PyGTK:
Glade files
self.signal_autoconnect({...})
self.get_widget() as __getattr__
This is bugging me however:
manual gobject.idle_add(lambda: ... and False)
no standard functionality to save application/window states
TreeView needs array building
widget.get_selection().get_selected(), model.get_value(iter, liststore_index)
TreeView: Because this is the main interface element, it's the most distracting. Basically my application builds a list of dictionaries to be displayed name=column+row=>value. To display it using GTK there needs to be a manual conversion process, ordering, typecasts. This seems a lot of overhead, and I wished for something more object-oriented here. PyGtk has many abstractions atop gtk+ but still seems rather low-levelish. I'd prefer to pass my dict as-is and have columns pre-defined somehow. (GtkBuilder can predefine TreeView columns, but this doesn't solve the data representation overhead.)
When I get a mouse click on my TreeView list, I also have to convert everything back into my application data structures. And it's also irksome that PyGTK doesn't wrap gtk+ calls with gobject.idle itself, if run from a non-main thread. Right now there is a lot of GUI code that I believe shouldn't be necessary, or could be rationalized away.
So, are there maybe additional wrappers on top of PyGTK? Or which other toolkit supports simpler interfaces for displaying a Grid / TreeView? I've read a lot about wxPython being everyone's favourite, but it's less mature on Linux. And PyQT seems to be at mostly the same abstraction level as PyGTK. I haven't used TkInter much, so I don't know whether it has simpler interfaces, but it looks unattractive anyway. As does PyFLTK. PyJamas sounds fascinating, but is already too far out (this is a desktop application).
.
So, GUI toolkit with dict -> Grid display. Which would you pick?
.
Just as exhibit, this is my current TreeView mapping function. Sort of works, but I would rather have something standard:
#-- fill a treeview
#
# Adds treeviewcolumns/cellrenderers and liststore from a data dictionary.
# Its datamap and the table contents can be supplied in one or two steps.
# When new data gets applied, the columns aren't recreated.
#
# The columns are created according to the datamap, which describes cell
# mapping and layout. Columns can have multiple cellrenderers, but usually
# there is a direct mapping to a data source key from entries.
#
# datamap = [ # title width dict-key type, renderer, attrs
# ["Name", 150, ["titlerow", str, "text", {} ] ],
# [False, 0, ["interndat", int, None, {} ] ],
# ["Desc", 200, ["descriptn", str, "text", {} ], ["icon",str,"pixbuf",{}] ],
#
# An according entries list then would contain a dictionary for each row:
# entries = [ {"titlerow":"first", "interndat":123}, {"titlerow":"..."}, ]
# Keys not mentioned in the datamap get ignored, and defaults are applied
# for missing cols. All values must already be in the correct type however.
#
@staticmethod
def columns(widget, datamap=[], entries=[], pix_entry=False):
# create treeviewcolumns?
if (not widget.get_column(0)):
# loop through titles
datapos = 0
for n_col,desc in enumerate(datamap):
# check for title
if (type(desc[0]) != str):
datapos += 1 # if there is none, this is just an undisplayed data column
continue
# new tvcolumn
col = gtk.TreeViewColumn(desc[0]) # title
col.set_resizable(True)
# width
if (desc[1] > 0):
col.set_sizing(gtk.TREE_VIEW_COLUMN_FIXED)
col.set_fixed_width(desc[1])
# loop through cells
for var in xrange(2, len(desc)):
cell = desc[var]
# cell renderer
if (cell[2] == "pixbuf"):
rend = gtk.CellRendererPixbuf() # img cell
if (cell[1] == str):
cell[3]["stock_id"] = datapos # for stock icons
expand = False
else:
pix_entry = datapos
cell[3]["pixbuf"] = datapos
else:
rend = gtk.CellRendererText() # text cell
cell[3]["text"] = datapos
col.set_sort_column_id(datapos) # only on textual cells
# attach cell to column
col.pack_end(rend, expand=cell[3].get("expand",True))
# apply attributes
for attr,val in cell[3].iteritems():
col.add_attribute(rend, attr, val)
# next
datapos += 1
# add column to treeview
widget.append_column(col)
# finalize widget
widget.set_search_column(2) #??
widget.set_reorderable(True)
# add data?
if (entries):
#- expand datamap
vartypes = [] #(str, str, bool, str, int, int, gtk.gdk.Pixbuf, str, int)
rowmap = [] #["title", "desc", "bookmarked", "name", "count", "max", "img", ...]
if (not rowmap):
for desc in datamap:
for var in xrange(2, len(desc)):
vartypes.append(desc[var][3]) # content types
rowmap.append(desc[var][0]) # dict{} column keys in entries[] list
# create gtk array storage
ls = gtk.ListStore(*vartypes) # could be a TreeStore, too
# prepare for missing values, and special variable types
defaults = {
str: "",
unicode: u"",
bool: False,
int: 0,
gtk.gdk.Pixbuf: gtk.gdk.pixbuf_new_from_data("\0\0\0\0",gtk.gdk.COLORSPACE_RGB,True,8,1,1,4)
}
if gtk.gdk.Pixbuf in vartypes:
pix_entry = vartypes.index(gtk.gdk.Pixbuf)
# sort data into gtk liststore array
for row in entries:
# generate ordered list from dictionary, using rowmap association
row = [ row.get( skey , defaults[vartypes[i]] ) for i,skey in enumerate(rowmap) ]
# autotransform string -> gtk image object
if (pix_entry and type(row[pix_entry]) == str):
row[pix_entry] = gtk.gdk.pixbuf_new_from_file(row[pix_entry])
# add
ls.append(row) # had to be adapted for real TreeStore (would require additional input for grouping/level/parents)
# apply array to widget
widget.set_model(ls)
return ls
pass
| [
"Try Kiwi, maybe? Especially with its ObjectList.\nUpdate: I think Kiwi development has moved to PyGTKHelpers.\n",
"I hadn't come across Kiwi before. Thanks, Johannes Sasongko.\nHere are some more toolkits that I keep bookmarked. Some of these are wrappers around other toolkits (GTK, wxWidgets) while others sta... | [
5,
4,
3,
1
] | [] | [] | [
"gtk",
"pygtk",
"python",
"user_interface"
] | stackoverflow_0003136128_gtk_pygtk_python_user_interface.txt |
Q:
How to check the values in an instance with Python?
I have a python class/object as follows.
class Hello:
def __init__(self):
self.x = None
self.y = None
self.z = None
h = Hello()
h.x = 10
h.y = 20
# h.z is not set
I need to check if all the member variables are set (not None). How can I do that automatically?
for value in ??member variables in h??:
if value == None:
print 'value is not set'
A:
class Hello(object):
def __init__(self):
self.x = None
self.y = None
self.z = None
def is_all_set(self):
return all(getattr(self, attr) is not None for attr in self.__dict__)
though, as @delnan said, you should prefer to make it impossible for the class ever to be in an invalid state (this is referred to as preserving "class invariants")
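Applied to the instance from the question, the check behaves as expected (a quick sketch):

```python
class Hello(object):
    def __init__(self):
        self.x = None
        self.y = None
        self.z = None

    def is_all_set(self):
        # True only when every instance attribute has been given a value
        return all(getattr(self, attr) is not None for attr in self.__dict__)

h = Hello()
h.x = 10
h.y = 20
print(h.is_all_set())   # False -- h.z is still None
h.z = 30
print(h.is_all_set())   # True
```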
A:
I need to check if all the member variables are set (not None).
Why?
You need the member variables in order to do some processing in the class.
class Hello:
def __init__(self):
self.x = None
self.y = None
self.z = None
def some_processing( self ):
assert all( [self.x is not None, self.y is not None, self.z is not None] )
# Now do the processing.
| How to check the values in an instance with Python? | I have a python class/object as follows.
class Hello:
def __init__(self):
self.x = None
self.y = None
self.z = None
h = Hello()
h.x = 10
h.y = 20
# h.z is not set
I need to check if all the member variables are set (not None). How can I do that automatically?
for value in ??member variables in h??:
if value == None:
print 'value is not set'
| [
"class Hello(object):\n def __init__(self):\n self.x = None\n self.y = None\n self.z = None\n def is_all_set(self):\n return all(getattr(self, attr) is not None for attr in self.__dict__)\n\nthough, as @delnan said, you should prefer to make it impossible for the class to always be... | [
3,
0
] | [] | [] | [
"member",
"python"
] | stackoverflow_0003980374_member_python.txt |
Q:
How can my desktop application be notified of a state change on a remote server?
I'm creating a desktop application that requires authorization from a remote server before performing certain actions locally.
What's the best way to have my desktop application notified when the server approves the request for authorization? Authorization takes 20 seconds on average, 5 seconds minimum, with a 120 second timeout.
I considered polling the server every 3 seconds or so, but this would be hard to scale when I deploy the application more widely, and seems inelegant.
I have full control over the design of the server and client API. The server is using web.py on Ubuntu 10.10, Python 2.6.
A:
Does the remote end block while it does the authentication? If so, you can use a simple select to block till it returns.
Another way I can think of is to pass a callback URL to the authentication server asking it to call it when it's done so that your client app can proceed. Something like a webhook.
A:
You need to do something asynchronous. I've always been a big fan of twisted, which is a framework for doing async networking in python. It supports a lot of common protocols and has the plumbing to roll your own. If twisted seems a bit overkill, you can compare other async frameworks here
| How can my desktop application be notified of a state change on a remote server? | I'm creating a desktop application that requires authorization from a remote server before performing certain actions locally.
What's the best way to have my desktop application notified when the server approves the request for authorization? Authorization takes 20 seconds on average, 5 seconds minimum, with a 120 second timeout.
I considered polling the server every 3 seconds or so, but this would be hard to scale when I deploy the application more widely, and seems inelegant.
I have full control over the design of the server and client API. The server is using web.py on Ubuntu 10.10, Python 2.6.
| [
"Does the remote end block while it does the authentication? If so, you can use a simple select to block till it returns.\nAnother way I can think of is to pass a callback URL to the authentication server asking it to call it when it's done so that your client app can proceed. Something like a webhook.\n",
"You n... | [
0,
0
] | [] | [] | [
"authentication",
"authorization",
"polling",
"python",
"web.py"
] | stackoverflow_0003978739_authentication_authorization_polling_python_web.py.txt |
Q:
rotating tire rims of car opengl transformations
Here is the draw function that draws the parts of the car. In this function the car-rim part is checked along with a flag, and I need to rotate the tire rim as I move the car. Something is not working: the rims are rotated but displaced out of the car model when I press the up arrow key, though the car does move.
I also initialized self.fFlag = "false" in the initialize function:
def on_draw(self):
# Clears the screen and draws the car
# If needed, extra transformations may be set-up here
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
for name in self.parts:
colors = self.colors
color = colors.get(name, colors["default"])
glColor3f(*color)
if (name == 'Front Driver tire rim') & (self.fFlag == "true"):
bodyFace = self.mini.group(name)
glPushMatrix()
glRotatef(45,1,0,0)
# Drawing the rim
for face in bodyFace:
if len(face) == 3:
glBegin(GL_TRIANGLES)
elif len(face) == 4:
glBegin(GL_QUADS)
else:
glBegin(GL_POLYGON)
for i in face:
glNormal3f(*self.mini.normal(i))
glVertex3f(*self.mini.vertex(i))
glEnd()
glPopMatrix()
self.fFlag == "false"
else:
bodyFace = self.mini.group(name)
for face in bodyFace:
if len(face) == 3:
glBegin(GL_TRIANGLES)
elif len(face) == 4:
glBegin(GL_QUADS)
else:
glBegin(GL_POLYGON)
for i in face:
glNormal3f(*self.mini.normal(i))
glVertex3f(*self.mini.vertex(i))
glEnd()
def on_key_release(self, symbol, modifiers):
"""Process a key pressed event.
"""
if symbol == key.UP:
# Move car forward
# TODO
glTranslatef(0,-1,0)
self.fFlag = "true"
self.on_draw()
pass
Edited: I am trying to make the car rims rotate when I press the up arrow key, which moves the car forward.
A:
I would highly suggest posting this to the class forum. I don't think TJ would really like to see this, and it's very easy to find.
A:
You're almost certainly applying the rotation and translation in the wrong order, so that the rim is rotated about some point other than the center of the tire.
You might try doing the rotation in the MODELVIEW matrix and the translation in the PROJECTION matrix.
A:
In order to rotate a part about its own center, you need to translate it to the origin, rotate it, and translate it back.
So your
glRotatef(45,1,0,0) # rotate 45 deg about x axis (thru the world origin)
needs to be preceded and followed by translations.
See the accepted answer to this question.
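The same idea can be sketched in plain 2-D Python, independent of OpenGL — the call sequence glTranslatef(cx, cy, cz); glRotatef(angle, ...); glTranslatef(-cx, -cy, -cz) corresponds to translating the part to the origin, rotating, and translating back:

```python
import math

def rotate_about(point, center, angle_deg):
    """Rotate a 2-D point about an arbitrary center:
    translate to the origin, rotate, then translate back."""
    a = math.radians(angle_deg)
    x, y = point[0] - center[0], point[1] - center[1]
    return (x * math.cos(a) - y * math.sin(a) + center[0],
            x * math.sin(a) + y * math.cos(a) + center[1])

# The rim's own center stays put, so the rim spins in place instead of
# being swung around the world origin:
print(rotate_about((2.0, 1.0), (2.0, 1.0), 45))   # (2.0, 1.0)
```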
| rotating tire rims of car opengl transformations | Here is the draw function which draws the parts of the car, in this function car rims is checked and flag is checked, and i need to rotate the tire rim as i move the car. Something is not working since the rims are rotated but taken out from the car model, when i press up arrow key, but the car does move.
I also initialized self.fFlag = "false" in initialize function:
def on_draw(self):
# Clears the screen and draws the car
# If needed, extra transformations may be set-up here
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
for name in self.parts:
colors = self.colors
color = colors.get(name, colors["default"])
glColor3f(*color)
if (name == 'Front Driver tire rim') & (self.fFlag == "true"):
bodyFace = self.mini.group(name)
glPushMatrix()
glRotatef(45,1,0,0)
# Drawing the rim
for face in bodyFace:
if len(face) == 3:
glBegin(GL_TRIANGLES)
elif len(face) == 4:
glBegin(GL_QUADS)
else:
glBegin(GL_POLYGON)
for i in face:
glNormal3f(*self.mini.normal(i))
glVertex3f(*self.mini.vertex(i))
glEnd()
glPopMatrix()
self.fFlag == "false"
else:
bodyFace = self.mini.group(name)
for face in bodyFace:
if len(face) == 3:
glBegin(GL_TRIANGLES)
elif len(face) == 4:
glBegin(GL_QUADS)
else:
glBegin(GL_POLYGON)
for i in face:
glNormal3f(*self.mini.normal(i))
glVertex3f(*self.mini.vertex(i))
glEnd()
def on_key_release(self, symbol, modifiers):
"""Process a key pressed event.
"""
if symbol == key.UP:
# Move car forward
# TODO
glTranslatef(0,-1,0)
self.fFlag = "true"
self.on_draw()
pass
Edited: I am trying to make the car rims rotate when I press the up arrow key, which moves the car forward.
| [
"I would highly suggest posting this to the class forum. I don't think TJ would really like to see this, and its very easy to find.\n",
"You're almost certainly applying the rotation and translation in the wrong order, so that the rim is rotated about some point other than the center of the tire.\nYou might tr... | [
2,
1,
1
] | [] | [] | [
"opengl",
"python"
] | stackoverflow_0003950829_opengl_python.txt |
Q:
Extract all <script> tags
Could someone tell me how I can extract and remove all the <script> tags in a HTML document and add them to the end of the document, right before the </body></html>? I'd like to try and avoid using lxml please.
Thanks.
A:
The answer is simple and may miss many nuances. However, this should give you an idea of how to go about doing it and improving it in general. I am sure this can be improved, but you should be able to do that quickly with the help of the documentation.
Reference doc: http://www.crummy.com/software/BeautifulSoup/documentation.html
from bs4 import BeautifulSoup
doc = ['<html><script type="text/javascript">document.write("Hello World!")',
'</script><head><title>Page title</title></head>',
'<body><p id="firstpara" align="center">This is paragraph <b>one</b>.',
'<p id="secondpara" align="blah">This is paragraph <b>two</b>.',
'</html>']
soup = BeautifulSoup(''.join(doc))
for tag in soup.findAll('script'):
# Use extract to remove the tag
tag.extract()
# use simple insert
soup.body.insert(len(soup.body.contents), tag)
print soup.prettify()
Output:
<html>
<head>
<title>
Page title
</title>
</head>
<body>
<p id="firstpara" align="center">
This is paragraph
<b>
one
</b>
.
</p>
<p id="secondpara" align="blah">
This is paragraph
<b>
two
</b>
.
</p>
<script type="text/javascript">
document.write("Hello World!")
</script>
</body>
</html>
| Extract all <script> tags | Could someone tell me how I can extract and remove all the <script> tags in a HTML document and add them to the end of the document, right before the </body></html>? I'd like to try and avoid using lxml please.
Thanks.
| [
"The answer is simple and may miss many nuances. However, this should give you an idea of how to go about doing it, improving it in general. I am sure this can be improved but you should be able to do that quickly with help of the documentation.\nReference doc: http://www.crummy.com/software/BeautifulSoup/document... | [
6
] | [] | [] | [
"beautifulsoup",
"python"
] | stackoverflow_0003980740_beautifulsoup_python.txt |
Q:
Pickling an enum exposed by Boost.Python
Is it possible to pickle (using cPickle) an enum that has been exposed with Boost.Python? I have successfully pickled other objects using the first method described here, but none of that seems to apply for an enum type, and the objects don't seem to be pickleable by default.
A:
Not as they are in the module. I am given to understand that this is SUPPOSED to be possible, but the way the enum_ statement works prevents this.
You can work around this on the python side. Somewhere (probably in a __init__.py file) do something like this:
import yourmodule
def isEnumType(o):
return isinstance(o, type) and issubclass(o,int) and not (o is int)
def _tuple2enum(enum, value):
enum = getattr(yourmodule, enum)
e = enum.values.get(value,None)
if e is None:
e = enum(value)
return e
def _registerEnumPicklers():
from copy_reg import constructor, pickle
def reduce_enum(e):
enum = type(e).__name__.split('.')[-1]
return ( _tuple2enum, ( enum, int(e) ) )
constructor( _tuple2enum)
for e in [ e for e in vars(yourmodule).itervalues() if isEnumType(e) ]:
pickle(e, reduce_enum)
_registerEnumPicklers()
This will make everything pickle just fine.
| Pickling an enum exposed by Boost.Python | Is it possible to pickle (using cPickle) an enum that has been exposed with Boost.Python? I have successfully pickled other objects using the first method described here, but none of that seems to apply for an enum type, and the objects don't seem to be pickleable by default.
| [
"Not as they are in the module. I am given to understand that this is SUPPOSED to be possible, but the way the enum_ statement works prevents this. \nYou can work around this on the python side. Somewhere (probably in a __init__.py file) do something like this:\nimport yourmodule\n\ndef isEnumType(o):\n return i... | [
6
] | [] | [] | [
"boost_python",
"pickle",
"python"
] | stackoverflow_0003214969_boost_python_pickle_python.txt |
Q:
Reference encoding error byte in Python
Suppose I type line = line.decode('gb18030') and get the error
UnicodeDecodeError: 'gb18030' codec can't decode bytes in position 142-143: illegal multibyte sequence
Is there a nice way to automatically get the error bytes? That is, is there a way to get 142 & 143 or line[142:144] from a built-in command or module? Since I'm fairly confident that there will be only one such error, at most, per line, my first thought was along the lines of:
for i in range(len(line)):
try:
line[i].decode('gb18030')
except UnicodeDecodeError:
error = i
I don't know how to say this correctly, but gb18030 has variable byte length so this method fails once it gets to a Chinese character (2 bytes).
A:
try:
line = line.decode('gb18030')
except UnicodeDecodeError, e:
print "Error in bytes %d through %d" % (e.start, e.end)
A:
Access the start and end attributes of the caught exception object.
u = u'áiuê©'
try:
l = u.encode('latin-1')
print repr(l)
l.decode('utf-8')
except UnicodeDecodeError, e:
print e
print e.start, e.end
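Putting the two answers together (shown here in Python 3 except syntax; 0xFF is not a valid gb18030 lead byte, so it forces the error), the attributes let you slice the offending bytes straight out of the input:

```python
line = b"hello \xffworld"          # 0xFF cannot start a gb18030 sequence
try:
    line.decode("gb18030")
except UnicodeDecodeError as e:
    # e.start / e.end index the offending byte(s) in the input
    print(e.start, e.end)
    print(line[e.start:e.end])     # the bytes that failed to decode
```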
| Reference encoding error byte in Python | Suppose I type line = line.decode('gb18030') and get the error
UnicodeDecodeError: 'gb18030' codec can't decode bytes in position 142-143: illegal multibyte sequence
Is there a nice way to automatically get the error bytes? That is, is there a way to get 142 & 143 or line[142:144] from a built-in command or module? Since I'm fairly confident that there will be only one such error, at most, per line, my first thought was along the lines of:
for i in range(len(line)):
try:
line[i].decode('gb18030')
except UnicodeDecodeError:
error = i
I don't know how to say this correctly, but gb18030 has variable byte length so this method fails once it gets to a Chinese character (2 bytes).
| [
"try:\n line = line.decode('gb18030')\nexcept UnicodeDecodeError, e:\n print \"Error in bytes %d through %d\" % (e.start, e.end)\n\n",
"Access the start and end attributes of the caught exception object.\nu = u'áiuê©'\ntry:\n l = u.encode('latin-1')\n print repr(l)\n l.decode('utf-8')\nexcept UnicodeDeco... | [
2,
1
] | [] | [] | [
"encoding",
"python"
] | stackoverflow_0003980972_encoding_python.txt |
Q:
Yet another python import issue of mine
Looks like I am having a really tough day with Python imports. I am using Flask and am trying to organise my app structure. I am using it on GAE and thus have to put Python packages in my app itself. It looks something like this below:
-MyFolder
-flask
-werkzeug
-Myapp
- __init__.py
-templates
-static
-views.py
-blinker
As of now I import the blinker library into Myapp's __init__. But I wanted to organise these extra packages like blinker into a helper package so as to look like this.
-helper
-__init__.py
-blinker
(blinker's __init__.py file looks like this)
from blinker.base import.....
But when I try importing blinker into Myapp's __init__ using
from helper import blinker
I get an import error saying no module named blinker.base. Why does that happen? It looks as if Python searches for a blinker package outside of the current one.
A:
sys.path.append could also fit your purpose.
A:
Smells like you want to be using a relative import.
from .base import ...
| Yet another python import issue of mine | Looks like I am having a real tough day with python imports.I am using Flask and am trying to organise my app structure.I am using it on GAE and thus have to put python packages in my app itself. It looks something like this below:-
-MyFolder
-flask
-werkzeug
-Myapp
- __init__.py
-templates
-static
-views.py
-blinker
As of now I import the blinker library into Myapp's __init__. But I wanted to organise these extra packages like blinker into a helper package so as to look like this.
-helper
-__init__.py
-blinker
(blinker's __init__.py file looks like this)
from blinker.base import.....
But when I try importing blinker into Myapp's __init__ using
from helper import blinker
I get an import error saying no module named blinker.base. Why does that happen? It looks as if Python searches for a blinker package outside of the current one.
| [
"sys.path.append could also fit your purpose.\n",
"Smells like you want to be using a relative import.\nfrom .base import ...\n\n"
] | [
1,
0
] | [] | [] | [
"import",
"python"
] | stackoverflow_0003980717_import_python.txt |
Q:
Dynamic filenames
So I'm working on a program where I store data into multiple .txt files. The naming convention I want to use is file"xx" where the Xs are numbers, so file00, file01, ... all the way up to file20, and I want the variables assigned to them to be fxx (f00, f01, ...).
How would I access these files in Python using a for loop (or another method), so I don't have to type out open("fileXX") 21 times?
A:
The names are regular. You can create the list of filenames with a simple list comprehension.
["f%02d" % x for x in range(21)]
A:
Look into python's glob module.
It uses the usual shell wildcard syntax, so ?? would match any two characters, while * would match anything. You could use either f??.txt or f*.txt, but the former is a more strict match, and wouldn't match something like fact_or_fiction.txt while the latter would.
E.g.:
import glob
for filename in glob.iglob('f??.txt'):
    infile = file(filename)
    # Do stuff...
A:
An example of writing t to all files:
for x in range(21): # range(21) yields the integers 0 through 21-1, i.e. 0..20
    exec "f%02d = open('file%02d.txt', 'w')" % (x, x)
I use the exec statement but there's probably a better way. I hope you get the idea though.
NOTE: This method will give you the variable names fXX to work with later if needed. The last two lines are just examples. Not really needed if all you need is to assign fileXX.txt to fXX.
EDIT: Removed the other last two lines because it seemed that people just weren't too happy with me putting them there. Explanations for downvotes are always nice.
A:
I think the point is the OP wants to register the variables programagically:
for i in range( 21 ):
    locals()[ "f%02d" % i ] = open( "file%02d.txt" % i )
and then e.g.
print( f01 )
...
A:
files = [open("file%02d.txt" % x, "w") for x in xrange(0, 20+1)]
# now use files[0] to files[20], or loop
# example: write number to each file
for n, f in enumerate(files):
    f.write("%d\n" % n)
    f.close()
# reopen for next example
files = [open("file%02d.txt" % x, "r") for x in xrange(0, 20+1)]
# example: print first line of each file
for n, f in enumerate(files):
    print "%d: %s" % (n, f.readline())
| Dynamic filenames | So I'm working on a program where I store data into multiple .txt files. The naming convention I want to use is file"xx" where the Xs are numbers, so file00, file01, ... all the way up to file20, and I want the variables assigned to them to be fxx (f00, f01, ...).
How would I access these files in Python using a for loop (or anther method), so I don't have to type out open("fileXX") 21 times?
| [
"The names are regular. You can create the list of filenames with a simple list comprehension.\n[\"f%02d\"%x for x in range(1,21)]\n\n",
"Look into python's glob module. \nIt uses the usual shell wildcard syntax, so ?? would match any two characters, while * would match anything. You could use either f??.txt or ... | [
5,
1,
1,
1,
0
] | [] | [] | [
"filenames",
"python"
] | stackoverflow_0003484348_filenames_python.txt |
Q:
How to check if MAX has finished loading
I am trying to instantiate a program via a python script as follows
os.startfile( '"C:/Program Files/Autodesk/3ds Max 2010/3dsmax.exe"' )
since 3dsMax takes a bit of time to load, I wanna wait till it has finished loading completely. I check the task manager to see if 3dsmax10.exe is in the list, but it's in the list as soon as the process starts (obviously). So is there a way to find out if it has completely loaded or not ?
Any help is appreciated
Thanks
A:
Here is a bit of a hackish (not robust) solution. Start 3ds Max with a MAXScript script on the command-line. For example
c:\3dsmax\3dsmax -U MAXScript myscript.ms
As Parceval suggests, this script can create a new file using the MAXScript command:
createFile c:\tmp\myfile.txt
Next, in Python, wait until the file exists (don't forget to delete it first). So altogether:
while not os.path.exists(r'c:\tmp\myfile.txt'):
    time.sleep(1)
A:
Complete loading is a bit subjective. As far as I remember, 3ds Max performs plug-in loading (an absolutely major part of load time) while showing you the splash screen, and only after loading completes does it show its main window.
You can use this fact and poll the existing windows using WinAPI to detect when the main window has appeared. Use the title text or window class to find it among the others.
A:
You could put a simple MAXScript script in maxhome\Scripts\Startup, which will be run when Max has finished starting. This script could then create a file, or talk to your Python script through COM or TCP.
| How to check if MAX has finished loading | I am trying to instantiate a program via a python script as follows
os.startfile( '"C:/Program Files/Autodesk/3ds Max 2010/3dsmax.exe"' )
since 3dsMax takes a bit of time to load, I wanna wait till it has finished loading completely. I check the task manager to see if 3dsmax10.exe is in the list, but it's in the list as soon as the process starts (obviously). So is there a way to find out if it has completely loaded or not ?
Any help is appreciated
Thanks
| [
"Here is a bit of a hackish (not robust) solution. Starti 3ds Max with a MAXScript script on the command-line. For example\nc:\\3dsmax\\3dsmax -U MAXScript myscript.ms\n\nAs Parceval suggests, this script can create a new file using the MAXScript command:\ncreateFile c:\\tmp\\myfile.txt\n\nNext in Python wait until... | [
2,
1,
1
] | [] | [] | [
"3dsmax",
"python"
] | stackoverflow_0002927049_3dsmax_python.txt |
Q:
Python metaprogramming for XML parsing
I'm trying to create a simple XML parser where each different XML schema has it's own parser class but I can't figure out what the best way is. What I in effect would like to do is something like this:
in = sys.stdin
xmldoc = minidom.parse(in).documentElement
xmlParser = xmldoc.nodeName
parser = xmlParser()
out = parser.parse(xmldoc)
I'm also not quite sure if I'm getting the document root name correctly, but that's the idea: create an object of a class with the same name as the document root and use the parse() function in that class to parse and handle the input.
What would be the simplest way to achieve this? I've been reading about introspection and templates but haven't been able to figure this out yet. I've done a similar thing with Java in the past and AFAIK, Ruby also makes this simple. What's the pythonian way?
A:
I think most python programmers would just use lxml to parse their xml. If you still want to wrap that in classes you could, but as delnan said in his comment, it's a bit unclear what you really mean.
from lxml import etree
tree = etree.parse('my_doc.xml')
for element in tree.getroot():
    ...
A couple of side notes: if other programmers are going to be reading your code, you should try to at least roughly follow PEP 8. More importantly though, in is a reserved keyword, so you can't assign to it at all (and you shouldn't shadow builtin names either).
A:
As pointed out by Mark in his comment, to get a reference to a class that you know the name of at runtime, you use getattr.
doc = minidom.parse(sys.stdin)
# is equivalent to
doc = getattr(minidom, "parse")(sys.stdin)
Below is a corrected version of your pseudo-code.
from xml.dom import minidom
import sys
import myParsers # a module containing your parsers
xmldoc = minidom.parse(sys.stdin).documentElement
myParserName = xmldoc.nodeName
myParserClass = getattr(myParsers, myParserName)
# create an instance of myParserClass by calling it with the documentElement
parser = myParserClass(xmldoc)
# do whatever you want with the instance of your parser class
output = parser.generateOutput()
getattr will return an AttributeError if the attribute doesn't exist, so you can wrap the call in a try...except or pass a third argument to getattr, wich will be returned if the attribute isn't found.
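Since the myParsers module above is hypothetical, here is a tiny self-contained illustration of the three-argument getattr form (the class names are made up):

```python
class Parsers(object):
    # Stand-in for a module full of parser classes, keyed by root node name.
    class invoice(object):
        pass

def get_parser_class(container, name, default=None):
    # The third argument is returned instead of raising AttributeError
    # when no attribute called `name` exists on `container`.
    return getattr(container, name, default)

found = get_parser_class(Parsers, 'invoice')    # the class itself
missing = get_parser_class(Parsers, 'receipt')  # None, no exception raised
```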
| Python metaprogramming for XML parsing | I'm trying to create a simple XML parser where each different XML schema has it's own parser class but I can't figure out what the best way is. What I in effect would like to do is something like this:
in = sys.stdin
xmldoc = minidom.parse(in).documentElement
xmlParser = xmldoc.nodeName
parser = xmlParser()
out = parser.parse(xmldoc)
I'm not also quite sure if I get the document root name correctly, but that's the idea: create an object of a class with similar name to the document root and use the parse() function in that class to parse and handle the input.
What would be the simplest way to achieve this? I've been reading about introspection and templates but haven't been able to figure this out yet. I've done a similar thing with Java in the past and AFAIK, Ruby also makes this simple. What's the pythonian way?
| [
"I think most python programmers would just use lxml to parse their xml. If you still want to wrap that in classes you could, but as delnan said in his comment, it's a bit unclear what you really mean.\nfrom lxml import etree\n\ntree = etree.parse('my_doc.xml')\nfor element in tree.getroot():\n ...\n\nA couple ... | [
1,
1
] | [] | [] | [
"metaprogramming",
"python",
"xml"
] | stackoverflow_0003618246_metaprogramming_python_xml.txt |
Q:
how to input python code in run time and execute it?
Well, I want to input a Python function at run time and execute that piece of code 'n' number of times. For example, using Tkinter I create a textbox where the user writes the function and submits it, also mentioning how many times it should be executed. My program should be able to run that function as many times as mentioned by the user.
PS: I did think of an alternative method where the user writes the program to a file and then I simply execute it via python filename as a system command inside my Python program, but I don't want it that way.
A:
Python provides a number of ways to do this using function calls:
- eval()
- exec()
For your needs you should read about exec.
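For the textbox use case in the question, a minimal sketch might look like this (user_source and user_func are made-up names for illustration; note that exec runs arbitrary code, so only do this with input you trust):

```python
# Source as it might arrive from the Tkinter textbox (hypothetical).
user_source = """
def user_func():
    return "ran"
"""

def run_user_function(source, times):
    # Execute the submitted code in its own namespace so it cannot
    # clobber our globals, then call the defined function n times.
    namespace = {}
    exec(source, namespace)  # exec is a statement in Python 2, a function in Python 3
    func = namespace['user_func']
    return [func() for _ in range(times)]

results = run_user_function(user_source, 3)
```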
A:
That's what execfile() is for.
http://docs.python.org/library/functions.html#execfile
Create a temporary file.
Write the content of the textbox into the file.
Close.
Execfile.
Delete when done.
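Those steps could be sketched roughly like this (execfile(path, namespace) is Python 2 only; exec(open(path).read(), namespace) is the spelling that also works on Python 3):

```python
import os
import tempfile

# Pretend this came from the textbox.
textbox_contents = 'result = 6 * 7\n'

# Create a temporary file, write the textbox contents into it,
# execute it, and delete it when done.
fd, path = tempfile.mkstemp(suffix='.py')
try:
    with os.fdopen(fd, 'w') as f:
        f.write(textbox_contents)
    namespace = {}
    with open(path) as f:
        exec(f.read(), namespace)  # execfile(path, namespace) on Python 2
finally:
    os.remove(path)
```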
A:
Would IPython do?
Doc: http://ipython.scipy.org/moin/Documentation
| how to input python code in run time and execute it? | Well i want to input a python function as an input in run time and execute that part of code 'n' no of times. For example using tkinter i create a textbox where the user writes the function and submits it , also mentioning how many times it wants to be executed. My program should be able to run that function as many times as mentioned by the user.
Ps: i did think of an alternative method where the user can write the program in a file and then i can simply execute it as python filename as a system cmd inside my python program , but i dont want it that way.
| [
"Python provides number of ways to do this using function calls:\n- eval()\n- exec()\nFor your needs you should read about exec.\n",
"That's what execfile() is for.\nhttp://docs.python.org/library/functions.html#execfile\n\nCreate a temporary file.\nWrite the content of the textbox into the file.\nClose.\nExecfil... | [
2,
2,
0
] | [] | [] | [
"python"
] | stackoverflow_0003981357_python.txt |
Q:
PySide or PyQT SQLite support in Ubuntu
I am running Ubuntu 10.04 Lucid and am developing an application in Qt using Python. Today I tried to create a database binding to a SQLite database via QtSQL.QAddDatabase and got the following error:
QSqlDatabase: QSQLITE driver not loaded
QSqlDatabase: available drivers: QMYSQL3 QMYSQL
So obviously I don't have the SQLite driver...how can I add it to my install? I installed PySide to see if it included it (via the PPA)...same thing...no SQLite...maybe I can reconfigure and build the python-qt-sql package but I need instructions on how to do it...
A:
Does this help?
$> apt-cache search qt mysql
libqt3-mt-mysql - MySQL database driver for Qt3 (Threaded)
qtstalker - commodity and stock market charting and technical analysis
tora - A graphical toolkit for database developers and administrators
libqt4-sql-mysql - Qt 4 MySQL database driver
Sounds like the packages you need are there!
Edit: right, you said SQLite, sorry. Here:
$> apt-cache search qt sqlite
libqt3-mt-sqlite - SQLite database driver for Qt3 (Threaded)
sqlitebrowser - GUI editor for SQLite databases
strigi-daemon - fast indexing and searching tool for your personal data (daemon)
libqt4-sql-sqlite - Qt 4 SQLite 3 database driver
libqt4-sql-sqlite2 - Qt 4 SQLite 2 database driver
You can use the apt-cache tool to look for packages matching expressions. I posted the code above so you could do this by yourself in the future :).
A:
Got it solved...the QT4 SQLite driver was missing...used this:
sudo aptitude install libqt4-sql-sqlite
That solved it and now it works in PyQT and PySide.
| PySide or PyQT SQLite support in Ubuntu | I am running Ubuntu 10.04 Lucid and am developing a application in QT using Python. Today I tried to create a database binding to a SQLite database via QtSQL.QAddDatabase and got the following error:
QSqlDatabase: QSQLITE driver not loaded
QSqlDatabase: available drivers: QMYSQL3 QMYSQL
So obviously I don't have the SQLite driver...how can I add it to my install? I installed PySide to see if it included it (via the PPA)...same thing...no SQLite...maybe I can reconfigure and build the python-qt-sql package but I need instructions on how to to it...
| [
"Does this help?\n$> apt-cache search qt mysql\nlibqt3-mt-mysql - MySQL database driver for Qt3 (Threaded)\nqtstalker - commodity and stock market charting and technical analysis\ntora - A graphical toolkit for database developers and administrators\nlibqt4-sql-mysql - Qt 4 MySQL database driver\n\nSounds like the ... | [
3,
1
] | [] | [] | [
"pyqt",
"pyside",
"python",
"sqlite",
"ubuntu"
] | stackoverflow_0003980974_pyqt_pyside_python_sqlite_ubuntu.txt |
Q:
Serving static files with apache and mod_wsgi without changing apache's configuration?
I have a Django application, and I'm using a shared server hosting, so I cannot change apache's config files. The only thing that I can change is the .htaccess file in my application. I also have a standard django.wsgi python file, as an entry point.
In dev environment, I'm using Django to serve the static files, but it is discouraged in the official documentation, saying that you should do it using the web server instead.
Is there a way to serve static files through apache without having access to Apache's configuration, changing only the .htaccess or django.wsgi files??
A:
The first step is to add just
AddHandler wsgi-script .wsgi
to your .htaccess file with nothing else to establish wsgi as the handler. This will make requests to django.wsgi and django.wsgi/whatever go to your django app.
To make the django.wsgi part of the URL go away, you will need to use mod_rewrite. Hopefully your host has it enabled. An example is
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ /django.wsgi/$1 [QSA,PT,L]
which will serve the file if the URL matches a file, or be served by django if it doesn't. Another option would be
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !^/?static/
RewriteRule ^(.*)$ /django.wsgi/$1 [QSA,PT,L]
to make requests for /static/* go to the file itself and everything else to go through django.
Then you will need to hide django.wsgi from generated URLs. This can be done with a snippet like this in your django.wsgi
def _application(environ, start_response):
    # The original application.
    ...

import posixpath

def application(environ, start_response):
    # Wrapper to set SCRIPT_NAME to actual mount point.
    environ['SCRIPT_NAME'] = posixpath.dirname(environ['SCRIPT_NAME'])
    if environ['SCRIPT_NAME'] == '/':
        environ['SCRIPT_NAME'] = ''
    return _application(environ, start_response)
If this doesn't work out quite exactly right, then be sure to consult http://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines. I pulled most of the details of the answer from there but just got the right parts put together as a step by step.
A:
Serve the static files from a different virtual host.
| Serving static files with apache and mod_wsgi without changing apache's configuration? | I have a Django application, and I'm using a shared server hosting, so I cannot change apache's config files. The only thing that I can change is the .htaccess file in my application. I also have a standard django.wsgi python file, as an entry point.
In dev environment, I'm using Django to serve the static files, but it is discouraged in the official documentation, saying that you should do it using the web server instead.
Is there a way to serve static files through apache without having access to Apache's configuration, changing only the .htaccess or django.wsgi files??
| [
"The first step is to add just\nAddHandler wsgi-script .wsgi\n\nto your .htaccess file with nothing else to establish wsgi as the handler. This will make requests to django.wsgi and django.wsgi/whatever go to your django app.\nTo make the django.wsgi part of the URL go away, you will need to use mod_rewrite. Hopefu... | [
2,
1
] | [] | [] | [
".htaccess",
"apache",
"django",
"mod_wsgi",
"python"
] | stackoverflow_0003981267_.htaccess_apache_django_mod_wsgi_python.txt |
Q:
How do I use an external .py file?
I downloaded beautifulsoup.py for use on a little project I'm making. Do I need to import this .py file in my project?
Do I just copy and paste the code somewhere inside my current python script?
Thank you for the help.
I found this but it doesn't say anything regarding Windows.
http://mail.python.org/pipermail/tutor/2002-April/013953.html
I'm getting this error when using it. I copied and pasted the .py file to the folder where my project is in Windows Explorer, and now this happens. Any suggestions?
A:
If it's in the same directory as your little project, all you should need to do is:
import BeautifulSoup
If you are keeping it in some other directory, the easiest way to do it is:
from sys import path
path.append(path_to_Beautiful_Soup)
import BeautifulSoup
Python keeps track of where it is currently, and first looks in the current directory. Then it checks through all of the paths in sys.path for the module in question. If it cannot find it in any of those places, it throws an error.
A:
When you install beautifulsoup the canonical way (with easy_install for example, or with a windows installer, if any) the beautifulsoup module will probably be added to your PYTHONDIR\lib\site-packages directory.
This means
import beautifulsoup
should do the trick.
Otherwise, adding beautifulsoup.py (if it's a single file) to your current project directory and then issuing import beautifulsoup should also do the trick.
A:
You have several choices:
you can cut and paste in the code, assuming the license permits etc. however, what happens when the code is updated?
you can put the code into the same directory (ie folder) as your code. Then all you need to do is say import beautifulsoup before you try to use it.
you can put the code somewhere in the python load path.
A:
I've done it before: putting BeautifulSoup.py inside the same directory as the script and importing it works. If you have multiple scripts spread over different directories, put the beautifulsoup file in the root directory and do a relative import.
A:
You have to install a new package correctly in Python by using, on the command line:
pip install BeautifulSoup
If you don't know the name of the package, use:
pip search beautiful
and pip will list every package that has "beautiful" in its name or description...
One more thing that is very important: because you use Eclipse (it's the same for NetBeans) with PyDev, as I can see, you should refresh the list of packages used by PyDev whenever you install a new package. In Eclipse, go to Window -> Preferences -> PyDev -> Interpreter - Python and click Apply. Why? So that you can use the full power of PyDev correctly (code completion, F3, ...), because PyDev doesn't know that a package has been added until you tell it so.
The steps are for Eclipse; you can do their analog in NetBeans.
| How do I use an external .py file? | I downloaded beautifulsoup.py for use on a little project I'm making. Do I need to import this .py file in my project?
Do I just copy and paste the code somewhere inside my current python script?
Thank you for the help.
I found this but it doesn't say anything regarding Windows.
http://mail.python.org/pipermail/tutor/2002-April/013953.html
I'm getting this error when using it. I copied and pasted the .py file to the folder where my project was on Windows Explorer, not this happens. Any suggestions?
| [
"If it's in the same directory as your little project, all you should need to do is:\nimport BeautifulSoup\n\nIf you are keeping it in some other directory, the easiest way to do it is:\nfrom sys import path\npath.append(path_to_Beautiful_Soup)\n\nimport BeautifulSoup\n\nPython keeps track of where it is currently,... | [
11,
5,
2,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0003980059_python.txt |
Q:
Parser generation
I am doing a project on software plagiarism detection. I intend to do it for the C language; for that I am supposed to create a token generator and a parser, but I don't know where to start. Can anyone help me out with this?
I created a database of tokens and I separated the tokens from my program. The next thing I want to do is compare two programs to find out whether one is plagiarized from the other. For that I need to create a syntax analyzer. I don't know where to start from...
i.e. I want to create a parser for C programs in Python
A:
If you want to create a parser in Python you can look at these libraries:
PLY
pyparsing
and Lepl - new but very powerful
A:
Building a real C parser by yourself is a really big task.
I suggest you either find one that is already done, eg. pycparser or you define a really simple subset of C that is easily parsed.
You'll have plenty of work to do for your plagiarism detector after you are done parsing C.
A:
I'm not sure you need to parse the token stream to detect the features you're looking for. In fact, it's probably going to complicate things more than anything.
what you're really looking for is sequences of original source code that have a very strong similarity with a suspect sample code being tested. This sounds very similar to the purpose of a Bayes classifier, like those used in spam filtering and language detection.
| Parser generation | i am doing a project on SOFWARE PLAGIARISM DETECTION..i am intended to do it with language C..for that i am supposed to create a token generator, and a parser..but i dont know where to start..any one can help me out with this..
i created a database of tokens and i separated the tokens from my program.Next thing i wanna do is to compare two programs to find out whether it's plagiarized or not. For that i need to create a syntax analyzer.I don't know where to start from...
i.e I want to create a parser for c programs in python
| [
"If you want to create a parser in Python you can look at these libraries:\nPLY\npyparsing\nand Lepl - new but very powerful\n",
"Building a real C parser by yourself is a really big task.\nI suggest you either find one that is already done, eg. pycparser or you define a really simple subset of C that is easily p... | [
3,
1,
0
] | [] | [] | [
"parsing",
"plagiarism_detection",
"python"
] | stackoverflow_0003976665_parsing_plagiarism_detection_python.txt |
Q:
How can I deal with accented letters, german letters and other characters?
My python script is working now, but I'm having a little trouble:
Here is my code, and below it the output:
from BeautifulSoup import BeautifulSoup
import urllib
langCode={
    "arabic":"ar", "bulgarian":"bg", "chinese":"zh-CN",
    "croatian":"hr", "czech":"cs", "danish":"da", "dutch":"nl",
    "english":"en", "finnish":"fi", "french":"fr", "german":"de",
    "greek":"el", "hindi":"hi", "italian":"it", "japanese":"ja",
    "korean":"ko", "norwegian":"no", "polish":"pl", "portugese":"pt",
    "romanian":"ro", "russian":"ru", "spanish":"es", "swedish":"sv" }

def setUserAgent(userAgent):
    urllib.FancyURLopener.version = userAgent
    pass

def translate(text, fromLang, toLang):
    setUserAgent("Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008070400 SUSE/3.0.1-0.1 Firefox/3.0.1")
    try:
        postParameters = urllib.urlencode({"langpair":"%s|%s" %(langCode[fromLang.lower()],langCode[toLang.lower()]), "text":text,"ie":"UTF8", "oe":"UTF8"})
    except KeyError, error:
        print "Currently we do not support %s" %(error.args[0])
        return
    page = urllib.urlopen("http://translate.google.com/translate_t", postParameters)
    content = page.read()
    page.close()
    htmlSource = BeautifulSoup(content)
    translation = htmlSource.find('span', title=text )
    return translation.renderContents()
print translate("Good morning to you friend!", "English", "German")
print translate("Good morning to you friend!", "English", "Italian")
print translate("Good morning to you friend!", "English", "Spanish")
Guten Morgen, du Freund!
Buongiorno a te amico!
Buenos dÃas a ti amigo!
How do I manage the letters that aren't basic English letters? How would you recommend I solve this? I was thinking of a dictionary to replace certain strings with other characters, but I'm sure Python has something like this already. Batteries included and whatnot. :P
Thanks.
A:
Don't parse http://translate.google.com/translate_t since Google provides an AJAX service for this purpose. The translatedText in the json data returned by ajax.googleapis.com is already a unicode string.
import urllib2
import urllib
import sys
import json
LANG={
    "arabic":"ar", "bulgarian":"bg", "chinese":"zh-CN",
    "croatian":"hr", "czech":"cs", "danish":"da", "dutch":"nl",
    "english":"en", "finnish":"fi", "french":"fr", "german":"de",
    "greek":"el", "hindi":"hi", "italian":"it", "japanese":"ja",
    "korean":"ko", "norwegian":"no", "polish":"pl", "portugese":"pt",
    "romanian":"ro", "russian":"ru", "spanish":"es", "swedish":"sv" }

def translate(text,lang1,lang2):
    base_url='http://ajax.googleapis.com/ajax/services/language/translate?'
    langpair='%s|%s'%(LANG.get(lang1.lower(),lang1),
                      LANG.get(lang2.lower(),lang2))
    params=urllib.urlencode( (('v',1.0),
                              ('q',text.encode('utf-8')),
                              ('langpair',langpair),) )
    url=base_url+params
    content=urllib2.urlopen(url).read()
    try: trans_dict=json.loads(content)
    except AttributeError:
        try: trans_dict=json.load(content)
        except AttributeError: trans_dict=json.read(content)
    return trans_dict['responseData']['translatedText']
print translate("Good morning to you friend!", "English", "German")
print translate("Good morning to you friend!", "English", "Italian")
print translate("Good morning to you friend!", "English", "Spanish")
yields
Guten Morgen, du Freund!
Buongiorno a te amico!
Buenos días a ti amigo!
A:
Parse the proper charset from the headers returned by urlopen() and pass it as the fromEncoding argument to the BeautifulSoup constructor.
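In Python 2 the charset can usually be read with page.info().getparam('charset'); as a rough, hand-rolled fallback, you could pull it out of a raw Content-Type header string yourself and then pass the result as BeautifulSoup(content, fromEncoding=charset). A sketch:

```python
def charset_from_content_type(content_type, default='utf-8'):
    """Pull the charset parameter out of a Content-Type header value,
    e.g. 'text/html; charset=ISO-8859-1' -> 'iso-8859-1'."""
    for param in content_type.split(';')[1:]:
        name, _, value = param.strip().partition('=')
        if name.lower() == 'charset' and value:
            return value.strip('"\'').lower()
    return default
```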
| How can I deal with accented letters, german letters and other characters? | My python script is working now, but I'm having a little trouble:
Here is the output:
from BeautifulSoup import BeautifulSoup
import urllib
langCode={
"arabic":"ar", "bulgarian":"bg", "chinese":"zh-CN",
"croatian":"hr", "czech":"cs", "danish":"da", "dutch":"nl",
"english":"en", "finnish":"fi", "french":"fr", "german":"de",
"greek":"el", "hindi":"hi", "italian":"it", "japanese":"ja",
"korean":"ko", "norwegian":"no", "polish":"pl", "portugese":"pt",
"romanian":"ro", "russian":"ru", "spanish":"es", "swedish":"sv" }
def setUserAgent(userAgent):
urllib.FancyURLopener.version = userAgent
pass
def translate(text, fromLang, toLang):
setUserAgent("Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008070400 SUSE/3.0.1-0.1 Firefox/3.0.1")
try:
postParameters = urllib.urlencode({"langpair":"%s|%s" %(langCode[fromLang.lower()],langCode[toLang.lower()]), "text":text,"ie":"UTF8", "oe":"UTF8"})
except KeyError, error:
print "Currently we do not support %s" %(error.args[0])
return
page = urllib.urlopen("http://translate.google.com/translate_t", postParameters)
content = page.read()
page.close()
htmlSource = BeautifulSoup(content)
translation = htmlSource.find('span', title=text )
return translation.renderContents()
print translate("Good morning to you friend!", "English", "German")
print translate("Good morning to you friend!", "English", "Italian")
print translate("Good morning to you friend!", "English", "Spanish")
Guten Morgen, du Freund!
Buongiorno a te amico!
Buenos dÃas a ti amigo!
How do I manage the letters that aren't basic english letters? How would you recommend I solve this? I was thinking a dictionary to replace certain chains with another character, but I'm sure Python has something like this already. Batteries included and whatnot. :P
Thanks.
| [
"Don't parse http://translate.google.com/translate_t since Google provides an AJAX service for this purpose. The translatedText in the json data returned by ajax.googleapis.com is already a unicode string. \nimport urllib2\nimport urllib\nimport sys\nimport json\n\nLANG={\n \"arabic\":\"ar\", \"bulgarian\":\"bg\... | [
1,
0
] | [] | [] | [
"python",
"unicode"
] | stackoverflow_0003981732_python_unicode.txt |
Q:
Question regarding python profiling
I'm trying to do profiling of my application in python. I'm using the cProfile library. I need to profile the onFrame function of my application, but this is called by an outside application. I've tried loads of things, but at the moment I have the following in my onFrame method:
runProfiler(self)
and then outside of my class I have the following:
o = None

def doProfile ():
    print "doProfile invoked"
    o.structure.updateOrders

def runProfiler(self):
    print "runProfiler invoked"
    o = self
    cProfile.run('doProfile()', 'profile.log')
If this seems strange, it's because I've tried everything to get rid of the error "name doProfile not defined". Even now, the runProfiler method gets called, and the "runProfiler invoked" gets printed, but then I get the error just described. What am I doing wrong?
A:
In that case, you should pass the necessary context to cProfile.runctx:
cProfile.runctx("doProfile()", globals(), locals(), "profile.log")
A:
An alternative is to use the runcall method of a Profile object.
profiler = cProfile.Profile()
profiler.runcall(doProfile)
profiler.dump_stats("profile.log")
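Whichever way profile.log gets written, the standard pstats module can load and sort it afterwards. A small self-contained sketch (work() is just a stand-in for the real onFrame method):

```python
import cProfile
import os
import pstats
import tempfile

def work():
    # Trivial stand-in for the method being profiled.
    return sum(range(1000))

profiler = cProfile.Profile()
profiler.runcall(work)

# Dump the stats to a file, then read them back as you would profile.log.
fd, log_path = tempfile.mkstemp(suffix='.log')
os.close(fd)
profiler.dump_stats(log_path)

stats = pstats.Stats(log_path)
stats.sort_stats('cumulative')  # stats.print_stats(10) would show the top entries
os.remove(log_path)
```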
| Question regarding python profiling | I'm trying to do profiling of my application in python. I'm using the cProfile library. I need to profile the onFrame function of my application, but this is called by an outside application. I've tried loads of things, but at the moment I have the following in my onFrame method:
runProfiler(self)
and then outside of my class I have the following:
o = None
def doProfile ():
print "doProfile invoked"
o.structure.updateOrders
def runProfiler(self):
print "runProfiler invoked"
o = self
cProfile.run('doProfile()', 'profile.log')
If this seems strange, it's because I've tried everything to get rid of the error "name doProfile not defined". Even now, the runProfiler method gets called, and the "runProfiler invoked" gets printed, but then I get the error just described. What am I doing wrong?
| [
"In that case, you should pass the necessary context to cProfile.runctx:\ncProfile.runctx(\"doProfile()\", globals(), locals(), \"profile.log\")\n\n",
"An alternative is to use the runcall method of a Profile object.\nprofiler = cProfile.Profile()\nprofiler.runcall(doProfile)\nprofiler.dump_stats(\"profile.log\")... | [
1,
0
] | [] | [] | [
"profiling",
"python"
] | stackoverflow_0003981569_profiling_python.txt |
Q:
performance of modules in Python
Which is best: creating modules in separate files and importing them, or putting everything together in one file?
Is there any significant difference?
A:
This is the same as the discussion on .py vs .pyc files. Having modules allows them to be loaded faster through precompiled .pyc files. However negligible, this adds to performance, though execution speed itself remains the same.
Please look at the following for a detailed answer. Repeating it is not useful.
What is the difference between .py and .pyc files?
Why are main runnable Python scripts not compiled to pyc files like modules?
[Edit:]
Decomposing the solution in modules is always better for future maintenance and enhances readability. The performance is almost always not the primary reason for decomposition of solution into various modules.
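To make the loading-vs-execution distinction concrete, here is a hedged sketch (the module name is illustrative): py_compile does explicitly what the import machinery does automatically on first import, namely write cached bytecode, which only speeds up subsequent loads.

```python
import os
import py_compile
import tempfile

mod_dir = tempfile.mkdtemp()
src = os.path.join(mod_dir, 'mymod.py')
with open(src, 'w') as f:
    f.write('def answer():\n    return 42\n')

# Write the cached bytecode, exactly as import would do on first load.
pyc_path = py_compile.compile(src)
print(os.path.exists(pyc_path))  # True: the .pyc cache is now on disk
```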
| performance of modules in Python | Which is the best: create the modules and put them in a separate file and import them or put them all together in the same file?
Is there any significant difference?
| [
"Same as disscussion on .py and .pyc. Having modules allows you to load them faster through precompiled modules. How ever negligible, this adds to performance. Though execution speed remains the same.\nPlease look at the following for a detailed answer. Repeating it is not useful.\n\nWhat is the difference between ... | [
3
] | [] | [] | [
"performance",
"python"
] | stackoverflow_0003982021_performance_python.txt |
Q:
How can i extract files using custom names with zipfile module from python?
I want to add a suffix to the names of my files, for example a uuid. How can I extract files using zipfile and pass custom names?
A:
Use ZipFile.open() to open a read-only file-like to the file data, then copy it to a write-only file with the correct name using shutil.copyfileobj().
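A sketch of that approach (the uuid-based renaming scheme and the function name are my own illustration, not part of the question):

```python
import os
import shutil
import tempfile
import uuid
import zipfile

def extract_with_suffix(zip_path, dest_dir):
    """Extract every file in the archive under a custom, suffixed name."""
    os.makedirs(dest_dir, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if name.endswith('/'):
                continue  # skip directory entries
            root, ext = os.path.splitext(os.path.basename(name))
            target = os.path.join(dest_dir, '%s_%s%s' % (root, uuid.uuid4().hex, ext))
            # read-only member stream -> write-only file with the new name
            with zf.open(name) as src, open(target, 'wb') as dst:
                shutil.copyfileobj(src, dst)

# tiny demo archive
tmp = tempfile.mkdtemp()
zip_path = os.path.join(tmp, 'demo.zip')
with zipfile.ZipFile(zip_path, 'w') as zf:
    zf.writestr('hello.txt', 'hi there')
extract_with_suffix(zip_path, os.path.join(tmp, 'out'))
print(os.listdir(os.path.join(tmp, 'out')))
```

Unlike extract-then-rename, this never writes a file under its original name, so members with clashing names cannot collide on disk.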
A:
Step 1: Extract the files.
Step 2: Rename them.
| How can i extract files using custom names with zipfile module from python? | I want to add suffix to names of my files, for example uuid. How can i extract files using zipfile and pass custom names?
| [
"Use ZipFile.open() to open a read-only file-like to the file data, then copy it to a write-only file with the correct name using shutil.copyfileobj().\n",
"Step 1: Extract the files.\nStep 2: Rename them.\n"
] | [
4,
0
] | [] | [] | [
"python",
"python_zipfile"
] | stackoverflow_0003982034_python_python_zipfile.txt |
Q:
cron-like recurring task scheduler design
Say you want to schedule recurring tasks, such as:
Send email every wednesday at 10am
Create summary on the first day of every month
And you want to do this for a reasonable number of users in a web app - i.e. 100k users, where each user can decide what they want scheduled and when.
And you want to ensure that the scheduled items run, even if they were missed originally - eg. for some reason the email didn't get sent on wednesday at 10am, it should get sent out at the next checking interval, say wednesday at 11am.
How would you design that?
If you use cron to trigger your scheduling app every x minutes, what's a good way to implement the part that decides what should run at each point in time?
The cron-like implementations I've seen compare the current time to the trigger time for all specified items, but I'd like to deal with missed items as well.
I have a feeling there's a more clever design than the one I'm cooking up, so please enlighten me.
A:
There's 2 designs, basically.
One runs regularly and compares the current time to the scheduling spec (i.e. "Does this run now?"), and executes those that qualify.
The other technique takes the current scheduling spec and finds the NEXT time that the item should fire. Then, it compares the current time to all of those items whose "next time" is less than "current time", and fires those. Then, when an item is complete, it is rescheduled for the new "next time".
The first technique cannot handle "missed" items; the second technique can only handle items that were previously scheduled.
Specifically, consider that you have a schedule that runs once every hour, at the top of the hour.
So, say, 1pm, 2pm, 3pm, 4pm.
At 1:30pm, the scheduler goes down and stops executing any processes. It does not start again until 3:20pm.
Using the first technique, the scheduler will have fired the 1pm task, but not fired the 2pm, and 3pm tasks, as it was not running when those times passed. The next job to run will be the 4pm job, at, well, 4pm.
Using the second technique, the scheduler will have fired the 1pm task, and scheduled the next task at 2pm. Since the system was down, the 2pm task did not run, nor did the 3pm task. But when the system restarted at 3:20, it saw that it "missed" the 2pm task, and fired it off at 3:20, and then scheduled it again for 4pm.
Each technique has its ups and downs. With the first technique, you miss jobs. With the second technique you can still miss jobs, but it can "catch up" (to a point), but it may also run a job "at the wrong time" (maybe it's supposed to run at the top of the hour for a reason).
A benefit of the second technique is that if you reschedule at the END of the executing job, you don't have to worry about a cascading job problem.
Consider that you have a job that runs every minute. With the first technique, the job gets fired each minute. However, typically, if the job is not FINISHED within its minute, then you can potentially have 2 jobs running (one late in the process, the other starting up). This can be a problem if the job is not designed to run more than once simultaneously. And the problem can compound (if there's a real issue, after 10 minutes you have 10 jobs all fighting each other).
With the second technique, if you schedule at the end of the job, then if a job happens to run just over a minute, then you'll "skip" a minute and start up the following minute rather than run on top of itself. So, you can have a job scheduled for every minute actually run at 1:01pm, 1:03pm, 1:05pm, etc.
Depending on your job design, either of these can be "good" or "bad". There's no right answer here.
Finally, implementing the first technique is really, quite trivial compared to implementing the second. The code to determine if a cron string (say) matches a given time is simple compared to deriving what time a cron string will be valid NEXT. I know, and I have a couple hundred lines of code to prove it. It's not pretty.
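As a minimal sketch of the second technique (the Task shape and tick() helper are illustrative, not a production design): keep tasks in a priority queue keyed on their next fire time; each tick runs whatever is due, including anything missed while the process was down, and reschedules relative to "now", so a missed slot produces one catch-up run instead of a pile-up.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    next_run: float                        # the heap orders on this field only
    interval: float = field(compare=False)
    action: object = field(compare=False)  # any zero-argument callable

def tick(queue, now):
    """Run every task whose next_run has passed, then reschedule it."""
    while queue and queue[0].next_run <= now:
        task = heapq.heappop(queue)
        task.action()
        # Reschedule relative to 'now': one catch-up run for missed slots,
        # not one run per slot missed while the scheduler was down.
        task.next_run = now + task.interval
        heapq.heappush(queue, task)
```

Calling tick(queue, time.time()) from a cron-triggered entry point every few minutes gives exactly the catch-up behaviour described above.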
A:
In case you want to skip designing and start using something off the shelf, have a look at Celery. The scheduler is called celerybeat.
Edit:
Also relevant: How to send 100,000 emails weekly?
A:
Using a backing Java process with the Quartz scheduler is a potential solution. I believe Quartz should scale to this level reasonably well. See this related SO question: "How to scale the Quartz Scheduler"...
If you take a careful look at the Quartz documentation, I think you'll find that your concerns regarding triggering and missed executions are dealt with cleanly, and offer a number of suitable policies to choose from. In terms of scalability, I believe you can store jobs in a JDBC backing store.
Struck out, since the questioner was specifically looking for a design discussion...
If you framed your initial StackOverflow search prior to asking the question in terms of "task schedulers for Python", you would have turned this up: "An enterprise scheduler for python...". I strongly suggest looking for an existing implementation rather than attempting a NIH development for something like this, despite the great observations about how you might do this in the other answer. Given your stated scalability goals, you're biting off a fairly challenging task, and you should eliminate all other options before going down the from-scratch road on a topic as heavily developed as this one. One possible avenue to consider would be adaptation to the well-regarded Quartz via Jython, and determine whether your use cases could be handled in that context with minimal dipping into the Java world (presumably not your first choice).
| cron-like recurring task scheduler design | Say you want to schedule recurring tasks, such as:
Send email every wednesday at 10am
Create summary on the first day of every month
And you want to do this for a reasonable number of users in a web app - ie. 100k users each user can decide what they want scheduled when.
And you want to ensure that the scheduled items run, even if they were missed originally - eg. for some reason the email didn't get sent on wednesday at 10am, it should get sent out at the next checking interval, say wednesday at 11am.
How would you design that?
If you use cron to trigger your scheduling app every x minutes, what's a good way to implement the part that decides what should run at each point in time?
The cron-like implementations I've seen compare the current time to the trigger time for all specified items, but I'd like to deal with missed items as well.
I have a feeling there's a more clever design than the one I'm cooking up, so please enlighten me.
| [
"There's 2 designs, basically.\nOne runs regularly and compares the current time to the scheduling spec (i.e. \"Does this run now?\"), and executes those that qualify.\nThe other technique takes the current scheduling spec and finds the NEXT time that the item should fire. Then, it compares the current time to all ... | [
7,
4,
2
] | [] | [] | [
"cron",
"python",
"scheduling"
] | stackoverflow_0003980782_cron_python_scheduling.txt |
Q:
Django ImageField: files dont get uploaded
I implemented some ImageFields in my model and installed PIL (not the cleanest install). Things seem to work as I get an upload button in the admin and when I call the .url property in the view I get the string with the filename + its upload property.
The problem is that the file is not there; apparently it doesn't get uploaded once I save the model.
Any idea?
Thanks
Here's a sample of my code situation
models.py
class My_Model(models.Model):
    [...]
    image = models.ImageField(upload_to='images/my_models/main')
view.py
'image': query.my_model.image.url
result:
static/images/my_models/main/theimage.png
A:
Make sure that you're binding request.FILES to the form when POSTing, and that the form is declared as multi-part in the template
Here's the view from one of my applications:
@login_required
def submit(request):
    if request.method == 'POST':
        form = PhotoForm(request.POST, request.FILES)
        if form.is_valid():
            new = Photo(photo=request.FILES['photo'], name=request.POST['name'])
            new.save()
            return HttpResponseRedirect('/') # Redirect after POST
    else:
        form = PhotoForm()
    return render_to_response('app/submit.html', {'form': form}, context_instance=RequestContext(request))
and the PhotoForm class:
class PhotoForm(forms.ModelForm):
    class Meta:
        model = Photo
        fields = ('name', 'photo')
| Django ImageField: files dont get uploaded | I implemented some ImageFields in my model and installed PIL (not the cleanest install). Things seem to work as I get an upload button in the admin and when I call the .url property in the view I get the string with the filename + its upload property.
The problem is that the file is not there, apparently it doesnt get uploaded once I save the model.
Any idea?
Thanks
Here's a sample of my code situation
models.py
class My_Model(models.Model):
[...]
image = models.ImageField(upload_to = 'images/my_models/main')
view.py
'image': query.my_model.image.url
result:
static/images/my_models/main/theimage.png
| [
"Make sure that you're binding request.FILES to the form when POSTing, and that the form is declared as multi-part in the template\nHere's the view from one of my applications:\n@login_required\ndef submit(request):\n if request.method == 'POST':\n (Photo.objects.count()+1, request.FILES['photo'].name.spl... | [
1
] | [] | [] | [
"django",
"django_admin",
"django_models",
"python",
"python_imaging_library"
] | stackoverflow_0003981451_django_django_admin_django_models_python_python_imaging_library.txt |
Q:
Getting The Most Recent Data Item - Google App Engine - Python
I need to retrieve the most recent item added to a collection. Here is how I'm doing it:
class Box(db.Model):
    ID = db.IntegerProperty()

class Item(db.Model):
    box = db.ReferenceProperty(Box, collection_name='items')
    date = db.DateTimeProperty(auto_now_add=True)
#get most recent item
lastItem = box.items.order('-date')[0]
Is this an expensive way to do it? Is there a better way?
A:
If you are going to iterate over a list of boxes, that is a very bad way to do it. You will run an additional query for every box. You can easily see what is going on with Appstats.
If you are doing one of those per request, it may be ok, but it is not ideal. You might also want to use lastItem = box.items.order('-date').get(); get() will only return the first result to the app.
If possible it would be significantly faster to add a lastItem property to Box, or store the Box ID (your attribute) on Item. In other words, denormalize the data. If you are going to fetch a list of Boxes and their most recent item you need to use this type of approach.
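Framework aside, the denormalisation trade-off looks like this in plain Python (a toy sketch, not App Engine code): pay one extra write when an item is added so that reads never have to sort.

```python
import datetime

class Box:
    def __init__(self):
        self.items = []
        self.last_item = None  # denormalised: always the newest item

    def add_item(self, payload):
        item = (datetime.datetime.utcnow(), payload)
        self.items.append(item)
        self.last_item = item  # one extra write keeps every read O(1)
        return item
```

With the datastore, the same shape would mean updating a last_item reference (or a last_date field) on Box whenever an Item is created.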
| Getting The Most Recent Data Item - Google App Engine - Python | I need to retrieve the most recent item added to a collection. Here is how I'm doing it:
class Box(db.Model):
ID = db.IntegerProperty()
class Item(db.Model):
box = db.ReferenceProperty(Action, collection_name='items')
date = db.DateTimeProperty(auto_now_add=True)
#get most recent item
lastItem = box.items.order('-date')[0]
Is this an expensive way to do it? Is there a better way?
| [
"If you are going to iterate over a list of boxes, that is a very bad way to do it. You will run an additional query for every box. You can easily see what is going on with Appstats.\nIf you are doing one of those per request, it may be ok. But it is not ideal. you might also want to use: lastItem = box.items.... | [
4
] | [] | [] | [
"google_app_engine",
"google_cloud_datastore",
"gql",
"python"
] | stackoverflow_0003981997_google_app_engine_google_cloud_datastore_gql_python.txt |
Q:
Python: How do I redirect this output?
I'm calling rtmpdump via subprocess and trying to redirect its output to a file. The problem is that I simply can't redirect it.
I tried first setting up the sys.stdout to the opened file. This works for, say, ls, but not for rtmpdump. I also tried setting the sys.stderr just to make sure and it also didn't work.
I tried then using a ">> file" with the command line argument but again it doesn't seem to work.
Also for the record, for some reason, Eclipse prints rtmpdump's output even if I use subprocess.call instead of subprocess.check_output, and without having to call the print method. This is black magic!
Any suggestions?
Edit: Here's some sample code.
# /!\ note: need to use os.chdir first to get to the folder with rtmpdump!
command = './rtmpdump -r rtmp://oxy.videolectures.net/video/ -y 2007/pascal/bootcamp07_vilanova/keller_mikaela/bootcamp07_keller_bss_01 -a video -s http://media.videolectures.net/jw-player/player.swf -w ffa4f0c469cfbe1f449ec42462e8c3ba16600f5a4b311980bb626893ca81f388 -x 53910 -o test.flv'
split_command = shlex.split(command)
subprocess.call(split_command)
A:
sys.stdout is Python's idea of the parent's output stream.
In any case you want to change the child's output stream.
subprocess.call and subprocess.Popen take named parameters for the output streams.
So open the file you want to output to and then pass that as the appropriate argument to subprocess.
f = open("outputFile","wb")
subprocess.call(argsArray,stdout=f)
Your talk of using >> suggests you are using shell=True, or think you are passing your arguments to the shell. In any case it is better to use the array form of subprocess, which avoids an unnecessary process, and any weirdness from the shell.
EDIT:
So I downloaded RTMPDump and tried it out, it would appear the messages are appearing on stderr.
So with the following program, nothing appears on the program's output, and the rtmpdump logs went into the stderr.txt file:
#!/usr/bin/env python
import os
import subprocess
RTMPDUMP="./rtmpdump"
assert os.path.isfile(RTMPDUMP)
command = [RTMPDUMP,'-r','rtmp://oxy.videolectures.net/video/',
'-y','2007/pascal/bootcamp07_vilanova/keller_mikaela/bootcamp07_keller_bss_01',
'-a','video','-s',
'http://media.videolectures.net/jw-player/player.swf',
'-w','ffa4f0c469cfbe1f449ec42462e8c3ba16600f5a4b311980bb626893ca81f388'
,'-x','53910','-o','test.flv']
stdout = open("stdout.txt","wb")
stderr = open("stderr.txt","wb")
subprocess.call(command,stdout=stdout,stderr=stderr)
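The stderr point is easy to verify without rtmpdump: many command-line tools write their progress chatter to stderr, which a stdout-only redirect misses entirely. A self-contained check, using the Python interpreter itself as a stand-in child process:

```python
import subprocess
import sys

# A child that writes to both streams; sys.executable stands in for rtmpdump.
child = [sys.executable, '-c',
         "import sys; print('data'); print('log message', file=sys.stderr)"]

result = subprocess.run(child, capture_output=True, text=True)
print(repr(result.stdout))  # only 'data'; the log line went to stderr
print(repr(result.stderr))
```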
A:
See the link on getting the output from subprocess on SO
Getting the entire output from subprocess.Popen
https://stackoverflow.com/questions/tagged/subprocess
I guess the way would be to collect the output and write it to a file directly, or provide file descriptors to which your output can be written.
Something like this:
f = open('dump.txt', 'wb')
p = subprocess.Popen(args, stdout=f, stderr=subprocess.STDOUT, shell=True)
| Python: How do I redirect this output? | I'm calling rtmpdump via subprocess and trying to redirect its output to a file. The problem is that I simply can't redirect it.
I tried first setting up the sys.stdout to the opened file. This works for, say, ls, but not for rtmpdump. I also tried setting the sys.stderr just to make sure and it also didn't work.
I tried then using a ">> file" with the command line argument but again it doesn't seem to work.
Also for the record, for some reason, Eclipse prints rtmpdump's output even if I use subprocess.call instead of subprocess.check_output, and without having to call the print method. This is black magic!
Any suggestions?
Edit: Here's some sample code.
# /!\ note: need to use os.chdir first to get to the folder with rtmpdump!
command = './rtmpdump -r rtmp://oxy.videolectures.net/video/ -y 2007/pascal/bootcamp07_vilanova/keller_mikaela/bootcamp07_keller_bss_01 -a video -s http://media.videolectures.net/jw-player/player.swf -w ffa4f0c469cfbe1f449ec42462e8c3ba16600f5a4b311980bb626893ca81f388 -x 53910 -o test.flv'
split_command = shlex.split(command)
subprocess.call(split_command)
| [
"sys.stdout is the python's idea of the parent's output stream.\nIn any case you want to change the child's output stream.\nsubprocess.call and subprocess.Popen take named parameters for the output streams.\nSo open the file you want to output to and then pass that as the appropriate argument to subprocess.\nf = op... | [
21,
1
] | [] | [] | [
"python",
"subprocess"
] | stackoverflow_0003982577_python_subprocess.txt |
Q:
Consequences of changing __type__
I'm attempting to create what I believe (in my ignorance) is known as a class factory. Essentially, I've got a parent class that I'd like to take an __init__ argument and become one of several child classes. I found an example of this recommended on StackOverflow here, and it looks like this:
class Vehicle(object):
    def __init__(self, vtype):
        self.vtype = vtype
        if vtype == 'c':
            self.__class__ = Car
        elif vtype == 't':
            self.__class__ = Truck
I've heard that changing __class__ can be dangerous. Are there any negative consequences to this approach? I'd use a function to dynamically create objects, but it wouldn't work with the existing code I'm using. It expects a class where I plan to do the dynamic type change.
Thanks!
A:
I think a class factory is defined as a callable that returns a class (not an instance):
def vehicle_factory(vtype):
    if vtype == 'c':
        return Car
    if vtype == 't':
        return Truck

VehicleClass = vehicle_factory('c')
vehicle_instance_1 = VehicleClass(*args, **kwargs)
VehicleClass = vehicle_factory('t')
vehicle_instance_2 = VehicleClass(*args, **kwargs)
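A commenter on the question suggests overriding __new__() instead; a sketch of that idea (class names follow the question, error handling omitted) lets callers keep writing Vehicle(vtype) while getting the proper subclass back:

```python
class Vehicle:
    def __new__(cls, vtype):
        # Only dispatch when constructing the base class directly.
        if cls is Vehicle:
            cls = {'c': Car, 't': Truck}[vtype]
        return super().__new__(cls)

    def __init__(self, vtype):
        self.vtype = vtype

class Car(Vehicle):
    pass

class Truck(Vehicle):
    pass
```

Unlike reassigning self.__class__ mid-__init__, the instance has the right type from the moment it is created.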
| Consequences of changing __type__ | I'm attempting to create what a believe (in my ignorance) is known as a class factory. Essentially, I've got a parent class that I'd like to take an __init__ argument and become one of several child classes. I found an example of this recommended on StackOverflow here, and it looks like this:
class Vehicle(object):
def __init__(self, vtype):
self.vtype = vtype
if vtype=='c':
self.__class__ = Car
elif vtype == 't':
self.__class__ = Truck
I've heard that changing __type__ can be dangerous. Are there any negative approaches to this approach? I'd use a function to dynamically create objects, but it wouldn't work with the existing code I'm using. It expects a class where I plan to do the dynamic type change.
Thanks!
| [
"I think a class factory is defined as a callable that returns a class (not an instance):\ndef vehicle_factory(vtype):\n if vtype == 'c':\n return Car\n if vtype == 't':\n return Truck\n\nVehicleClass = vehicle_factory(c)\nvehicle_instance_1 = VehicleClass(*args, **kwargs)\nVehicleClass = vehicl... | [
1
] | [
"Don't do it this way. Override __new__() instead.\n"
] | [
-1
] | [
"factory",
"python",
"types"
] | stackoverflow_0003982566_factory_python_types.txt |
Q:
How to create QString in PyQt4?
>>> from PyQt4 import QtCore
>>> str = QtCore.QString('Hello')
AttributeError: 'module' object has no attribute 'QString'
>>> QtCore.QString._init_(self)
AttributeError: 'module' object has no attribute 'QString'
Yes, I've read QString Class Reference
Why can't I import QString from QtCore, as specified in the docs ?
A:
In Python 3, QString is automatically mapped to the native Python string by default:
The QString class is implemented as a mapped type that is automatically converted to and from a Python string. In addition a None is converted to a null QString. However, a null QString is converted to an empty Python string (and not None). (This is because Qt often returns a null QString when it should probably return an empty QString.)
The QChar and QStringRef classes are implemented as mapped types that are automatically converted to and from Python strings.
The QStringList class is implemented as a mapped type that is automatically converted to and from Python lists of strings.
The QLatin1Char, QLatin1String and QStringMatcher classes are not implemented.
http://pyqt.sourceforge.net/Docs/PyQt4/qstring.html
A:
From PyQt4 4.6+ in Python3 QString doesn't exist and you are supposed to use ordinary Python3 unicode objects (string literals). To do this so that your code will work in both Python 2.x AND Python 3.x you can do following:
try:
    from PyQt4.QtCore import QString
except ImportError:
    # we are using Python3 so QString is not defined
    QString = type("")
Depending on your use case you might get away with this simple hack.
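A quick check of the hack in an environment without PyQt4 (note that type("") is simply the built-in str, so existing call sites keep working unchanged):

```python
try:
    from PyQt4.QtCore import QString
except ImportError:
    # Python 3 / PyQt >= 4.6 with the new API: fall back to plain str
    QString = type("")

s = QString('Hello')
print(s)  # behaves as an ordinary string
```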
A:
In [1]: from PyQt4 import QtCore
In [2]: s = QtCore.QString('foo')
In [3]: s
Out[3]: PyQt4.QtCore.QString(u'foo')
A:
It depends on your import statement.
If you write
from PyQt4 import QtGui, QtCore
you must call QString with
yourstr = QtCore.QString('foo')
I think you've written this:
from PyQt4.QtGui import *
from PyQt4.QtCore import *
It's not really recommended, but then you would call QString with:
yourstr = QString('foo')
| How to create QString in PyQt4? | >>> from PyQt4 import QtCore
>>> str = QtCore.QString('Hello')
AttributeError: 'module' object has no attribute 'QString'
>>> QtCore.QString._init_(self)
AttributeError: 'module' object has no attribute 'QString'
Yes, I've read QString Class Reference
Why can't I import QString from QtCore, as specified in the docs ?
| [
"In Python 3, QString is automatically mapped to the native Python string by default:\n\nThe QString class is implemented as a mapped type that is automatically converted to and from a Python string. In addition a None is converted to a null QString. However, a null QString is converted to an empty Python string (a... | [
19,
18,
9,
2
] | [] | [] | [
"pyqt",
"python",
"user_interface"
] | stackoverflow_0001400858_pyqt_python_user_interface.txt |
Q:
Structuring Django Many-to-Many Relation
In writing an application for my school's yearbook committee, I've hit a bit of a dead end with modeling a specific relation. Currently I have a photo class
class Photo(models.Model):
    photo = models.ImageField(upload_to="user_photos/")
    name = models.CharField(blank=True, max_length=50)
    rating = models.IntegerField(default=1000)
    wins = models.IntegerField(default=0)
    matches = models.IntegerField(default=0)
and a user class
class UserProfile(models.Model):
    user = models.ForeignKey(User, unique=True)
    group = models.CharField(max_length=50)
both of which are working swimmingly. What I'd like to do is break it up so that a Photo will have global rating derived from votes of the entire userbase as well as a rating based only on the users votes on that photo. Unfortunately, I'm at a loss on how to structure this. My first thought was a ManyToMany field, but I was also thinking that something like breaking rating into its own model like this:
class Rating(models.Model):
    photo = models.ManyToOne(Photo)
    rating = models.IntegerField(default=1500)
could work.
Could a Django (or really, anyone who's slightly competent, because I know I'm not) guru point me in the proper direction on approaching this simple conundrum?
A:
You want a through table.
A:
You want to have a many-to-many field, but custom defined.
class Rating(models.Model):
    photo = models.ForeignKey(Photo)
    user = models.ForeignKey(User)
    rating = models.IntegerField(default=1500)

class Photo(models.Model):
    photo = models.ImageField(upload_to="user_photos/")
    name = models.CharField(blank=True, max_length=50)
    rating = models.ManyToManyField(User, through='Rating')
    wins = models.IntegerField(default=0)
    matches = models.IntegerField(default=0)
you can then query it either having a photo object or a user object.
This approach is naive and won't scale well. On a very popular website the Rating model would be overloaded with requests. What a more professional website would do is denormalise the dependency by introducing a redundant integer field such as rating_total, and set up a cron job to update it periodically, so that you don't need to construct a query through a many-to-many relationship but can get the result from a field straight away.
A:
I do not exactly understand your question, but for representing a many-to-one relation in Django you use a ForeignKey field! But as Ignacio already pointed out, a many-to-many relation with an intermediary model might also be useful for you!
A:
I don't quite understand this sentence: "Photo will have global rating derived from votes of the entire userbase as well as a rating based only on the users votes on that photo."
But if you mean that you want a rating field holding the sum of all users' ratings, plus another one holding each individual user's rating of the photo, you should first consider whether it is worth it: for calculated fields you have the choice between physically storing such values (the global rating) in the database or calculating them as needed. You should make the decision based on how often the computed value is needed and how much information is needed to calculate it.
Note that storing the computed value means recomputing it each time a user's rating changes.
| Structuring Django Many-to-Many Relation | In writing an application for my school's yearbook committee, I've hit a bit of a dead end with modeling a specific relation. Currently I have a photo class
class Photo(models.Model):
photo = models.ImageField(upload_to="user_photos/")
name = models.CharField(blank=True, max_length=50)
rating = models.IntegerField(default=1000)
wins = models.IntegerField(default=0)
matches = models.IntegerField(default=0)
and a user class
class UserProfile(models.Model):
user = models.ForeignKey(User, unique=True)
group = models.CharField(max_length=50)
both of which are working swimmingly. What I'd like to do is break it up so that a Photo will have global rating derived from votes of the entire userbase as well as a rating based only on the users votes on that photo. Unfortunately, I'm at a loss on how to structure this. My first thought was a ManyToMany field, but I was also thinking that something like breaking rating into its own model like this:
class Rating(models.Model)
photo = models.ManyToOne(Photo)
rating = models.IntegerField(default=1500)
could work.
Could a Django (or really, anyone who's slightly competent, because I know I'm not) guru point me in the proper direction on approaching this simple conundrum?
| [
"You want a through table.\n",
"you want to have a many-to-many field, but custom defined.\n\nclass Rating(models.Model):\n photo = models.ForeignKey(Photo)\n user = models.ForeignKey(User)\n rating = models.IntegerField(default=1500)\n\nclass Photo(models.Model):\n photo = models.ImageField(upload_to... | [
2,
1,
0,
0
] | [] | [] | [
"django",
"django_models",
"python"
] | stackoverflow_0003982260_django_django_models_python.txt |
Q:
python on xp: errno 13 permission denied - limits to number of files in folder?
I'm running Python 2.6.2 on XP. I have a large number of text files (100k+) spread across several folders that I would like to consolidate in a single folder on an external drive.
I've tried using shutil.copy() and shutil.copytree() and distutils.file_util.copy_file() to copy files from source to destination. None of these methods has successfully copied all files from a source folder, and each attempt has ended with IOError Errno 13 Permission Denied and I am unable to create a new destination file.
I have noticed that all the destination folders I've used, regardless of the source folders used, have ended up with exactly 13,106 files. I cannot open any new files for writing in folders that have this many (or more files), which may be why I'm getting Errno 13.
I'd be grateful for suggestions on whether and why this problem is occurring.
many thanks,
nick
A:
Are you using FAT32? The maximum number of directory entries in a FAT32 folder is 65,534. If a filename is longer than 8.3, it will take more than one directory entry. If you are conking out at 13,106, this indicates that each filename is long enough to require five directory entries.
Solution: Use an NTFS volume; it does not have per-folder limits and supports long filenames natively (that is, instead of using multiple 8.3 entries). The total number of files on an NTFS volume is limited to around 4.3 billion, but they can be put in folders in any combination.
A:
I wouldn't have that many files in a single folder; it is a maintenance nightmare. BUT if you need to, don't do this on FAT: you have at most 64k files in a FAT folder.
Read the error message
Your specific problem could also be that you, as the error message suggests, are hitting a file which you can't access. And there's no reason to believe that the count of files until this happens should change. It is a computer after all, and you are repeating the same operation.
A:
I predict that your external drive is formatted FAT32 and that the filenames you're writing to it are somewhere around 45 characters long.
FAT32 can only have 65536 directory entries in a directory. Long file names use multiple directory entries each. And "." always takes up one entry. That you are able to write 65536/5 - 1 = 13106 entries strongly suggests that your filenames take up 5 entries each and that you have a FAT32 filesystem. This is because there exists code using 16-bit numbers as directory entry offsets.
Additionally, you do not want to search through multi-1000 entry directories in FAT -- the search is linear. I.e. fopen(some_file) will induce the OS to march linearly through the list of files, from the beginning every time, until it finds some_file or marches off the end of the list.
Short answer: Directories are a good thing.
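The arithmetic behind that estimate, sketched in Python (the 13-characters-per-entry figure is the VFAT long-filename scheme; treat the helper as illustrative):

```python
import math

def dir_entries_needed(filename):
    """Directory entries a long filename consumes on FAT32 (VFAT scheme):
    one 8.3 short entry plus one long-name entry per 13 characters."""
    return 1 + math.ceil(len(filename) / 13)

# Five entries per file matches filenames in the 40-52 character range:
print(dir_entries_needed('a' * 45))
# 65,536 slots at five per file, minus the one used by '.', gives the ceiling:
print(65536 // 5 - 1)
```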
| python on xp: errno 13 permission denied - limits to number of files in folder? | I'm running Python 2.6.2 on XP. I have a large number of text files (100k+) spread across several folders that I would like to consolidate in a single folder on an external drive.
I've tried using shutil.copy() and shutil.copytree() and distutils.file_util.copy_file() to copy files from source to destination. None of these methods has successfully copied all files from a source folder, and each attempt has ended with IOError Errno 13 Permission Denied and I am unable to create a new destination file.
I have noticed that all the destination folders I've used, regardless of the source folders used, have ended up with exactly 13,106 files. I cannot open any new files for writing in folders that have this many (or more files), which may be why I'm getting Errno 13.
I'd be grateful for suggestions on why this problem is occurring and whether it can be avoided.
many thanks,
nick
| [
"Are you using FAT32? The maximum number of directory entries in a FAT32 folder is 65.534. If a filename is longer than 8.3, it will take more than one directory entry. If you are conking out at 13,106, this indicates that each filename is long enough to require five directory entries.\nSolution: Use an NTFS vo... | [
2,
0,
0
] | [] | [] | [
"python",
"windows_xp"
] | stackoverflow_0003982881_python_windows_xp.txt |
Q:
How to add xml header to dom object
I'm using Python's xml.dom.minidom but I think the question is valid for any DOM parser.
My original file has a line like this at the beginning:
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
This doesn't seem to be part of the dom, so when I do something like dom.toxml() the resulting string doesn't have that line at the beginning.
How can I add it?
example output:
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<Root xmlns:aid="http://xxxxxxxxxxxxxxxxxx">
<Section>BANDSAW BLADES</Section>
</Root>
hope to be clear.
A:
This doesn't seem to be part of the dom
The XML Declaration doesn't get a node of its own, no, but the properties declared in it are visible on the Document object:
>>> doc= minidom.parseString('<?xml version="1.0" encoding="utf-8" standalone="yes"?><a/>')
>>> doc.encoding
'utf-8'
>>> doc.standalone
True
Serialising the document should include the standalone="yes" part of the declaration, but toxml() doesn't. You could consider this a bug, perhaps, but really the toxml() method doesn't make any promises to serialise the XML declaration in an appropriate way. (e.g. you don't get an encoding unless you specifically ask for it either.)
You could take charge of writing the document yourself:
xml= []
xml.append('<?xml version="1.0" encoding="utf-8" standalone="yes"?>')
for child in doc.childNodes:
    xml.append(child.toxml())
print ''.join(xml)
but do you really need the XML Declaration here? You are using the default version and encoding, and since you have no DOCTYPE there can be no externally-defined entities, so the document is already standalone by nature. As per the XML standard: “if there are no external markup declarations, the standalone document declaration has no meaning”. It seems to me you could safely omit it completely.
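To illustrate the point about toxml(): asking for an encoding does bring the declaration back, but standalone="yes" is still dropped (at least by default), which is why writing the prolog yourself is the reliable route. A quick sketch:

```python
from xml.dom.minidom import parseString

doc = parseString(
    '<?xml version="1.0" encoding="utf-8" standalone="yes"?><a/>')

# With an explicit encoding, minidom emits the XML declaration
# (as bytes), but the standalone pseudo-attribute is not round-tripped.
out = doc.toxml(encoding='utf-8')
```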
| How to add xml header to dom object | I'm using Python's xml.dom.minidom but I think the question is valid for any DOM parser.
My original file has a line like this at the beginning:
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
This doesn't seem to be part of the dom, so when I do something like dom.toxml() the resulting string have not line at the beginning.
How can I add it?
example outpupt:
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<Root xmlns:aid="http://xxxxxxxxxxxxxxxxxx">
<Section>BANDSAW BLADES</Section>
</Root>
hope to be clear.
| [
"\nThis doesn't seem to be part of the dom\n\nThe XML Declaration doesn't get a node of its own, no, but the properties declared in it are visible on the Document object:\n>>> doc= minidom.parseString('<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"yes\"?><a/>')\n>>> doc.encoding\n'utf-8'\n>>> doc.standalone... | [
2
] | [] | [] | [
"dom",
"python",
"xml"
] | stackoverflow_0003982887_dom_python_xml.txt |
Q:
Always return proper URL no matter what the user enters?
I have the following python code
from urlparse import urlparse
def clean_url(url):
new_url = urlparse(url)
if new_url.netloc == '':
return new_url.path.strip().decode()
else:
return new_url.netloc.strip().decode()
print clean_url("http://www.facebook.com/john.doe")
print clean_url("http://facebook.com/john.doe")
print clean_url("facebook.com/john.doe")
print clean_url("www.facebook.com/john.doe")
print clean_url("john.doe")
In each example I take in a string and return it. This is not what I want. I am trying to take each example and always return "http://www.facebook.com/john.doe" even if they just type www.* or just john.doe.
I am fairly new to programming so please be gentle.
A:
I know this answer is a little late to the party, but if this is exactly what you're trying to do, I recommend a slightly different approach. Rather than reinventing the wheel for canonicalizing facebook urls, consider using the work that Google has already done for use with their Social Graph API.
They've already implemented patterns for a number of similar sites, including facebook. More information on that is here:
http://code.google.com/p/google-sgnodemapper/
A:
import urlparse
p = urlparse.urlsplit("john.doe")
=> ('','','john.doe','','')
The first element of the tuple is the scheme and should end up as "http", the second is the network location and should be "www.facebook.com"; you can leave the query and fragment elements of the tuple alone. You can then reassemble your URL with urlparse.urlunsplit() after processing it.
Just an FYI, to ensure a safe url segment for 'john.doe' (this may not apply to facebook, but its a good rule to know) use urllib.quote(string) to properly escape whitespace, etc.
A:
I am not very sure if I understood what you asked, but you can try this code. I tested it and it works fine, but if you have trouble with it let me know.
I hope it helps
#!/usr/bin/env python
import urlparse
def clean_url(url):
url_list = []
# split values into tuple
url_tuple = urlparse.urlsplit(url)
# as tuples are immutable so take this to a list
# so we can change the values that we need
for element in url_tuple:
url_list.append(element)
# validate each element individually
url_list[0] = 'http'
url_list[1] = 'www.facebook.com'
# get user name from the original url
# ** I understood the user is the only value
# for sure in the url, right??
user = url.split('/')
if len(user) == 1:
# the user was the only value sent
url_list[2] = user[0]
else:
# get the last element of the list
url_list[2] = user[len(user)-1]
# convert the list into a tuple and
# get all the elements together in the url again
new_url = urlparse.urlunsplit(tuple(url_list))
return new_url
if __name__ == '__main__':
print clean_url("http://www.facebook.com/john.doe")
print clean_url("http://facebook.com/john.doe")
print clean_url("facebook.com/john.doe")
print clean_url("www.facebook.com/john.doe")
print clean_url("john.doe")
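A more compact take on the same idea, written against Python 3's urllib.parse (the question used Python 2's urlparse), assuming the username is always the last path segment of whatever the user typed:

```python
from urllib.parse import urlsplit

def clean_url(url):
    # urlsplit puts scheme-less input entirely into .path, so fall
    # back to the raw string and take the last path segment as the user.
    parts = urlsplit(url)
    user = (parts.path or url).rstrip('/').split('/')[-1]
    return 'http://www.facebook.com/' + user
```

This handles all five of the question's inputs, though it shares the same caveat as the answer above: it assumes the only thing worth keeping is the trailing segment.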
| Always return proper URL no matter what the user enters? | I have the following python code
from urlparse import urlparse
def clean_url(url):
new_url = urlparse(url)
if new_url.netloc == '':
return new_url.path.strip().decode()
else:
return new_url.netloc.strip().decode()
print clean_url("http://www.facebook.com/john.doe")
print clean_url("http://facebook.com/john.doe")
print clean_url("facebook.com/john.doe")
print clean_url("www.facebook.com/john.doe")
print clean_url("john.doe")
In each example I take in a string and return it. This is not what I want. I am trying to take each example and always return "http://www.facebook.com/john.doe" even if they just type www.* or just john.doe.
I am fairly new to programming so please be gentle.
| [
"I know this answer is a little late to the party, but if this is exactly what you're trying to do, I recommend a slightly different approach. Rather than reinventing the wheel for canonicalizing facebook urls, consider using the work that Google has already done for use with their Social Graph API.\nThey've alread... | [
1,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0003938674_python.txt |
Q:
rename files with python - regex
I am wanting to rename 1k files using python. they are all in the format somejunkDATE.doc
basically, I would like to delete all the junk, and only leave the date. I am unsure how to match this for all files in a directory.
thanks
A:
If your date format is the same throughout, just use slicing
>>> file="someJunk20101022.doc"
>>> file[-12:]
'20101022.doc'
>>> import os
>>> os.rename(file, file[-12:])
If you want to check whether the numbers are valid dates, pass file[-12:-4] (the date without the .doc extension) to the time or datetime module to check.
Say your files are all in a directory (no sub directories)
import os
import glob
import datetime,time #as required
os.chdir("/mypath")
for files in glob.glob("*.doc"):
newfilename = files[-12:]
# here to check date if desired
try:
os.rename(files,newfilename)
except OSError,e:
print e
else: print "ok"
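Since the question asked for a regex: a hedged sketch, assuming the date is eight digits (YYYYMMDD) sitting right before the .doc extension (`date_name` and the pattern are hypothetical):

```python
import re

DATE_RE = re.compile(r'\d{8}\.doc$')

def date_name(filename):
    # Return the trailing "DATE.doc" part, or None when no date is found,
    # so callers can skip files that don't match.
    m = DATE_RE.search(filename)
    return m.group(0) if m else None
```

You would then call os.rename(f, date_name(f)) inside the glob loop above, skipping files where it returns None.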
| rename files with python - regex | I am wanting to rename 1k files using python. they are all in the format somejunkDATE.doc
basically, I would like to delete all the junk, and only leave the date. I am unsure how to match this for all files in a directory.
thanks
| [
"If your date format is the same throughout, just use slicing\n>>> file=\"someJunk20101022.doc\"\n>>> file[-12:]\n'20101022.doc'\n>>> import os\n>>> os.rename(file, file[-12:]\n\nIf you want to check if the numbers are valid dates, pass file[-12:-3] to time or datetime module to check.\nSay your files are all in a ... | [
8
] | [] | [] | [
"file",
"python",
"regex",
"rename"
] | stackoverflow_0003983309_file_python_regex_rename.txt |
Q:
Check facebook object type
In my app, I have a form where user should submit a facebook page URL.
How to check that it's correct?
Presently, I'm just checking that it begins with 'http://www.facebook.com'
How can I check that it is a page (where you can become a fan) and not a profile, event or whatever?
I'm using the python api and appengine.
Thanks!
A:
You could hit up the graph api with the id and see what you get back.
https://graph.facebook.com/{OBJECTID}
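Building on that: in the 2010-era Graph API, page objects carried a "category" field that user profiles and events generally lacked, so one hedged check (the field name is an assumption based on that API version; `is_facebook_page` is a hypothetical helper) could look like:

```python
import json

def is_facebook_page(graph_obj):
    # Pages in the Graph API response carry a "category" field;
    # profiles and events generally do not (assumption: 2010-era API).
    return isinstance(graph_obj, dict) and 'category' in graph_obj

# In the app you would fetch https://graph.facebook.com/{OBJECTID}
# and pass json.loads(response_body) to is_facebook_page().
```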
| Check facebook object type | In my app, I have a form where user should submit a facebook page URL.
How to check that it's correct?
Presently, I'm just checking that it begins with 'http://www.facebook.com'
How can I check that it is a page (where you can become a fan) and not a profile, event or whatever?
I'm using the python api and appengine.
Thanks!
| [
"You could hit up the graph api with the id and see what you get back.\nhttps://graph.facebook.com/{OBJECTID}\n"
] | [
0
] | [] | [] | [
"facebook",
"python"
] | stackoverflow_0003983168_facebook_python.txt |
Q:
Python Class Decorator
I am trying to decorate an actual class, using this code:
def my_decorator(cls):
def wrap(*args, **kw):
return object.__new__(cls)
return wrap
@my_decorator
class TestClass(object):
def __init__(self):
print "__init__ should run if object.__new__ correctly returns an instance of cls"
test = TestClass() # shouldn't TestClass.__init__() be run here?
I get no errors, but I also don't see the message from TestClass.__init__().
According to the docs for new-style classes:
Typical implementations create a new instance of the class by invoking the superclass’s __new__() method using super(currentclass, cls).__new__(cls[, ...]) with appropriate arguments and then modifying the newly-created instance as necessary before returning it.
If __new__() returns an instance of cls, then the new instance’s __init__() method will be invoked like __init__(self[, ...]), where self is the new instance and the remaining arguments are the same as were passed to __new__().
Any ideas why __init__ isn't running?
Also, I have tried to call __new__ like this:
return super(cls.__bases__[0], cls).__new__(cls)
but it would return a TypeError:
TypeError: super.__new__(TestClass): TestClass is not a subtype of super
A:
__init__ isn't running because object.__new__ doesn't know to call it. If you change it to
cls.__call__(*args, **kwargs), or better, cls(*args, **kwargs), it should work. Remember that a class is a callable: calling it produces a new instance. Just calling __new__ returns an instance but doesn't go through the initialization. An alternative would be to call __new__ and then manually call __init__ but this is just replacing the logic that is already embodied in __call__.
The documentation that you quote is referring to calling super from within the __new__ method of the class. Here, you are calling it from the outside and not in the usual way as I've already discussed.
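Putting that fix into the question's decorator gives a working sketch (the `initialized` attribute is just for demonstration):

```python
def my_decorator(cls):
    def wrap(*args, **kw):
        # Calling the class itself runs __new__ AND __init__
        return cls(*args, **kw)
    return wrap

@my_decorator
class TestClass(object):
    def __init__(self):
        self.initialized = True

test = TestClass()  # __init__ now runs
```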
A:
Couldn't tell you the reason but this hack does run __init__
def my_decorator(cls):
print "In my_decorator()"
def wrap(*args, **kw):
print "In wrap()"
obj = object.__new__(cls)
cls.__init__(obj, *args, **kw)
return obj
return wrap
@my_decorator
class TestClass(object):
def __init__(self):
print "__init__ should run if object.__new__ correctly returns an instance of cls"
| Python Class Decorator | I am trying to decorate an actual class, using this code:
def my_decorator(cls):
def wrap(*args, **kw):
return object.__new__(cls)
return wrap
@my_decorator
class TestClass(object):
def __init__(self):
print "__init__ should run if object.__new__ correctly returns an instance of cls"
test = TestClass() # shouldn't TestClass.__init__() be run here?
I get no errors, but I also don't see the message from TestClass.__init__().
According to the docs for new-style classes:
Typical implementations create a new instance of the class by invoking the superclass’s __new__() method using super(currentclass, cls).__new__(cls[, ...]) with appropriate arguments and then modifying the newly-created instance as necessary before returning it.
If __new__() returns an instance of cls, then the new instance’s __init__() method will be invoked like __init__(self[, ...]), where self is the new instance and the remaining arguments are the same as were passed to __new__().
Any ideas why __init__ isn't running?
Also, I have tried to call __new__ like this:
return super(cls.__bases__[0], cls).__new__(cls)
but it would return a TypeError:
TypeError: super.__new__(TestClass): TestClass is not a subtype of super
| [
"__init__ isn't running because object.__new__ doesn't know to call it. If you change it to \ncls.__call__(*args, **kwargs), or better, cls(*args, **kwargs), it should work. Remember that a class is a callable: calling it produces a new instance. Just calling __new__ returns an instance but doesn't go through the i... | [
10,
0
] | [] | [] | [
"decorator",
"python"
] | stackoverflow_0003983378_decorator_python.txt |
Q:
CSRF error in Django; How can I add CSRF to my login view?
I have a simple form I want users to be able to log into; here is the template code with the CSRF tag in it:
<html>
<head><title>My Site</title></head>
<body>
<form action="" method="post">{% csrf_token %}
<label for="username">User name:</label>
<input type="text" name="username" value="" id="username">
<label for="password">Password:</label>
<input type="password" name="password" value="" id="password">
<input type="submit" value="login" />
<input type="hidden" name="next" value="{{ next|escape }}" />
</form>
</body>
</html>
Now here is my views.py page. The question is where do I put in the CSRF supporting part (right now I get a CSRF token error) in my view and how do I do it?
from django.contrib import auth
def login_view(request):
username = request.POST.get('username', '')
password = request.POST.get('password', '')
user = auth.authenticate(username=username, password=password)
if user is not None and user.is_active:
# Correct password, and the user is marked "active"
auth.login(request, user)
# Redirect to a success page
return HttpResponseRedirect("/account/loggedin/")
else:
# Show an error page
        return HttpResponseRedirect("/account/invalid/")
def logout_view(request):
A:
You have to add the RequestContext to the view that renders the page with the {% csrf_token %} line in it. Here is the example from the tutorial:
# The {% csrf_token %} tag requires information from the request object, which is
# not normally accessible from within the template context. To fix this,
# a small adjustment needs to be made to the detail view, so that it looks
# like the following:
#
from django.template import RequestContext
# ...
def detail(request, poll_id):
p = get_object_or_404(Poll, pk=poll_id)
return render_to_response('polls/detail.html', {'poll': p},
context_instance=RequestContext(request))
The context_instance=RequestContext(request) part is the important part. This makes the RequestContext available to the form template when it is rendered.
| CSRF error in Django; How can I add CSRF to my login view? | I have a simple form I want users to be able to log into; here is the template code with the CSRF tag in it:
<html>
<head><title>My Site</title></head>
<body>
<form action="" method="post">{% csrf_token %}
<label for="username">User name:</label>
<input type="text" name="username" value="" id="username">
<label for="password">Password:</label>
<input type="password" name="password" value="" id="password">
<input type="submit" value="login" />
<input type="hidden" name="next" value="{{ next|escape }}" />
</form>
</body>
</html>
Now here is my views.py page. The question is where do I put in the CSRF supporting part (right now I get a CSRF token error) in my view and how do I do it?
from django.contrib import auth
def login_view(request):
username = request.POST.get('username', '')
password = request.POST.get('password', '')
user = auth.authenticate(username=username, password=password)
if user is not None and user.is_active:
# Correct password, and the user is marked "active"
auth.login(request, user)
# Redirect to a success page
return HttpResponseRedirect("/account/loggedin/")
else:
# Show an error page
        return HttpResponseRedirect("/account/invalid/")
def logout_view(request):
| [
"You have to add the RequestContext to the view that renders the page with the {% csrf_token %} line in it. Here is the example from the tutorial:\n# The {% csrf_token %} tag requires information from the request object, which is \n# not normally accessible from within the template context. To fix this, \n# a smal... | [
5
] | [] | [] | [
"csrf",
"django",
"django_csrf",
"python"
] | stackoverflow_0003983474_csrf_django_django_csrf_python.txt |
Q:
How do you generate xml from non string data types using minidom?
How do you generate xml from non string data types using minidom? I have a feeling someone is going to tell me to generate strings beforehand, but this is not what I'm after.
from datetime import datetime
from xml.dom.minidom import Document
num = "1109"
bool = "false"
time = "2010-06-24T14:44:46.000"
doc = Document()
Submission = doc.createElement("Submission")
Submission.setAttribute("bool",bool)
doc.appendChild(Submission)
Schedule = doc.createElement("Schedule")
Schedule.setAttribute("id",num)
Schedule.setAttribute("time",time)
Submission.appendChild(Schedule)
print doc.toprettyxml(indent=" ",encoding="UTF-8")
This is the result:
<?xml version="1.0" encoding="UTF-8"?>
<Submission bool="false">
<Schedule id="1109" time="2010-06-24T14:44:46.000"/>
</Submission>
How do I get valid xml representations of non-string datatypes?
from datetime import datetime
from xml.dom.minidom import Document
num = 1109
bool = False
time = datetime.now()
doc = Document()
Submission = doc.createElement("Submission")
Submission.setAttribute("bool",bool)
doc.appendChild(Submission)
Schedule = doc.createElement("Schedule")
Schedule.setAttribute("id",num)
Schedule.setAttribute("time",time)
Submission.appendChild(Schedule)
print doc.toprettyxml(indent=" ",encoding="UTF-8")
File "C:\Python25\lib\xml\dom\minidom.py", line 299, in _write_data
data = data.replace("&amp;", "&amp;amp;").replace("&lt;", "&amp;lt;")
AttributeError: 'bool' object has no attribute 'replace'
A:
The bound method setAttribute expects its second argument, the value, to be a string. You can help the process along by converting the data to strings:
bool = str(False)
or, converting to strings when you call setAttribute:
Submission.setAttribute("bool",str(bool))
(and of course, the same must be done for num and time).
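One way to centralize those conversions is a small helper around setAttribute (a sketch; `set_attr` is hypothetical). Note that str(False) yields "False", while XML Schema booleans are lowercase, so the helper special-cases bool, and it uses isoformat() for datetimes:

```python
from datetime import datetime
from xml.dom.minidom import Document

def set_attr(elem, name, value):
    # Convert common non-string Python types to XML-friendly text
    # before handing them to setAttribute().
    if isinstance(value, bool):
        value = 'true' if value else 'false'
    elif isinstance(value, datetime):
        value = value.isoformat()
    elem.setAttribute(name, str(value))

doc = Document()
submission = doc.createElement("Submission")
set_attr(submission, "bool", False)
set_attr(submission, "id", 1109)
set_attr(submission, "time", datetime(2010, 6, 24, 14, 44, 46))
doc.appendChild(submission)
```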
| How do you generate xml from non string data types using minidom? | How do you generate xml from non string data types using minidom? I have a feeling someone is going to tell me to generate strings beforehand, but this is not what I'm after.
from datetime import datetime
from xml.dom.minidom import Document
num = "1109"
bool = "false"
time = "2010-06-24T14:44:46.000"
doc = Document()
Submission = doc.createElement("Submission")
Submission.setAttribute("bool",bool)
doc.appendChild(Submission)
Schedule = doc.createElement("Schedule")
Schedule.setAttribute("id",num)
Schedule.setAttribute("time",time)
Submission.appendChild(Schedule)
print doc.toprettyxml(indent=" ",encoding="UTF-8")
This is the result:
<?xml version="1.0" encoding="UTF-8"?>
<Submission bool="false">
<Schedule id="1109" time="2010-06-24T14:44:46.000"/>
</Submission>
How do I get valid xml representations of non-string datatypes?
from datetime import datetime
from xml.dom.minidom import Document
num = 1109
bool = False
time = datetime.now()
doc = Document()
Submission = doc.createElement("Submission")
Submission.setAttribute("bool",bool)
doc.appendChild(Submission)
Schedule = doc.createElement("Schedule")
Schedule.setAttribute("id",num)
Schedule.setAttribute("time",time)
Submission.appendChild(Schedule)
print doc.toprettyxml(indent=" ",encoding="UTF-8")
File "C:\Python25\lib\xml\dom\minidom.py", line 299, in _write_data
data = data.replace("&amp;", "&amp;amp;").replace("&lt;", "&amp;lt;")
AttributeError: 'bool' object has no attribute 'replace'
| [
"The bound method setAttribute expects its second argument, the value, to be a string. You can help the process along by converting the data to strings:\nbool = str(False)\n\nor, converting to strings when you call setAttribute:\nSubmission.setAttribute(\"bool\",str(bool))\n\n(and of course, the same must be done ... | [
3
] | [] | [] | [
"minidom",
"python",
"xml"
] | stackoverflow_0003983890_minidom_python_xml.txt |
Q:
Django template: Why block in included template can't be overwritten by child template?
To illustrate my question more clearly, let's suppose I have a include.html template with content:
{% block test_block %}This is include{% endblock %}
I have another template called parent.html with content like this:
This is parent
{% include "include.html" %}
Now I create a template called child.html that extends parent.html:
{% extends "parent.html" %}
{% block test_block %}This is child{% endblock %}
My idea is that when rendering child.html, the test_block in child.html can overwrite the one in include.html. As per my understanding, when a template is included, it is included as-is. So in my case, I think parent.html is effectively equivalent to:
This is parent
{% block test_block %}This is include{% endblock %}
So child.html should be able to overwrite test_block. But looks like it can't. Why? Is there a workaround?
A:
When you include a template, it renders the template, then includes the rendered content.
From the django docs:
The include tag should be considered as an implementation of "render this subtemplate and include the HTML", not as "parse this subtemplate and include its contents as if it were part of the parent". This means that there is no shared state between included templates -- each include is a completely independent rendering process.
A workaround would be to have the child template extend the included template instead of the including template. Then, include the child template.
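Concretely, that workaround looks like this (the template names are hypothetical):

```
{# include.html — keep the overridable block here #}
{% block test_block %}This is include{% endblock %}

{# child_include.html — extends the *included* template #}
{% extends "include.html" %}
{% block test_block %}This is child{% endblock %}

{# parent.html — include the child, not include.html directly #}
This is parent
{% include "child_include.html" %}
```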
| Django template: Why block in included template can't be overwritten by child template? | To illustrate my question more clearly, let's suppose I have a include.html template with content:
{% block test_block %}This is include{% endblock %}
I have another template called parent.html with content like this:
This is parent
{% include "include.html" %}
Now I create a template called child.html that extends parent.html:
{% extends "parent.html" %}
{% block test_block %}This is child{% endblock %}
My idea is that when rendering child.html, the test_block in child.html can overwrite the one in include.html. As per my understanding, when a template is included, it is included as-is. So in my case, I think parent.html is effectively equivalent to:
This is parent
{% block test_block %}This is include{% endblock %}
So child.html should be able to overwrite test_block. But looks like it can't. Why? Is there a workaround?
| [
"When you include a template, it renders the template, then includes the rendered content.\nFrom the django docs:\n\nThe include tag should be considered as an implementation of \"render this subtemplate and include the HTML\", not as \"parse this subtemplate and include its contents as if it were part of the paren... | [
13
] | [] | [] | [
"django",
"extend",
"include",
"python",
"templates"
] | stackoverflow_0003983872_django_extend_include_python_templates.txt |
Q:
Is there any reason for using classes in Python if there is only one class in the program?
I've seen some people writing Python code by creating one class and then an object to call all the methods. Is there any advantage of using classes if we don't make use of inheritance, encapsulation etc? Such code seems to me less clean with all these 'self' arguments, which we could avoid. Is this practice an influence from other programming languages such as Java or is there any good reason why Python programs should be structured like this?
example code:
class App:
# all the methods go here
a = App()
A:
One advantage, though not always applicable, is that it makes it easy to extend the program by subclassing the one class. For example I can subclass it and override the method that reads from, say, a csv file to reading an xml file and then instantiate the subclass or original class based on run-time information. From there, the logic of the program can proceed normally.
That, of course, raises the question of whether reading the file is really the responsibility of this class, or whether it more properly belongs to a class that has subclasses for reading different types of data and presents a uniform interface to that data -- but that is another question.
Personally, I find that the cleanest way to do it is to put functions which make strong assumptions about their parameters on the appropriate class as methods and to put functions which make very weak assumptions about their arguments in the module as functions.
A:
I often do the same myself for the manager parts of an app which, in theory, could be reduced to simple, very sequential, functional programming.
The main advantage to it (for me) is that it encapsulates your main loop or your run-once execution well, and it allows you to configure that run and persist data efficiently and cleanly across blocks, fundamentally re-configuring it as needed without having to change the code itself. Not to mention the ability to subclass an execution into a different, extended one.
It also tends to be a lot easier to expand the main that way than it is when you have a solid block of 200 lines throwing around rather cryptically scoped stuff.
The self, after you write enough Python, kinda goes away as an impediment, and personally I like how it immediately offers a visual distinction between what I will obviously want to persist across scopes, and what is a disposable element that I DO want to run out of scope and get collected as soon as a particular step is done.
Last but not least, some people will object orient anything they get their hands on, or won't be able to read it. I'm one of them :)
A:
1. I know of one use for this.
Though this can be achieved by other means, it ensures that both modules -- module_B and module_C -- use the same instance of App and do not instantiate separate objects.
In module_A.py
class App:
....
a = App()
In module_B.py
from module_A import a
In module_C.py
from module_A import a
Of course, you could create separate objects, but that is not the intention of the above modules.
What if you did the following in module_D.py
from module_A import App
a = App()
2. [Pedantic]
You could avoid using classes and decompose your solution using only modules and functions. But won't that look ugly in a large program? Wouldn't you want to use the object-oriented paradigm in your program? All languages provide some way to take advantage of OO. We all like some things and others turn us off. Explicit is better than implicit is usually the Python way, though this alone is not a good enough reason for its inclusion.
A:
I tend to use classes when I think it is likely that I will need multiple instances of something, or if I think inheriting from it will be useful. If I'm just looking for a way to group related functions and settings that affect their behavior, well, a module can work just as well for that, with less syntax. Sometimes I will use both approaches in the same module.
Also, although you can write decorators and context managers as classes, I personally tend to find them clearer when written as functions, so that's how I typically write them.
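For instance, the same context manager can be written either way; which reads better is largely taste (a neutral sketch, class and function names hypothetical):

```python
from contextlib import contextmanager

# Class-based context manager: explicit protocol methods.
class Managed(object):
    def __enter__(self):
        self.entered = True
        return self
    def __exit__(self, exc_type, exc, tb):
        self.entered = False
        return False  # do not swallow exceptions

# Function-based equivalent: the setup/teardown reads linearly.
@contextmanager
def managed():
    state = {'entered': True}
    try:
        yield state
    finally:
        state['entered'] = False
```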
Python is a multi-paradigm programming language. Use whatever approach you feel expresses your intent most clearly. You never need to use OO.
An approach I like to use is to think of how I'd want the various pieces of functionality provided to me if someone else was going to write a module to solve a problem like mine in a generic way. Basically, I try to design the interfaces between my modules first, and make each as simple and easy to use as possible. At this point, it starts to become clear to me what approach I should use for each bit of functionality.
But I'm just one guy, and programming is not my main job.
A:
Is there any advantage of using classes if we don't make use of inheritance, encapsulation etc?
Yes.
this practice an influence from other programming languages such as Java
No.
There any good reason why Python programs should be structured like this?
Yes.
However. From your question, you've clearly decided that "Such code seems to me less clean with all these 'self' arguments".
There's not a lot of point in explaining the advantages of a class declaration if you're already sure it's "less clean".
Consider this.
A script is never static.
A class-free design will have to be changed.
At some point, the changes will lead to a second class.
Bottom-line.
You can work without a class for brief moments. Until you make changes. Then you'll regret not having the word class and all those "unclean" self arguments.
You'll add them eventually.
Why wait?
| Is there any reason for using classes in Python if there is only one class in the program? | I've seen some people writing Python code by creating one class and then an object to call all the methods. Is there any advantage of using classes if we don't make use of inheritance, encapsulation etc? Such code seems to me less clean with all these 'self' arguments, which we could avoid. Is this practice an influence from other programming languages such as Java or is there any good reason why Python programs should be structured like this?
example code:
class App:
# all the methods go here
a = App()
| [
"One advantage, though not always applicable, is that it makes it easy to extend the program by subclassing the one class. For example I can subclass it and override the method that reads from, say, a csv file to reading an xml file and then instantiate the subclass or original class based on run-time information. ... | [
8,
5,
3,
2,
1
] | [] | [] | [
"class",
"oop",
"python"
] | stackoverflow_0003983520_class_oop_python.txt |
Q:
DJango Dev Server strange output ppcfinder.net/judge.php
I wonder if anyone has seen this. I am developing a web app and the dev server just output the following when I was doing some testing.
logging on
[21/Oct/2010 13:42:56] "POST /members/logon/ HTTP/1.1" 302 0
[21/Oct/2010 13:42:57] "GET / HTTP/1.1" 200 20572
[21/Oct/2010 13:42:59] "GET http://ppcfinder.net/judge.php HTTP/1.1" 404 1744
----------------------------------------
Exception happened during processing of request from ('221.195.73.68', 2884)
Traceback (most recent call last):
File "C:\Python26\lib\SocketServer.py", line 281, in _handle_request_noblock
self.process_request(request, client_address)
File "C:\Python26\lib\SocketServer.py", line 307, in process_request
self.finish_request(request, client_address)
File "C:\Python26\lib\SocketServer.py", line 320, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "C:\Python26\lib\site-packages\django\core\servers\basehttp.py", line 562, in __init__
BaseHTTPRequestHandler.__init__(self, *args, **kwargs)
File "C:\Python26\lib\SocketServer.py", line 615, in __init__
self.handle()
File "C:\Python26\lib\site-packages\django\core\servers\basehttp.py", line 602
, in handle
self.raw_requestline = self.rfile.readline()
File "C:\Python26\lib\socket.py", line 406, in readline
data = self._sock.recv(self._rbufsize)
error: [Errno 10054] An existing connection was forcibly closed by the remote ho
st
----------------------------------------
logging on
[21/Oct/2010 13:43:44] "POST /members/signup/ HTTP/1.1" 302 0
----------------------------------------
Exception happened during processing of request from ('221.195.73.68', 3227)
Traceback (most recent call last):
File "C:\Python26\lib\SocketServer.py", line 281, in _handle_request_noblock
self.process_request(request, client_address)
File "C:\Python26\lib\SocketServer.py", line 307, in process_request
self.finish_request(request, client_address)
File "C:\Python26\lib\SocketServer.py", line 320, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "C:\Python26\lib\site-packages\django\core\servers\basehttp.py", line 562
, in __init__
BaseHTTPRequestHandler.__init__(self, *args, **kwargs)
File "C:\Python26\lib\SocketServer.py", line 615, in __init__
self.handle()
File "C:\Python26\lib\site-packages\django\core\servers\basehttp.py", line 602
, in handle
self.raw_requestline = self.rfile.readline()
File "C:\Python26\lib\socket.py", line 406, in readline
data = self._sock.recv(self._rbufsize)
error: [Errno 10054] An existing connection was forcibly closed by the remote ho
st
----------------------------------------
checking for entry_details, oh yeh
member invitation
Everything until the ppcfinder.net/judge.php line is expected; I have no idea what this judge.php is doing or where it came from. My output starts again at
checking for entry_details, oh yeh
member invitation
I searched through all the Python / Django source in case I got something nasty in there, but found nothing. I searched through all my own source and it's not in there either.
What on earth is it and where did it come from? I saw it last week as well but couldn't find it. Last week judge.php was being accessed via an IP address rather than the ppcfinder.net URL. I checked out ppcfinder.net after turning off Javascript and it appears to be a rubbish search site. It's a bit scary in case something has got embedded somewhere in the Python source and might be stealing stuff, but I can't find it.
Has anyone seen this?
Rich
A:
Seems like a bot on another host hit yours, searching for known vulnerabilities to exploit.
| Django Dev Server strange output ppcfinder.net/judge.php | I wonder if anyone has seen this. I am developing a web app and the dev server just output the following when I was doing some testing.
logging on
[21/Oct/2010 13:42:56] "POST /members/logon/ HTTP/1.1" 302 0
[21/Oct/2010 13:42:57] "GET / HTTP/1.1" 200 20572
[21/Oct/2010 13:42:59] "GET http://ppcfinder.net/judge.php HTTP/1.1" 404 1744
----------------------------------------
Exception happened during processing of request from ('221.195.73.68', 2884)
Traceback (most recent call last):
File "C:\Python26\lib\SocketServer.py", line 281, in _handle_request_noblock
self.process_request(request, client_address)
File "C:\Python26\lib\SocketServer.py", line 307, in process_request
self.finish_request(request, client_address)
File "C:\Python26\lib\SocketServer.py", line 320, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "C:\Python26\lib\site-packages\django\core\servers\basehttp.py", line 562
, in __init__
BaseHTTPRequestHandler.__init__(self, *args, **kwargs)
File "C:\Python26\lib\SocketServer.py", line 615, in __init__
self.handle()
File "C:\Python26\lib\site-packages\django\core\servers\basehttp.py", line 602
, in handle
self.raw_requestline = self.rfile.readline()
File "C:\Python26\lib\socket.py", line 406, in readline
data = self._sock.recv(self._rbufsize)
error: [Errno 10054] An existing connection was forcibly closed by the remote ho
st
----------------------------------------
logging on
[21/Oct/2010 13:43:44] "POST /members/signup/ HTTP/1.1" 302 0
----------------------------------------
Exception happened during processing of request from ('221.195.73.68', 3227)
Traceback (most recent call last):
File "C:\Python26\lib\SocketServer.py", line 281, in _handle_request_noblock
self.process_request(request, client_address)
File "C:\Python26\lib\SocketServer.py", line 307, in process_request
self.finish_request(request, client_address)
File "C:\Python26\lib\SocketServer.py", line 320, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "C:\Python26\lib\site-packages\django\core\servers\basehttp.py", line 562
, in __init__
BaseHTTPRequestHandler.__init__(self, *args, **kwargs)
File "C:\Python26\lib\SocketServer.py", line 615, in __init__
self.handle()
File "C:\Python26\lib\site-packages\django\core\servers\basehttp.py", line 602
, in handle
self.raw_requestline = self.rfile.readline()
File "C:\Python26\lib\socket.py", line 406, in readline
data = self._sock.recv(self._rbufsize)
error: [Errno 10054] An existing connection was forcibly closed by the remote ho
st
----------------------------------------
checking for entry_details, oh yeh
member invitation
Everything until the ppcfinder.net/judge.php line is expected; I have no idea what this judge.php is doing or where it came from. My output starts again at
checking for entry_details, oh yeh
member invitation
I searched through all the Python / Django source in case I got something nasty in there, but found nothing. I searched through all my own source and it's not in there either.
What on earth is it and where did it come from? I saw it last week as well but couldn't find it. Last week judge.php was being accessed via an IP address rather than the ppcfinder.net URL. I checked out ppcfinder.net after turning off Javascript and it appears to be a rubbish search site. It's a bit scary in case something has got embedded somewhere in the Python source and might be stealing stuff, but I can't find it.
Has anyone seen this?
Rich
| [
"Seems like a bot on another host hit yours, searching for known vulnerabilities to exploit.\n"
] | [
2
] | [] | [] | [
"django",
"malware",
"python"
] | stackoverflow_0003984008_django_malware_python.txt |
Q:
Kindly review the python code to boost its performance
I'm doing an Information Retrieval task. I built a simple search engine. The InvertedIndex is a Python dictionary object which is serialized (pickled, in Python terminology) to a file. The size of this InvertedIndex file is just 6.5MB.
So, my code just unpickles it, searches it for the query, and ranks the matching documents according to TF-IDF score. Doesn't sound like anything huge, right?
It started running 30 minutes ago and is still running. The Private Bytes and Virtual Size usage of the pythonw.exe process running my 100-line Python script is 88MB and 168MB respectively.
When I tried it with a smaller index it was fast. Is it Python or my code? Why is it so slow?
stopwords = ['a' , 'a\'s' , 'able' , 'about' , 'above' , 'according' , 'accordingly' , 'across' , 'actually' , 'after' , 'afterwards' , 'again' , 'against' , 'ain\'t' , 'all' , 'allow' , 'allows' , 'almost' , 'alone' , 'along' , 'already' , 'also' , 'although' , 'always' , 'am' , 'among' , 'amongst' , 'an' , 'and' , 'another' , 'any' , 'anybody' , 'anyhow' , 'anyone' , 'anything' , 'anyway' , 'anyways' , 'anywhere' , 'apart' , 'appear' , 'appreciate' , 'appropriate' , 'are' , 'aren\'t' , 'around' , 'as' , 'aside' , 'ask' , 'asking' , 'associated' , 'at' , 'available' , 'away' , 'awfully' , 'b' , 'be' , 'became' , 'because' , 'become' , 'becomes' , 'becoming' , 'been' , 'before' , 'beforehand' , 'behind' , 'being' , 'believe' , 'below' , 'beside' , 'besides' , 'best' , 'better' , 'between' , 'beyond' , 'both' , 'brief' , 'but' , 'by' , 'c' , 'c\'mon' , 'c\'s' , 'came' , 'can' , 'can\'t' , 'cannot' , 'cant' , 'cause' , 'causes' , 'certain' , 'certainly' , 'changes' , 'clearly' , 'co' , 'com' , 'come' , 'comes' , 'concerning' , 'consequently' , 'consider' , 'considering' , 'contain' , 'containing' , 'contains' , 'corresponding' , 'could' , 'couldn\'t' , 'course' , 'currently' , 'd' , 'definitely' , 'described' , 'despite' , 'did' , 'didn\'t' , 'different' , 'do' , 'does' , 'doesn\'t' , 'doing' , 'don\'t' , 'done' , 'down' , 'downwards' , 'during' , 'e' , 'each' , 'edu' , 'eg' , 'eight' , 'either' , 'else' , 'elsewhere' , 'enough' , 'entirely' , 'especially' , 'et' , 'etc' , 'even' , 'ever' , 'every' , 'everybody' , 'everyone' , 'everything' , 'everywhere' , 'ex' , 'exactly' , 'example' , 'except' , 'f' , 'far' , 'few' , 'fifth' , 'first' , 'five' , 'followed' , 'following' , 'follows' , 'for' , 'former' , 'formerly' , 'forth' , 'four' , 'from' , 'further' , 'furthermore' , 'g' , 'get' , 'gets' , 'getting' , 'given' , 'gives' , 'go' , 'goes' , 'going' , 'gone' , 'got' , 'gotten' , 'greetings' , 'h' , 'had' , 'hadn\'t' , 'happens' , 'hardly' , 'has' , 'hasn\'t' , 'have' 
, 'haven\'t' , 'having' , 'he' , 'he\'s' , 'hello' , 'help' , 'hence' , 'her' , 'here' , 'here\'s' , 'hereafter' , 'hereby' , 'herein' , 'hereupon' , 'hers' , 'herself' , 'hi' , 'him' , 'himself' , 'his' , 'hither' , 'hopefully' , 'how' , 'howbeit' , 'however' , 'i' , 'i\'d' , 'i\'ll' , 'i\'m' , 'i\'ve' , 'ie' , 'if' , 'ignored' , 'immediate' , 'in' , 'inasmuch' , 'inc' , 'indeed' , 'indicate' , 'indicated' , 'indicates' , 'inner' , 'insofar' , 'instead' , 'into' , 'inward' , 'is' , 'isn\'t' , 'it' , 'it\'d' , 'it\'ll' , 'it\'s' , 'its' , 'itself' , 'j' , 'just' , 'k' , 'keep' , 'keeps' , 'kept' , 'know' , 'knows' , 'known' , 'l' , 'last' , 'lately' , 'later' , 'latter' , 'latterly' , 'least' , 'less' , 'lest' , 'let' , 'let\'s' , 'like' , 'liked' , 'likely' , 'little' , 'look' , 'looking' , 'looks' , 'ltd' , 'm' , 'mainly' , 'many' , 'may' , 'maybe' , 'me' , 'mean' , 'meanwhile' , 'merely' , 'might' , 'more' , 'moreover' , 'most' , 'mostly' , 'much' , 'must' , 'my' , 'myself' , 'n' , 'name' , 'namely' , 'nd' , 'near' , 'nearly' , 'necessary' , 'need' , 'needs' , 'neither' , 'never' , 'nevertheless' , 'new' , 'next' , 'nine' , 'no' , 'nobody' , 'non' , 'none' , 'noone' , 'nor' , 'normally' , 'not' , 'nothing' , 'novel' , 'now' , 'nowhere' , 'o' , 'obviously' , 'of' , 'off' , 'often' , 'oh' , 'ok' , 'okay' , 'old' , 'on' , 'once' , 'one' , 'ones' , 'only' , 'onto' , 'or' , 'other' , 'others' , 'otherwise' , 'ought' , 'our' , 'ours' , 'ourselves' , 'out' , 'outside' , 'over' , 'overall' , 'own' , 'p' , 'particular' , 'particularly' , 'per' , 'perhaps' , 'placed' , 'please' , 'plus' , 'possible' , 'presumably' , 'probably' , 'provides' , 'q' , 'que' , 'quite' , 'qv' , 'r' , 'rather' , 'rd' , 're' , 'really' , 'reasonably' , 'regarding' , 'regardless' , 'regards' , 'relatively' , 'respectively' , 'right' , 's' , 'said' , 'same' , 'saw' , 'say' , 'saying' , 'says' , 'second' , 'secondly' , 'see' , 'seeing' , 'seem' , 'seemed' , 'seeming' , 'seems' , 'seen' , 'self' , 
'selves' , 'sensible' , 'sent' , 'serious' , 'seriously' , 'seven' , 'several' , 'shall' , 'she' , 'should' , 'shouldn\'t' , 'since' , 'six' , 'so' , 'some' , 'somebody' , 'somehow' , 'someone' , 'something' , 'sometime' , 'sometimes' , 'somewhat' , 'somewhere' , 'soon' , 'sorry' , 'specified' , 'specify' , 'specifying' , 'still' , 'sub' , 'such' , 'sup' , 'sure' , 't' , 't\'s' , 'take' , 'taken' , 'tell' , 'tends' , 'th' , 'than' , 'thank' , 'thanks' , 'thanx' , 'that' , 'that\'s' , 'thats' , 'the' , 'their' , 'theirs' , 'them' , 'themselves' , 'then' , 'thence' , 'there' , 'there\'s' , 'thereafter' , 'thereby' , 'therefore' , 'therein' , 'theres' , 'thereupon' , 'these' , 'they' , 'they\'d' , 'they\'ll' , 'they\'re' , 'they\'ve' , 'think' , 'third' , 'this' , 'thorough' , 'thoroughly' , 'those' , 'though' , 'three' , 'through' , 'throughout' , 'thru' , 'thus' , 'to' , 'together' , 'too' , 'took' , 'toward' , 'towards' , 'tried' , 'tries' , 'truly' , 'try' , 'trying' , 'twice' , 'two' , 'u' , 'un' , 'under' , 'unfortunately' , 'unless' , 'unlikely' , 'until' , 'unto' , 'up' , 'upon' , 'us' , 'use' , 'used' , 'useful' , 'uses' , 'using' , 'usually' , 'uucp' , 'v' , 'value' , 'various' , 'very' , 'via' , 'viz' , 'vs' , 'w' , 'want' , 'wants' , 'was' , 'wasn\'t' , 'way' , 'we' , 'we\'d' , 'we\'ll' , 'we\'re' , 'we\'ve' , 'welcome' , 'well' , 'went' , 'were' , 'weren\'t' , 'what' , 'what\'s' , 'whatever' , 'when' , 'whence' , 'whenever' , 'where' , 'where\'s' , 'whereafter' , 'whereas' , 'whereby' , 'wherein' , 'whereupon' , 'wherever' , 'whether' , 'which' , 'while' , 'whither' , 'who' , 'who\'s' , 'whoever' , 'whole' , 'whom' , 'whose' , 'why' , 'will' , 'willing' , 'wish' , 'with' , 'within' , 'without' , 'won\'t' , 'wonder' , 'would' , 'would' , 'wouldn\'t' , 'x' , 'y' , 'yes' , 'yet' , 'you' , 'you\'d' , 'you\'ll' , 'you\'re' , 'you\'ve' , 'your' , 'yours' , 'yourself' , 'yourselves' , 'z' , 'zero']
import PorterStemmer
import math
import pickle
def TF(term,doc):
#Term Frequency: No. of times `term` occured in `doc`
global InvertedIndex
idx = InvertedIndex[term].index(doc)
count = 0
while (idx < len(InvertedIndex[term])) and InvertedIndex[term][idx] == doc:
count= count+1
idx = idx+1
return count
def DF(term):
#Document Frequency: No. of documents containing `term`
global InvertedIndex
return len(set(InvertedIndex[term]))
def avgTF(term, doc):
global docs
TFs = []
for term in docs[doc]:
TFs.append(TF(term,doc))
return sum(TFs)/len(TFs)
def maxTF(term, doc):
global docs
TFs = []
for term in docs[doc]:
TFs.append(TF(term,doc))
return max(TFs)
def getValues4Term(term, doc):
TermFrequency = {}
TermFrequency['natural'] = TF(term,doc)
TermFrequency['log'] = 1+math.log( TF(term,doc) )
TermFrequency['aug'] = 0.5+float(0.5*TF(term,doc)/maxTF(term,doc))
TermFrequency['bool'] = 1 if TF(term,doc)>0 else 0
TermFrequency['log_avg'] = float(1+math.log( TF(term,doc) ))/(1+math.log( avgTF(term,doc) ))
DocumentFrequency = {}
DocumentFrequency['no'] = 1
DocumentFrequency['idf'] = math.log( len(docs)/DF(term) )
DocumentFrequency['probIDF'] = max( [0, math.log( float(len(docs)-DF(term))/DF(term) )] )
return [TermFrequency, DocumentFrequency]
def Cosine(resultDocVector, qVector, doc):
#`doc` parameter is the document number corresponding to resultDocVector
global qterms,docs
# Defining Cosine similarity : cos(a) = A.B/|A||B|
dotProduct = 0
commonTerms_q_d = set(qterms).intersection(docs[doc]) #commonTerms in both query & document
for cmnTerm in commonTerms_q_d:
dotProduct = dotProduct + resultDocVector[docs[doc].index(cmnTerm)] * qVector[qterms.index(cmnTerm)]
resultSquares = []
for k in resultDocVector:
resultSquares.append(k*k)
qSquares = []
for k in qVector:
qSquares.append(k*k)
denominator = math.sqrt(sum(resultSquares)) * math.sqrt(sum(qSquares))
return dotProduct/denominator
def load():
#load index from a file
global InvertedIndex, docIDs, docs
PICKLE_InvertedIndex_FILE = open("InvertedIndex.db", 'rb')
InvertedIndex = pickle.load(PICKLE_InvertedIndex_FILE)
PICKLE_InvertedIndex_FILE.close()
PICKLE_docIDs_FILE = open("docIDs.db", 'rb')
docIDs = pickle.load(PICKLE_docIDs_FILE)
PICKLE_docIDs_FILE.close()
PICKLE_docs_FILE = open("docs.db", 'rb')
docs = pickle.load(PICKLE_docs_FILE)
PICKLE_docs_FILE.close()
########################
docs = []
docIDs = []
InvertedIndex = {}
load()
stemmer = PorterStemmer.PorterStemmer()
#<getting results for a query
query = 'Antarctica exploration'
qwords = query.strip().split()
qterms = []
qterms1 = []
for qword in qwords:
qword = qword.lower()
if qword in stopwords:
continue
qterm = stemmer.stem(qword,0,len(qword)-1)
qterms1.append(qterm)
qterms = list(set(qterms1))
#getting posting lists for each qterms & merging them
prev = set()
i = 0
for qterm in qterms:
if InvertedIndex.has_key(qterm):
if i == 0:
prev = set(InvertedIndex[qterm])
i = i+1
continue
prev = prev.intersection(set(InvertedIndex[qterm]))
results = list(prev)
#</getting results for a query
#We've got the results. Now lets rank them using Cosine similarity.
i = 0
docComponents = []
for doc in results:
docComponents.append([])
i = 0
for doc in results:
for term in docs[doc]:
vals = getValues4Term(term,doc)#[TermFrequency, DocumentFrequency]
docComponents[i].append(vals)
i = i+1
#Normalization = {}
# forming vectors for each document in the result
i = 0 #document iterator
j = 0 #term iterator
resultDocVectors = []#contains document vector for each result.
for doc in results:
resultDocVectors.append([])
for i in range(0,len(results)):
for j in range(0,len(docs[doc])):
tf = docComponents[i][j][0]['natural']#0:TermFrequency
idf = docComponents[i][j][1]['idf'] #1:DocumentFrequency
resultDocVectors[i].append(tf*idf)
#forming vector for query
qVector = []
qTF = []
qDF = []
for qterm in qterms:
count = 0
idx = qterms1.index(qterm)
while idx < len(qterms1) and qterms1[idx] == qterm:
count= count+1
idx = idx+1
qTF.append(count)
qVector = qTF
#compuing Cosine similarities of all resultDocVectors w.r.t qVector
i = 0
CosineVals = []
for resultDocVector in resultDocVectors:
doc = results[i]
CosineVals.append(Cosine(resultDocVector, qVector, doc))
i = i+1
#ranking as per Cosine Similarities
#this is not "perfect" sorting, as it may not give 100% correct results when multiple docs have the same cosine similarity.
CosineValsCopy = CosineVals
CosineVals.sort()
sortedCosineVals = CosineVals
CosineVals = CosineValsCopy
rankedResults = []
for cval in sortedCosineVals:
rankedResults.append(results[CosineVals.index(cval)])
rankedResults.reverse()
#<Evaluation of the system:>
#parsing qrels.txt & getting relevances
# qrels.txt contains columns of the form:
# qid iter docno rel
#2nd column `iter` can be ignored.
relevances = {}
fh = open("qrels.txt")
lines = fh.readlines()
for line in lines:
cols = line.strip().split()
if relevances.has_key(cols[0]):#queryID
relevances[cols[0]].append(cols[2])#docID
else:
relevances[cols[0]] = [cols[2]]
fh.close()
#precision = no. of relevant docs retrieved/total no. of docs retrieved
no_of_relevant_docs_retrieved = set(rankedResults).intersection( set(relevances[queryID]) )
Precision = no_of_relevant_docs_retrieved/len(rankedResults)
#recall = no. of relevant docs retrieved/ total no. of relevant docs
Recall = no_of_relevant_docs_retrieved/len(relevances[queryID])
A:
It's definitely your code, but since you choose to hide it from us it's impossible for us to help any further. All I can tell you based on the very scarce info you choose to supply is that unpickling a dict (in the right way) is much faster, and indexing into it (assuming that's what you mean by "searches it for query") is blazingly fast. It's from this data that I deduce that the cause of your slowdown must be something else you do, or do wrong, in your code.
Edit: now that you have posted you code I notice, just at a glance, that a lot of nontrivial code is running at module top level. Really horrible practice, and detrimental to performance: put all your nontrivial code into a function, and call that function -- that by itself will give you a tens-of-percent speedup, at zero cost of complication. I must have mentioned that crucial fact at least 20 times just in my Stack Overflow posts, not to mention "Python in a Nutshell" etc -- surely if you care about performance you cannot blithely ignore such easily available and widespread information?!
More easily-fixed wastes of runtime:
import pickle
use cPickle (if you're not on Python 2.6 or 2.7, but rather on 3.1, then there may be other causes of performance issues -- I don't know how finely tuned 3.1 is at this time compared with the awesome performance of 2.6 and 2.7).
All of your global statements are useless except the one in load (not a serious performance hit, but redundant and useless code should be eliminated on principle). You only need a global if you want to bind a module-global variable from within a function, and load is the only one where you're doing that.
More editing:
Now we get to more important stuff: the values in InvertedIndex appear to be lists of docs, so to know how many times a doc appears in one you have to loop throughout it. Why not make each of the values be instead a dict from doc to number of occurrences? No looping (and no looping where you now do the len(set(...)) of a value in InvertedIndex -- just the len will then be equivalent and you save the set(...) operation which implicitly has to loop to perform its job). This is a big-O optimization, not "just" the kind of speedup by 20% or so that things I've mentioned so far may account for -- i.e., this is more important stuff, as I said. Use the right data structures and algorithms and a lot of minor inefficiencies may become relatively unimportant; use the wrong ones, and there's no saving your code's performance as the input size grows, no matter how cleverly you micro-optimize the wrong data structures and algorithms;-).
And more: you repeat your computations a lot, "from scratch" each time -- e.g., look how many times you call TF(term, doc) for each given term and doc (and each call has the crucial inefficiency I just explained too). As the quickest way to fix this huge inefficiency, use memoization -- e.g., with the memoized decorator found here.
OK, it's getting late for me and I'd better head off to bed -- I hope some or all of the above suggestions were of some use to you!
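To illustrate the big-O point, here is a minimal sketch of the dict-of-counts layout described above (my own illustration, not Alex's code -- `build_index` and the sample data are hypothetical, but `TF` and `DF` mirror the functions in the question):

```python
from collections import defaultdict

def build_index(doc_terms):
    # doc_terms maps doc id -> list of terms in that doc.
    # The index maps term -> {doc: occurrence count}, so a term
    # frequency is an O(1) dict lookup and a document frequency
    # is just len() -- no scanning of posting lists, no set() copies.
    index = defaultdict(dict)
    for doc, terms in doc_terms.items():
        for term in terms:
            index[term][doc] = index[term].get(doc, 0) + 1
    return index

def TF(index, term, doc):
    return index[term].get(doc, 0)

def DF(index, term):
    return len(index[term])

index = build_index({1: ['ice', 'ice', 'snow'], 2: ['snow']})
```

With this layout, the while-loop in the question's `TF` and the `set(...)` copy in its `DF` both disappear.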
A:
Alex gave you good suggestions for algorithmic change. I'm just going to address writing fast python code. You should do both. If you were to just incorporate my changes, you would still (based on what Alex said) have a broken program, but I'm just not up for understanding your whole program right now and I am in the mood to micro-optimize. Even if you end up throwing a lot of these functions away, comparing a slow implementation with a fast implementation will help you write fast implementations of your new functions.
Take the following function:
def TF(term,doc):
#Term Frequency: No. of times `term` occured in `doc`
global InvertedIndex
idx = InvertedIndex[term].index(doc)
count = 0
while (idx < len(InvertedIndex[term])) and InvertedIndex[term][idx] == doc:
count= count+1
idx = idx+1
return count
rewrite it as
def TF(term, doc):
idx = InvertedIndex[term].index(doc)
return next(i + 1 for i, item in enumerate(InvertedIndex[term][idx:])
if item != doc)
# Above struck out because the count method does the same thing and there was a bug
# in the implementation anyways.
InvertedIndex[term].count(doc)
This creates a generator expression that generates the ordered set of indexes of documents that occur after the first index of doc and that are not equal to it. You can just use the next function to calculate the first element and this will be your count.
Some functions that you definitely want to look up in the docs.
zip
enumerate
some syntax that you want
generator expressions (just like list comprehensions but mo'bettah (unless you need a list ;))
list comprehensions
last but certainly not least, the most important (IMNAHAIPSBO) python module:
itertools
Here's another function
def maxTF(term, doc):
global docs
TFs = []
for term in docs[doc]:
TFs.append(TF(term,doc))
return max(TFs)
you can rewrite this using a generator expression:
def maxTF(term, doc):
return max(TF(term, doc) for term in docs[doc])
generator expressions will often run close to twice as fast as a for loop.
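That speed claim is easy to check on your own machine with `timeit` (an illustrative harness of my own, not from the answer; absolute numbers are machine-dependent, so none are asserted here):

```python
import timeit

setup = "data = list(range(1000))"

# the explicit loop-and-append version
loop_version = """
vals = []
for x in data:
    vals.append(x * x)
result = max(vals)
"""

# the generator-expression version, no intermediate list
genexp_version = "result = max(x * x for x in data)"

t_loop = timeit.timeit(loop_version, setup, number=200)
t_genexp = timeit.timeit(genexp_version, setup, number=200)
```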
Finally, here's your Cosine function:
def Cosine(resultDocVector, qVector, doc):
#`doc` parameter is the document number corresponding to resultDocVector
global qterms,docs
# Defining Cosine similarity : cos(a) = A.B/|A||B|
dotProduct = 0
commonTerms_q_d = set(qterms).intersection(docs[doc]) #commonTerms in both query & document
for cmnTerm in commonTerms_q_d:
dotProduct = dotProduct + resultDocVector[docs[doc].index(cmnTerm)] * qVector[qterms.index(cmnTerm)]
resultSquares = []
for k in resultDocVector:
resultSquares.append(k*k)
qSquares = []
for k in qVector:
qSquares.append(k*k)
let's rewrite this as:
def Cosine(resultDocVector, qVector, doc):
doc = docs[doc]
commonTerms_q_d = set(qterms).intersection(doc)
dotProduct = sum(resultDocVector[doc.index(cmnTerm)] *qVector[qterms.index(cmnTerm)]
for cmnTerm in commonTerms_q_d)
denominator = sum(k**2 for k in resultDocVector)
denominator *= sum(k**2 for k in qVector)
denominator = math.sqrt(denominator)
return dotProduct/denominator
Here, we've tossed out every for loop in sight. Code of the form
lst = []
for item in other_lst:
lst.append(somefunc(item))
is the slowest way to build a list. First off, for/while loops are slow to begin with, and appending to a list is slow. You have the worst of both worlds. A good attitude is to code like there's a tax on loops (performance-wise, there is). Only pay it if you just can't do something with map or a comprehension, or if it makes your code more readable and you know it's not a bottleneck. Comprehensions are very readable once you get used to them.
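For example, the squared-norm lists built in the question collapse to single expressions (an illustration with made-up vectors):

```python
resultDocVector = [3.0, 4.0]
qVector = [1.0, 2.0, 2.0]

# the slow pattern: explicit loop plus repeated .append()
resultSquares = []
for k in resultDocVector:
    resultSquares.append(k * k)

# the same result as a list comprehension...
resultSquares2 = [k * k for k in resultDocVector]

# ...or skip the intermediate list entirely when only the
# sum is needed, using a generator expression
qNormSquared = sum(k * k for k in qVector)
```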
A:
This is more micro-optimization, in the spirit of @aaronasterling, above. Still, I think these observations are worth considering.
Use appropriate datatypes
stopwords should be a set. You can't search a list repeatedly and expect it to be fast.
Use more sets. They are iterable, like lists, but when you have to search them, they are way faster than lists.
List comprehensions
resultSquares = [k*k for k in resultDocVector]
qSquares = [k*k for k in qVector]
TFs = [TF(term,doc) for term in docs[doc]]
Generators
Turn this:
for qword in qwords:
qword = qword.lower()
if qword in stopwords:
continue
qterm = stemmer.stem(qword,0,len(qword)-1)
qterms1.append(qterm)
qterms = list(set(qterms1))
into this:
qworditer = (qword.lower() for qword in qwords if qword not in stopwords)
qtermiter = (stemmer.stem(qword,0,len(qword)-1) for qword in qworditer)
qterms1 = set([qterm for qterm in qtermiter])
Use generators and reduce():
Turn this:
prev = set()
i = 0
for qterm in qterms:
if InvertedIndex.has_key(qterm):
if i == 0:
prev = set(InvertedIndex[qterm])
i = i+1
continue
prev = prev.intersection(set(InvertedIndex[qterm]))
results = list(prev)
Into this:
qtermiter = (set(InvertedIndex[qterm]) for qterm in qterms if qterm in InvertedIndex)
results = reduce(set.intersection, qtermiter)
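One caveat worth adding (mine, not the answer's): reduce raises TypeError on an empty sequence, so if none of the query terms is in the index you may want a guard, along these lines (toy posting lists for illustration):

```python
from functools import reduce  # reduce is a builtin on Python 2; the import makes this run on 3

InvertedIndex = {'ice': [1, 2, 3], 'snow': [2, 3]}   # toy posting lists
qterms = ['ice', 'snow', 'fire']                     # 'fire' is not indexed

postings = [set(InvertedIndex[q]) for q in qterms if q in InvertedIndex]
# guard against an empty postings list before reducing
results = sorted(reduce(set.intersection, postings)) if postings else []
```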
Use list comprehensions
Instead of this:
i = 0
docComponents = []
for doc in results:
docComponents.append([])
i = 0
for doc in results:
for term in docs[doc]:
vals = getValues4Term(term,doc)#[TermFrequency, DocumentFrequency]
docComponents[i].append(vals)
i = i+1
Write this:
docComponents = [getValues4Term(term,doc) for doc in results for term in docs[doc]]
This code makes no sense:
for doc in results:
resultDocVectors.append([])
for i in range(0,len(results)):
for j in range(0,len(docs[doc])):
tf = docComponents[i][j][0]['natural']#0:TermFrequency
idf = docComponents[i][j][1]['idf'] #1:DocumentFrequency
resultDocVectors[i].append(tf*idf)
The value len(docs[doc]) depends on doc, and the value of doc is whatever was last reached in the loop for doc in results.
Use collections.defaultdict
Instead of this:
relevances = {}
fh = open("qrels.txt")
lines = fh.readlines()
for line in lines:
cols = line.strip().split()
if relevances.has_key(cols[0]):#queryID
relevances[cols[0]].append(cols[2])#docID
else:
relevances[cols[0]] = [cols[2]]
Write this (assuming your file has only three fields per line):
from collections import defaultdict
relevances = defaultdict(list)
with open("qrels.txt") as fh:
lineiter = (line.strip().split() for line in fh)
for queryID, _, docID in lineiter:
relevances[queryID].append(docID)
As many other people have said, memoize your computations.
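A minimal dict-based memoizer in that spirit (a sketch of the general technique; the decorator linked in the other answer is more robust, e.g. around keyword and unhashable arguments):

```python
def memoized(fn):
    cache = {}
    def wrapper(*args):
        # compute each distinct argument tuple only once
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

calls = []

@memoized
def TF(term, doc):
    calls.append((term, doc))   # records how often the real work runs
    return 42                   # stand-in for the real linear-scan count

TF('ice', 1)
TF('ice', 1)
TF('ice', 1)
```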
2010-10-21: An update on the stopwords thing.
from datetime import datetime
stopwords = ['a' , 'a\'s' , 'able' , 'about' , 'above' , 'according' , 'accordingly' , 'across' , 'actually' , 'after' , 'afterwards' , 'again' , 'against' , 'ain\'t' , 'all' , 'allow' , 'allows' , 'almost' , 'alone' , 'along' , 'already' , 'also' , 'although' , 'always' , 'am' , 'among' , 'amongst' , 'an' , 'and' , 'another' , 'any' , 'anybody' , 'anyhow' , 'anyone' , 'anything' , 'anyway' , 'anyways' , 'anywhere' , 'apart' , 'appear' , 'appreciate' , 'appropriate' , 'are' , 'aren\'t' , 'around' , 'as' , 'aside' , 'ask' , 'asking' , 'associated' , 'at' , 'available' , 'away' , 'awfully' , 'b' , 'be' , 'became' , 'because' , 'become' , 'becomes' , 'becoming' , 'been' , 'before' , 'beforehand' , 'behind' , 'being' , 'believe' , 'below' , 'beside' , 'besides' , 'best' , 'better' , 'between' , 'beyond' , 'both' , 'brief' , 'but' , 'by' , 'c' , 'c\'mon' , 'c\'s' , 'came' , 'can' , 'can\'t' , 'cannot' , 'cant' , 'cause' , 'causes' , 'certain' , 'certainly' , 'changes' , 'clearly' , 'co' , 'com' , 'come' , 'comes' , 'concerning' , 'consequently' , 'consider' , 'considering' , 'contain' , 'containing' , 'contains' , 'corresponding' , 'could' , 'couldn\'t' , 'course' , 'currently' , 'd' , 'definitely' , 'described' , 'despite' , 'did' , 'didn\'t' , 'different' , 'do' , 'does' , 'doesn\'t' , 'doing' , 'don\'t' , 'done' , 'down' , 'downwards' , 'during' , 'e' , 'each' , 'edu' , 'eg' , 'eight' , 'either' , 'else' , 'elsewhere' , 'enough' , 'entirely' , 'especially' , 'et' , 'etc' , 'even' , 'ever' , 'every' , 'everybody' , 'everyone' , 'everything' , 'everywhere' , 'ex' , 'exactly' , 'example' , 'except' , 'f' , 'far' , 'few' , 'fifth' , 'first' , 'five' , 'followed' , 'following' , 'follows' , 'for' , 'former' , 'formerly' , 'forth' , 'four' , 'from' , 'further' , 'furthermore' , 'g' , 'get' , 'gets' , 'getting' , 'given' , 'gives' , 'go' , 'goes' , 'going' , 'gone' , 'got' , 'gotten' , 'greetings' , 'h' , 'had' , 'hadn\'t' , 'happens' , 'hardly' , 'has' , 'hasn\'t' , 'have' 
, 'haven\'t' , 'having' , 'he' , 'he\'s' , 'hello' , 'help' , 'hence' , 'her' , 'here' , 'here\'s' , 'hereafter' , 'hereby' , 'herein' , 'hereupon' , 'hers' , 'herself' , 'hi' , 'him' , 'himself' , 'his' , 'hither' , 'hopefully' , 'how' , 'howbeit' , 'however' , 'i' , 'i\'d' , 'i\'ll' , 'i\'m' , 'i\'ve' , 'ie' , 'if' , 'ignored' , 'immediate' , 'in' , 'inasmuch' , 'inc' , 'indeed' , 'indicate' , 'indicated' , 'indicates' , 'inner' , 'insofar' , 'instead' , 'into' , 'inward' , 'is' , 'isn\'t' , 'it' , 'it\'d' , 'it\'ll' , 'it\'s' , 'its' , 'itself' , 'j' , 'just' , 'k' , 'keep' , 'keeps' , 'kept' , 'know' , 'knows' , 'known' , 'l' , 'last' , 'lately' , 'later' , 'latter' , 'latterly' , 'least' , 'less' , 'lest' , 'let' , 'let\'s' , 'like' , 'liked' , 'likely' , 'little' , 'look' , 'looking' , 'looks' , 'ltd' , 'm' , 'mainly' , 'many' , 'may' , 'maybe' , 'me' , 'mean' , 'meanwhile' , 'merely' , 'might' , 'more' , 'moreover' , 'most' , 'mostly' , 'much' , 'must' , 'my' , 'myself' , 'n' , 'name' , 'namely' , 'nd' , 'near' , 'nearly' , 'necessary' , 'need' , 'needs' , 'neither' , 'never' , 'nevertheless' , 'new' , 'next' , 'nine' , 'no' , 'nobody' , 'non' , 'none' , 'noone' , 'nor' , 'normally' , 'not' , 'nothing' , 'novel' , 'now' , 'nowhere' , 'o' , 'obviously' , 'of' , 'off' , 'often' , 'oh' , 'ok' , 'okay' , 'old' , 'on' , 'once' , 'one' , 'ones' , 'only' , 'onto' , 'or' , 'other' , 'others' , 'otherwise' , 'ought' , 'our' , 'ours' , 'ourselves' , 'out' , 'outside' , 'over' , 'overall' , 'own' , 'p' , 'particular' , 'particularly' , 'per' , 'perhaps' , 'placed' , 'please' , 'plus' , 'possible' , 'presumably' , 'probably' , 'provides' , 'q' , 'que' , 'quite' , 'qv' , 'r' , 'rather' , 'rd' , 're' , 'really' , 'reasonably' , 'regarding' , 'regardless' , 'regards' , 'relatively' , 'respectively' , 'right' , 's' , 'said' , 'same' , 'saw' , 'say' , 'saying' , 'says' , 'second' , 'secondly' , 'see' , 'seeing' , 'seem' , 'seemed' , 'seeming' , 'seems' , 'seen' , 'self' , 
'selves' , 'sensible' , 'sent' , 'serious' , 'seriously' , 'seven' , 'several' , 'shall' , 'she' , 'should' , 'shouldn\'t' , 'since' , 'six' , 'so' , 'some' , 'somebody' , 'somehow' , 'someone' , 'something' , 'sometime' , 'sometimes' , 'somewhat' , 'somewhere' , 'soon' , 'sorry' , 'specified' , 'specify' , 'specifying' , 'still' , 'sub' , 'such' , 'sup' , 'sure' , 't' , 't\'s' , 'take' , 'taken' , 'tell' , 'tends' , 'th' , 'than' , 'thank' , 'thanks' , 'thanx' , 'that' , 'that\'s' , 'thats' , 'the' , 'their' , 'theirs' , 'them' , 'themselves' , 'then' , 'thence' , 'there' , 'there\'s' , 'thereafter' , 'thereby' , 'therefore' , 'therein' , 'theres' , 'thereupon' , 'these' , 'they' , 'they\'d' , 'they\'ll' , 'they\'re' , 'they\'ve' , 'think' , 'third' , 'this' , 'thorough' , 'thoroughly' , 'those' , 'though' , 'three' , 'through' , 'throughout' , 'thru' , 'thus' , 'to' , 'together' , 'too' , 'took' , 'toward' , 'towards' , 'tried' , 'tries' , 'truly' , 'try' , 'trying' , 'twice' , 'two' , 'u' , 'un' , 'under' , 'unfortunately' , 'unless' , 'unlikely' , 'until' , 'unto' , 'up' , 'upon' , 'us' , 'use' , 'used' , 'useful' , 'uses' , 'using' , 'usually' , 'uucp' , 'v' , 'value' , 'various' , 'very' , 'via' , 'viz' , 'vs' , 'w' , 'want' , 'wants' , 'was' , 'wasn\'t' , 'way' , 'we' , 'we\'d' , 'we\'ll' , 'we\'re' , 'we\'ve' , 'welcome' , 'well' , 'went' , 'were' , 'weren\'t' , 'what' , 'what\'s' , 'whatever' , 'when' , 'whence' , 'whenever' , 'where' , 'where\'s' , 'whereafter' , 'whereas' , 'whereby' , 'wherein' , 'whereupon' , 'wherever' , 'whether' , 'which' , 'while' , 'whither' , 'who' , 'who\'s' , 'whoever' , 'whole' , 'whom' , 'whose' , 'why' , 'will' , 'willing' , 'wish' , 'with' , 'within' , 'without' , 'won\'t' , 'wonder' , 'would' , 'would' , 'wouldn\'t' , 'x' , 'y' , 'yes' , 'yet' , 'you' , 'you\'d' , 'you\'ll' , 'you\'re' , 'you\'ve' , 'your' , 'yours' , 'yourself' , 'yourselves' , 'z' , 'zero']
from datetime import datetime

print len(stopwords)
dictfile = '/usr/share/dict/american-english-huge'
with open(dictfile) as f:
    words = [line.strip() for line in f]
print len(words)
s = datetime.now()
total = sum(1 for word in words if word in stopwords)
e = datetime.now()
elapsed = e - s
print elapsed, total
s = datetime.now()
stopwords_set = set(stopwords)
total = sum(1 for word in words if word in stopwords_set)
e = datetime.now()
elapsed = e - s
print elapsed, total
I get these results:
# Using list
>>> print elapsed, total
0:00:06.902529 542
# Using set
>>> print elapsed, total
0:00:00.050676 542
Same number of results, but one runs almost 140 times faster. Granted, you probably don't have so many words to compare against your stopwords, and 6 seconds is negligible against your 30+ minute runtime. It does underline, though, that using appropriate data structures speeds up your code.
A:
It's nice that you post your code in response to people asking you to, but unless they run it and profile it, the best they can do is guess. I can guess too, but even if guesses are "good" or "educated", they are not a good way to find performance problems.
I would rather refer you to a technique that will pinpoint the problem. That will work better than guessing or asking others to guess. Once you've found out for yourself where the problem is exactly, you can decide whether to use memoization or whatever to fix it.
Usually there's more than one problem. If you repeat the process of finding and removing performance problems, you'll approach a real optimum.
A:
Python does not cache function results automatically. Because of that, it is a bad idea to call a looping function like TF(term,doc) many times inside getValues4Term(). If you store the result in a variable instead, you will probably already get a huge speedup. That, combined with
for doc in results:
for term in docs[doc]:
vals = getValues4Term(term,doc)
might already be the biggest speed problem you have.
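To make the caching advice concrete: a small memoizing decorator keeps repeated `TF(term, doc)` calls cheap. This is only a sketch — `tf` below is a stand-in for the real `TF`, and the `calls` list exists just to show that the second call never re-runs the body:

```python
def memoize(fn):
    # cache results keyed on the positional arguments
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

calls = []

@memoize
def tf(term, doc):          # stand-in for the real TF(term, doc)
    calls.append((term, doc))
    return doc.count(term)

tf('apple', 'apple banana apple')
tf('apple', 'apple banana apple')   # second call is served from the cache
print(len(calls))  # -> 1
```

Decorating the real `TF`, `DF`, `avgTF`, and `maxTF` this way would avoid recomputing the same values five times per term inside `getValues4Term`.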
| Kindly review the python code to boost its performance | I'm doing an Information Retrieval task. I built a simple search engine. The InvertedIndex is a Python dictionary object which is serialized (pickled, in Python terminology) to a file. The size of this InvertedIndex file is just 6.5MB.
So my code just unpickles it, searches it for the query, and ranks the matching documents according to TF-IDF score. Doesn't sound like anything huge, right?
It started running 30 minutes ago and is still running. Private Bytes and Virtual Size usage of the pythonw.exe process running my 100-line Python script are 88MB and 168MB respectively.
When I tried it with a smaller index it was fast. Is it Python or my code? Why is it so slow?
stopwords = ['a' , 'a\'s' , 'able' , 'about' , 'above' , 'according' , 'accordingly' , 'across' , 'actually' , 'after' , 'afterwards' , 'again' , 'against' , 'ain\'t' , 'all' , 'allow' , 'allows' , 'almost' , 'alone' , 'along' , 'already' , 'also' , 'although' , 'always' , 'am' , 'among' , 'amongst' , 'an' , 'and' , 'another' , 'any' , 'anybody' , 'anyhow' , 'anyone' , 'anything' , 'anyway' , 'anyways' , 'anywhere' , 'apart' , 'appear' , 'appreciate' , 'appropriate' , 'are' , 'aren\'t' , 'around' , 'as' , 'aside' , 'ask' , 'asking' , 'associated' , 'at' , 'available' , 'away' , 'awfully' , 'b' , 'be' , 'became' , 'because' , 'become' , 'becomes' , 'becoming' , 'been' , 'before' , 'beforehand' , 'behind' , 'being' , 'believe' , 'below' , 'beside' , 'besides' , 'best' , 'better' , 'between' , 'beyond' , 'both' , 'brief' , 'but' , 'by' , 'c' , 'c\'mon' , 'c\'s' , 'came' , 'can' , 'can\'t' , 'cannot' , 'cant' , 'cause' , 'causes' , 'certain' , 'certainly' , 'changes' , 'clearly' , 'co' , 'com' , 'come' , 'comes' , 'concerning' , 'consequently' , 'consider' , 'considering' , 'contain' , 'containing' , 'contains' , 'corresponding' , 'could' , 'couldn\'t' , 'course' , 'currently' , 'd' , 'definitely' , 'described' , 'despite' , 'did' , 'didn\'t' , 'different' , 'do' , 'does' , 'doesn\'t' , 'doing' , 'don\'t' , 'done' , 'down' , 'downwards' , 'during' , 'e' , 'each' , 'edu' , 'eg' , 'eight' , 'either' , 'else' , 'elsewhere' , 'enough' , 'entirely' , 'especially' , 'et' , 'etc' , 'even' , 'ever' , 'every' , 'everybody' , 'everyone' , 'everything' , 'everywhere' , 'ex' , 'exactly' , 'example' , 'except' , 'f' , 'far' , 'few' , 'fifth' , 'first' , 'five' , 'followed' , 'following' , 'follows' , 'for' , 'former' , 'formerly' , 'forth' , 'four' , 'from' , 'further' , 'furthermore' , 'g' , 'get' , 'gets' , 'getting' , 'given' , 'gives' , 'go' , 'goes' , 'going' , 'gone' , 'got' , 'gotten' , 'greetings' , 'h' , 'had' , 'hadn\'t' , 'happens' , 'hardly' , 'has' , 'hasn\'t' , 'have' 
, 'haven\'t' , 'having' , 'he' , 'he\'s' , 'hello' , 'help' , 'hence' , 'her' , 'here' , 'here\'s' , 'hereafter' , 'hereby' , 'herein' , 'hereupon' , 'hers' , 'herself' , 'hi' , 'him' , 'himself' , 'his' , 'hither' , 'hopefully' , 'how' , 'howbeit' , 'however' , 'i' , 'i\'d' , 'i\'ll' , 'i\'m' , 'i\'ve' , 'ie' , 'if' , 'ignored' , 'immediate' , 'in' , 'inasmuch' , 'inc' , 'indeed' , 'indicate' , 'indicated' , 'indicates' , 'inner' , 'insofar' , 'instead' , 'into' , 'inward' , 'is' , 'isn\'t' , 'it' , 'it\'d' , 'it\'ll' , 'it\'s' , 'its' , 'itself' , 'j' , 'just' , 'k' , 'keep' , 'keeps' , 'kept' , 'know' , 'knows' , 'known' , 'l' , 'last' , 'lately' , 'later' , 'latter' , 'latterly' , 'least' , 'less' , 'lest' , 'let' , 'let\'s' , 'like' , 'liked' , 'likely' , 'little' , 'look' , 'looking' , 'looks' , 'ltd' , 'm' , 'mainly' , 'many' , 'may' , 'maybe' , 'me' , 'mean' , 'meanwhile' , 'merely' , 'might' , 'more' , 'moreover' , 'most' , 'mostly' , 'much' , 'must' , 'my' , 'myself' , 'n' , 'name' , 'namely' , 'nd' , 'near' , 'nearly' , 'necessary' , 'need' , 'needs' , 'neither' , 'never' , 'nevertheless' , 'new' , 'next' , 'nine' , 'no' , 'nobody' , 'non' , 'none' , 'noone' , 'nor' , 'normally' , 'not' , 'nothing' , 'novel' , 'now' , 'nowhere' , 'o' , 'obviously' , 'of' , 'off' , 'often' , 'oh' , 'ok' , 'okay' , 'old' , 'on' , 'once' , 'one' , 'ones' , 'only' , 'onto' , 'or' , 'other' , 'others' , 'otherwise' , 'ought' , 'our' , 'ours' , 'ourselves' , 'out' , 'outside' , 'over' , 'overall' , 'own' , 'p' , 'particular' , 'particularly' , 'per' , 'perhaps' , 'placed' , 'please' , 'plus' , 'possible' , 'presumably' , 'probably' , 'provides' , 'q' , 'que' , 'quite' , 'qv' , 'r' , 'rather' , 'rd' , 're' , 'really' , 'reasonably' , 'regarding' , 'regardless' , 'regards' , 'relatively' , 'respectively' , 'right' , 's' , 'said' , 'same' , 'saw' , 'say' , 'saying' , 'says' , 'second' , 'secondly' , 'see' , 'seeing' , 'seem' , 'seemed' , 'seeming' , 'seems' , 'seen' , 'self' , 
'selves' , 'sensible' , 'sent' , 'serious' , 'seriously' , 'seven' , 'several' , 'shall' , 'she' , 'should' , 'shouldn\'t' , 'since' , 'six' , 'so' , 'some' , 'somebody' , 'somehow' , 'someone' , 'something' , 'sometime' , 'sometimes' , 'somewhat' , 'somewhere' , 'soon' , 'sorry' , 'specified' , 'specify' , 'specifying' , 'still' , 'sub' , 'such' , 'sup' , 'sure' , 't' , 't\'s' , 'take' , 'taken' , 'tell' , 'tends' , 'th' , 'than' , 'thank' , 'thanks' , 'thanx' , 'that' , 'that\'s' , 'thats' , 'the' , 'their' , 'theirs' , 'them' , 'themselves' , 'then' , 'thence' , 'there' , 'there\'s' , 'thereafter' , 'thereby' , 'therefore' , 'therein' , 'theres' , 'thereupon' , 'these' , 'they' , 'they\'d' , 'they\'ll' , 'they\'re' , 'they\'ve' , 'think' , 'third' , 'this' , 'thorough' , 'thoroughly' , 'those' , 'though' , 'three' , 'through' , 'throughout' , 'thru' , 'thus' , 'to' , 'together' , 'too' , 'took' , 'toward' , 'towards' , 'tried' , 'tries' , 'truly' , 'try' , 'trying' , 'twice' , 'two' , 'u' , 'un' , 'under' , 'unfortunately' , 'unless' , 'unlikely' , 'until' , 'unto' , 'up' , 'upon' , 'us' , 'use' , 'used' , 'useful' , 'uses' , 'using' , 'usually' , 'uucp' , 'v' , 'value' , 'various' , 'very' , 'via' , 'viz' , 'vs' , 'w' , 'want' , 'wants' , 'was' , 'wasn\'t' , 'way' , 'we' , 'we\'d' , 'we\'ll' , 'we\'re' , 'we\'ve' , 'welcome' , 'well' , 'went' , 'were' , 'weren\'t' , 'what' , 'what\'s' , 'whatever' , 'when' , 'whence' , 'whenever' , 'where' , 'where\'s' , 'whereafter' , 'whereas' , 'whereby' , 'wherein' , 'whereupon' , 'wherever' , 'whether' , 'which' , 'while' , 'whither' , 'who' , 'who\'s' , 'whoever' , 'whole' , 'whom' , 'whose' , 'why' , 'will' , 'willing' , 'wish' , 'with' , 'within' , 'without' , 'won\'t' , 'wonder' , 'would' , 'would' , 'wouldn\'t' , 'x' , 'y' , 'yes' , 'yet' , 'you' , 'you\'d' , 'you\'ll' , 'you\'re' , 'you\'ve' , 'your' , 'yours' , 'yourself' , 'yourselves' , 'z' , 'zero']
import PorterStemmer
import math
import pickle
def TF(term,doc):
    #Term Frequency: No. of times `term` occured in `doc`
    global InvertedIndex
    idx = InvertedIndex[term].index(doc)
    count = 0
    while (idx < len(InvertedIndex[term])) and InvertedIndex[term][idx] == doc:
        count = count+1
        idx = idx+1
    return count
def DF(term):
    #Document Frequency: No. of documents containing `term`
    global InvertedIndex
    return len(set(InvertedIndex[term]))
def avgTF(term, doc):
    global docs
    TFs = []
    for term in docs[doc]:
        TFs.append(TF(term,doc))
    return sum(TFs)/len(TFs)
def maxTF(term, doc):
    global docs
    TFs = []
    for term in docs[doc]:
        TFs.append(TF(term,doc))
    return max(TFs)
def getValues4Term(term, doc):
    TermFrequency = {}
    TermFrequency['natural'] = TF(term,doc)
    TermFrequency['log'] = 1+math.log( TF(term,doc) )
    TermFrequency['aug'] = 0.5+float(0.5*TF(term,doc)/maxTF(term,doc))
    TermFrequency['bool'] = 1 if TF(term,doc)>0 else 0
    TermFrequency['log_avg'] = float(1+math.log( TF(term,doc) ))/(1+math.log( avgTF(term,doc) ))
    DocumentFrequency = {}
    DocumentFrequency['no'] = 1
    DocumentFrequency['idf'] = math.log( len(docs)/DF(term) )
    DocumentFrequency['probIDF'] = max( [0, math.log( float(len(docs)-DF(term))/DF(term) )] )
    return [TermFrequency, DocumentFrequency]
def Cosine(resultDocVector, qVector, doc):
    #`doc` parameter is the document number corresponding to resultDocVector
    global qterms,docs
    # Defining Cosine similarity : cos(a) = A.B/|A||B|
    dotProduct = 0
    commonTerms_q_d = set(qterms).intersection(docs[doc]) #commonTerms in both query & document
    for cmnTerm in commonTerms_q_d:
        dotProduct = dotProduct + resultDocVector[docs[doc].index(cmnTerm)] * qVector[qterms.index(cmnTerm)]
    resultSquares = []
    for k in resultDocVector:
        resultSquares.append(k*k)
    qSquares = []
    for k in qVector:
        qSquares.append(k*k)
    denominator = math.sqrt(sum(resultSquares)) * math.sqrt(sum(qSquares))
    return dotProduct/denominator
def load():
    #load index from a file
    global InvertedIndex, docIDs, docs
    PICKLE_InvertedIndex_FILE = open("InvertedIndex.db", 'rb')
    InvertedIndex = pickle.load(PICKLE_InvertedIndex_FILE)
    PICKLE_InvertedIndex_FILE.close()
    PICKLE_docIDs_FILE = open("docIDs.db", 'rb')
    docIDs = pickle.load(PICKLE_docIDs_FILE)
    PICKLE_docIDs_FILE.close()
    PICKLE_docs_FILE = open("docs.db", 'rb')
    docs = pickle.load(PICKLE_docs_FILE)
    PICKLE_docs_FILE.close()
########################
docs = []
docIDs = []
InvertedIndex = {}
load()
stemmer = PorterStemmer.PorterStemmer()
#<getting results for a query
query = 'Antarctica exploration'
qwords = query.strip().split()
qterms = []
qterms1 = []
for qword in qwords:
    qword = qword.lower()
    if qword in stopwords:
        continue
    qterm = stemmer.stem(qword,0,len(qword)-1)
    qterms1.append(qterm)
qterms = list(set(qterms1))
#getting posting lists for each qterms & merging them
prev = set()
i = 0
for qterm in qterms:
    if InvertedIndex.has_key(qterm):
        if i == 0:
            prev = set(InvertedIndex[qterm])
            i = i+1
            continue
        prev = prev.intersection(set(InvertedIndex[qterm]))
results = list(prev)
#</getting results for a query
#We've got the results. Now lets rank them using Cosine similarity.
i = 0
docComponents = []
for doc in results:
    docComponents.append([])
i = 0
for doc in results:
    for term in docs[doc]:
        vals = getValues4Term(term,doc)#[TermFrequency, DocumentFrequency]
        docComponents[i].append(vals)
    i = i+1
#Normalization = {}
# forming vectors for each document in the result
i = 0 #document iterator
j = 0 #term iterator
resultDocVectors = []#contains document vector for each result.
for doc in results:
    resultDocVectors.append([])
for i in range(0,len(results)):
    for j in range(0,len(docs[doc])):
        tf = docComponents[i][j][0]['natural']#0:TermFrequency
        idf = docComponents[i][j][1]['idf'] #1:DocumentFrequency
        resultDocVectors[i].append(tf*idf)
#forming vector for query
qVector = []
qTF = []
qDF = []
for qterm in qterms:
    count = 0
    idx = qterms1.index(qterm)
    while idx < len(qterms1) and qterms1[idx] == qterm:
        count = count+1
        idx = idx+1
    qTF.append(count)
qVector = qTF
#compuing Cosine similarities of all resultDocVectors w.r.t qVector
i = 0
CosineVals = []
for resultDocVector in resultDocVectors:
    doc = results[i]
    CosineVals.append(Cosine(resultDocVector, qVector, doc))
    i = i+1
#ranking as per Cosine Similarities
#this is not "perfect" sorting. As it may not give 100% correct results when it multiple docs have same cosine similarities.
CosineValsCopy = CosineVals
CosineVals.sort()
sortedCosineVals = CosineVals
CosineVals = CosineValsCopy
rankedResults = []
for cval in sortedCosineVals:
    rankedResults.append(results[CosineVals.index(cval)])
rankedResults.reverse()
#<Evaluation of the system:>
#parsing qrels.txt & getting relevances
# qrels.txt contains columns of the form:
# qid iter docno rel
#2nd column `iter` can be ignored.
relevances = {}
fh = open("qrels.txt")
lines = fh.readlines()
for line in lines:
    cols = line.strip().split()
    if relevances.has_key(cols[0]):#queryID
        relevances[cols[0]].append(cols[2])#docID
    else:
        relevances[cols[0]] = [cols[2]]
fh.close()
#precision = no. of relevant docs retrieved/total no. of docs retrieved
no_of_relevant_docs_retrieved = set(rankedResults).intersection( set(relevances[queryID]) )
Precision = no_of_relevant_docs_retrieved/len(rankedResults)
#recall = no. of relevant docs retrieved/ total no. of relevant docs
Recall = no_of_relevant_docs_retrieved/len(relevances[queryID])
| [
"It's definitely your code, but since you choose to hide it from us it's impossible for us to help any further. All I can tell you based on the very scarce info you choose to supply is that unpickling a dict (in the right way) is much faster, and indexing into it (assuming that's what you mean by \"searches it for... | [
18,
5,
4,
2,
1
] | [] | [] | [
"information_retrieval",
"performance",
"python"
] | stackoverflow_0003801072_information_retrieval_performance_python.txt |
Q:
Java framework for social network
Is there a Java analogue to Pinax/Django? (Perhaps an extension to Jboss Seam and/or functionality already built into Seam?)
Please analyse and compare Pinax/Django, Seam, and any other good Java/Python frameworks in the following criteria (ranked in order of importance):
Security (sensitive financial information)
Ability to interact with GWT/JSON-RPC (and possibly Pyjamas, though I'm leaning toward GWT due to the availability of visual design applications)
Scalability
Built-in functionality / social network backend logic (e.g. user management, "tweets", etc.)
Simplicity in setting up and working with
Synergy with Apache, PostgreSQL, GWT, and mobile applications (iPhone and Android)
If possible, a direct recommendation would be appreciated.
Thanks!
A:
You might want to try Apache Shindig:
http://incubator.apache.org/projects/shindig.html
And if you want a YouTube demonstration, try
http://www.youtube.com/watch?v=ZcWszaReqXI
taken from this thread
| Java framework for social network | Is there a Java analogue to Pinax/Django? (Perhaps an extension to Jboss Seam and/or functionality already built into Seam?)
Please analyse and compare Pinax/Django, Seam, and any other good Java/Python frameworks in the following criteria (ranked in order of importance):
Security (sensitive financial information)
Ability to interact with GWT/JSON-RPC (and possibly Pyjamas, though I'm leaning toward GWT due to the availability of visual design applications)
Scalability
Built-in functionality / social network backend logic (e.g. user management, "tweets", etc.)
Simplicity in setting up and working with
Synergy with Apache, PostgreSQL, GWT, and mobile applications (iPhone and Android)
If possible, a direct recommendation would be appreciated.
Thanks!
| [
"you might want to try Apache Shinding\nhttp://incubator.apache.org/projects/shindig.html\nAnd if you want a youtube demostration try\nhttp://www.youtube.com/watch?v=ZcWszaReqXI\ntaken from this thread\n"
] | [
2
] | [] | [] | [
"django",
"java",
"pinax",
"python",
"seam"
] | stackoverflow_0003984166_django_java_pinax_python_seam.txt |
Q:
How can I better structure this code?
I have an lxml.objectify data structure I get from a RESTful web service. I need to change a setting if it exists and create it if it doesn't. Right now I have something along the lines of the following, but I feel like it's ugly. The structure I'm looking in has a list of subelements which all have the same structure, so I can't just look for a specific tag unfortunately.
thing_structure = lxml.objectify(get_from_REST_service())
found_thing = False
if thing_structure.find('settings') is not None:
    for i, foo in enumerate(thing_structure.settings):
        if foo.is_what_I_want:
            modify(thing_structure.settings[i])
            found_thing = True
if not found_thing:
    new = lxml.etree.SubElement(thing_structure, 'setting')
    modify(new)
send_to_REST_service(thing_structure)
A:
Overall, the structure isn't too bad (assuming you need to call modify on 1+ items in the settings -- if "just one", i.e., if the is_what_I_want flag is going to be set for one setting at most, that's of course different, as you could and should use a break from the for loop -- but that's not the impression of your intentions that I get from your Q, please clarify if I've mistead!). There's one redundancy:
for i, foo in enumerate(thing_structure.settings):
    if foo.is_what_I_want:
        modify(thing_structure.settings[i])
        found_thing = True
Having i and using it to get again the same foo is no use here, so you can simplify to:
for foo in thing_structure.settings:
    if foo.is_what_I_want:
        modify(foo)
        found_thing = True
You'd only need the index if you were to rebind the item, i.e., perform an assignment such as thing_structure.settings = whatever. (BTW, a name other than foo wouldn't hurt;-).
A:
I would write this:
thing_structure = lxml.objectify(get_from_REST_service())
if thing_structure.find('settings') is not None:
    foos = ([foo for foo in thing_structure.settings if foo.is_what_I_want]
            or [lxml.etree.SubElement(thing_structure, 'setting')])
    for foo in foos:
        modify(foo)
send_to_REST_service(thing_structure)
I don't care for is not None and eliminate it where I can. If it is possible here, I would write:
if thing_structure.find('settings'):
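If at most one setting can match, the find-or-create pattern both answers circle around can also be written with next() and a default. A sketch, using plain dicts as stand-ins for the lxml setting elements in the question:

```python
# hypothetical stand-ins for the question's setting elements
settings = [{'name': 'a', 'wanted': False}, {'name': 'b', 'wanted': True}]

# next() returns the first match, or the default (None) if nothing matches
target = next((s for s in settings if s['wanted']), None)
if target is None:
    # nothing matched: create the element instead
    target = {'name': 'new', 'wanted': True}
    settings.append(target)
target['modified'] = True

print(target['name'])  # -> b
```

With real lxml objects, the generator expression would iterate `thing_structure.settings` and the `None` branch would call `lxml.etree.SubElement` as in the answers above.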
| How can I better structure this code? | I have an lxml.objectify data structure I get from a RESTful web service. I need to change a setting if it exists and create it if it doesn't. Right now I have something along the lines of the following, but I feel like it's ugly. The structure I'm looking in has a list of subelements which all have the same structure, so I can't just look for a specific tag unfortunately.
thing_structure = lxml.objectify(get_from_REST_service())
found_thing = False
if thing_structure.find('settings') is not None:
    for i, foo in enumerate(thing_structure.settings):
        if foo.is_what_I_want:
            modify(thing_structure.settings[i])
            found_thing = True
if not found_thing:
    new = lxml.etree.SubElement(thing_structure, 'setting')
    modify(new)
send_to_REST_service(thing_structure)
| [
"Overall, the structure isn't too bad (assuming you need to call modify on 1+ items in the settings -- if \"just one\", i.e., if the is_what_I_want flag is going to be set for one setting at most, that's of course different, as you could and should use a break from the for loop -- but that's not the impression of y... | [
2,
0
] | [] | [] | [
"code_cleanup",
"python"
] | stackoverflow_0003824179_code_cleanup_python.txt |
Q:
using python to encapsulate part of a string after 3 commas
I am trying to create a python script that adds quotations around part of a string, after 3 commas
So if the input data looks like this:
1234,1,1/1/2010,This is a test. One, two, three.
I want python to convert the string to:
1234,1,1/1/2010,"This is a test. One, two, three."
The quotes will always need to be added after 3 commas
I am using Python 3.1.2 and have the following so far:
i_file=open("input.csv","r")
o_file=open("output.csv","w")
for line in i_file:
    tokens=line.split(",")
    count=0
    new_line=""
    for element in tokens:
        if count = "3":
            new_line = new_line + '"' + element + '"'
            break
        else:
            new_line = new_line + element + ","
        count=count+1
    o_file.write(new_line + "\n")
    print(line, " -> ", new_line)
i_file.close()
o_file.close()
The script closes immediately when I try to run it and produces no output
Can you see what's wrong?
Thanks
A:
Having addressed the two issues mentioned in my comment above, I've just tested the code below against your test input (edit: it ALMOST works; see the very short version further down for a fully tested and working one).
i_file=open("input.csv","r")
o_file=open("output.csv","w")

for line in i_file:
    tokens=line.split(",")
    count=0
    new_line=""
    for element in tokens:
        if count == 3:
            new_line = new_line + '"' + element + '"'
            break
        else:
            new_line = new_line + element + ","
        count=count+1
    o_file.write(new_line + "\n")
    print(line, " -> ", new_line)

i_file.close()
o_file.close()
Side note:
A relatively new feature in Python is the with statement. Below is an example of how you might take advantage of that more-robust method of coding (note that you don't need to add the close() calls at the end of processing):
with open("input.csv","r") as i_file, open("output.csv","w") as o_file:
    for line in i_file:
        tokens = line.split(",", 3)
        if len(tokens) > 3:
            o_file.write(','.join(tokens[0:3]))
            o_file.write(',"{0}"\n'.format(tokens[-1].rstrip('\n')))
A:
Shorter but untested:
i_file=open("input.csv","r")
o_file=open("output.csv","w")
comma = ','
for line in i_file:
    tokens=line.split(",")
    new_line = comma.join(tokens[:3]+['"'+comma.join(tokens[3:])+'"'])
    o_file.write(new_line+'\n')
    print(line, " -> ", new_line)
i_file.close()
o_file.close()
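The maxsplit argument of str.split makes the per-line transformation even simpler, since only the first three commas need to act as separators. A tested sketch (the helper name is made up for illustration):

```python
def quote_after_three_commas(line):
    # split on the first 3 commas only; everything after them stays together
    parts = line.rstrip('\n').split(',', 3)
    if len(parts) < 4:
        return line.rstrip('\n')   # fewer than 3 commas: leave the line alone
    return '{0},{1},{2},"{3}"'.format(*parts)

line = '1234,1,1/1/2010,This is a test. One, two, three.'
print(quote_after_three_commas(line))
# -> 1234,1,1/1/2010,"This is a test. One, two, three."
```

The file-handling loop from the answers above would then just write `quote_after_three_commas(line) + '\n'` for each input line.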
A:
Perhaps you should consider using a regular expression to do this?
Something like
import re
t = "1234,1,1/1/2010,This is a test. One, two, three."
first,rest = re.search(r'([^,]+,[^,]+,[^,]+,)(.*)',t).groups()
op = '%s"%s"'%(first,rest)
print op
1234,1,1/1/2010,"This is a test. One, two, three."
Does this satisfy your requirements?
A:
>>> import re
>>> s
'1234,1,1/1/2010,This is a test. One, two, three.'
>>> re.sub("(.[^,]*,.[^,]*,.[^,]*,)(.*)" , '\\1\"\\2"' , s)
'1234,1,1/1/2010,"This is a test. One, two, three."'
import re
o=open("output.csv","w")
for line in open("input.csv"):
    line=re.sub("(.[^,]*,.[^,]*,.[^,]*,)(.*)" , '\\1\"\\2"' , line)
    o.write(line)
o.close()
| using python to encapsulate part of a string after 3 commas | I am trying to create a python script that adds quotations around part of a string, after 3 commas
So if the input data looks like this:
1234,1,1/1/2010,This is a test. One, two, three.
I want python to convert the string to:
1234,1,1/1/2010,"This is a test. One, two, three."
The quotes will always need to be added after 3 commas
I am using Python 3.1.2 and have the following so far:
i_file=open("input.csv","r")
o_file=open("output.csv","w")
for line in i_file:
    tokens=line.split(",")
    count=0
    new_line=""
    for element in tokens:
        if count = "3":
            new_line = new_line + '"' + element + '"'
            break
        else:
            new_line = new_line + element + ","
        count=count+1
    o_file.write(new_line + "\n")
    print(line, " -> ", new_line)
i_file.close()
o_file.close()
The script closes immediately when I try to run it and produces no output
Can you see what's wrong?
Thanks
| [
"Having addressed the two issues mentioned in my comment above I've just tested that the code below (edit: ALMOST works; see very short code sample below for a fully tested and working version) for your test input.\ni_file=open(\"input.csv\",\"r\")\no_file=open(\"output.csv\",\"w\")\n\nfor line in i_file:\n toke... | [
2,
2,
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0003984200_python.txt |
Q:
C# bindings for MEEP (Photonic Simulation Package)
Does anyone know of a way to call MIT's Meep simulation package from C# (probably Mono, god help me).
We're stuck with the #$@%#$^ CTL front-end, which is a productivity killer. Some other apps that we're integrating into our sim pipeline are in C# (.NET). I've seen a Python interface to Meep (light years ahead of CTL), but I'd like to keep the code we're developing as homogeneous as possible.
And, no, writing the rest of the tools in Python isn't an option. Why? Because we hates it. Stupid Bagginses. We hates it forever!
(In reality, the various app targets don't lend themselves to a Python implementation, and the talent pool I have available is far more productive with C#.)
Or, in a more SO-friendly question form:
Is there a convenient/possible way to link GNU C++ libraries into C# on Windows or Mono on Linux?
A:
The straightforward and portable solution is to write a C++ wrapper for libmeep that exposes a C ABI (via extern "C" { ... }), then write a C# wrapper around this API using P/Invoke. This would be roughly equivalent to the Python Meep wrapper, AFAICT.
Of course, mapping C++ classes to C# classes via a flat C API is nontrivial - you're going to have to keep IntPtr handles for the C++ classes in your C# classes, properly implement the Dispose pattern, using GCHandles or a dictionary of IntPtrs to allow referential integrity when resurfacing C++ objects (if needed), etc. Subclassing C++ objects in C# and being able to overriding virtual methods gets really quite complicated.
There is a tool called SWIG that can do this automatically but the results will not be anywhere near as good as a hand-written wrapper.
If you restrict yourself to Windows/.NET, Microsoft has a superset of C++ called C++/CLI, which would enable you to write a wrapper in C++ that exports a .NET API directly.
| C# bindings for MEEP (Photonic Simulation Package) | Does anyone know of a way to call MIT's Meep simulation package from C# (probably Mono, god help me).
We're stuck with the #$@%#$^ CTL front-end, which is a productivity killer. Some other apps that we're integrating into our sim pipeline are in C# (.NET). I've seen a Python interface to Meep (light years ahead of CTL), but I'd like to keep the code we're developing as homogeneous as possible.
And, no, writing the rest of the tools in Python isn't an option. Why? Because we hates it. Stupid Bagginses. We hates it forever!
(In reality, the various app targets don't lend themselves to a Python implementation, and the talent pool I have available is far more productive with C#.)
Or, in a more SO-friendly question form:
Is there a convenient/possible way to link GNU C++ libraries into C# on Windows or Mono on Linux?
| [
"The straightforward and portable solution is to write a C++ wrapper for libmeep that exposes a C ABI (via extern \"C\" { ... }), then write a C# wrapper around this API using P/Invoke. This would be roughly equivalent to the Python Meep wrapper, AFAICT.\nOf course, mapping C++ classes to C# classes via a flat C AP... | [
0
] | [] | [] | [
"c#",
"c++",
"meep",
"mono",
"python"
] | stackoverflow_0003982717_c#_c++_meep_mono_python.txt |
Q:
How do I send single character ASCII data to a serial port with python
I've looked at pyserial, but I can't seem to figure out how to do it. I only need to send one character at a time. Please help?
A:
Using pySerial:
Python 2.x:
import serial
byte = 42
out = serial.Serial("/dev/ttyS0") # "COM1" on Windows
out.write(chr(byte))
Python 3.x:
import serial
byte = 42
out = serial.Serial("/dev/ttyS0") # "COM1" on Windows
out.write(bytes([byte]))  # note: bytes(byte) would write 42 zero bytes
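One Python 3 pitfall worth flagging here: the bytes constructor treats a bare int as a length, not a value, so `bytes(42)` and `bytes([42])` are very different things. A quick illustration:

```python
byte = 42

# bytes([byte]) packs the value itself as a single byte;
# bytes(byte) means "42 zero-filled bytes", which is almost never what you want
single = bytes([byte])
zeros = bytes(byte)

print(single)      # -> b'*'   (42 is the ASCII code for '*')
print(len(zeros))  # -> 42
```

So on Python 3, write `out.write(bytes([byte]))` to send a single byte over the serial port.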
A:
Google says:
http://pyserial.sourceforge.net/
http://balder.prohosting.com/ibarona/en/python/uspp/uspp_en.html
If you're using USPP, the documentation explains how to write to the serial port.
| How do I send single character ASCII data to a serial port with python | I'v looked at pyserial but I can't seem to figure out how to do it. I only need to send one at a time? Please help?
| [
"Using pySerial:\nPython 2.x:\nimport serial\nbyte = 42\nout = serial.Serial(\"/dev/ttyS0\") # \"COM1\" on Windows\nout.write(chr(byte))\n\nPython 3.x:\nimport serial\nbyte = 42\nout = serial.Serial(\"/dev/ttyS0\") # \"COM1\" on Windows\nout.write(bytes(byte))\n\n",
"Google says:\n\nhttp://pyserial.sourceforge.... | [
7,
1
] | [] | [] | [
"arduino",
"python"
] | stackoverflow_0003984602_arduino_python.txt |
Q:
Python 2.7, sqlite3, ValueError: could not convert BLOB to buffer
I am trying to save a BLOB created from an integer array (that is, a packed array of integers) in an SQLite DB. The script shown below gives the following traceback. As far as I can see from the Python 2.7 sqlite3 documentation, it should be possible to insert a buffer object into a table, where it is supposed to be saved as a BLOB. However, I am unable to make this work. (FWIW, isinstance(b,buffer) prints True if inserted into the script, so I'm indeed creating a buffer object.)
Any suggestions?
Thanks,
-P.
Traceback (most recent call last):
File "example.py", line 13, in <module>
conn.execute( 'insert into foo values (?)', (b,) ) # <=== line 14
ValueError: could not convert BLOB to buffer
import sqlite3
import sys
import array
ar = array.array( 'I' )
ar.extend( [1,0,3,11,43] )
b = buffer( ar )
conn = sqlite3.connect( ':memory:' )
conn.execute( 'create table foo( bar BLOB )' )
conn.commit()
conn.execute( 'insert into foo values (?)', (b,) ) # <=== line 14
conn.commit()
A:
Cristian Ciupitu's note about the bug is correct, but bytes(ar) will give you the __str__ representation instead of a serialized output. Therefore, use ar.tostring().
Use array.fromstring to unserialize the array again - you have to create an array object with the same type and then call .fromstring(...).
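Put together, the round trip looks roughly like this. The sketch below uses the Python 3 method names tobytes()/frombytes() so it runs on modern interpreters; on Python 2.7 the same methods are called tostring()/fromstring() as described above, and sqlite3.Binary plays the role of buffer():

```python
import array
import sqlite3

ar = array.array('I', [1, 0, 3, 11, 43])

conn = sqlite3.connect(':memory:')
conn.execute('create table foo (bar BLOB)')
# serialize the packed integers and wrap them so sqlite3 stores a BLOB
conn.execute('insert into foo values (?)', (sqlite3.Binary(ar.tobytes()),))
conn.commit()

blob = conn.execute('select bar from foo').fetchone()[0]
restored = array.array('I')      # must use the same typecode as the original
restored.frombytes(bytes(blob))
print(restored.tolist())  # -> [1, 0, 3, 11, 43]
```

Note that the typecode ('I' here) is not stored with the BLOB, so the reader has to know it in advance.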
| Python 2.7, sqlite3, ValueError: could not convert BLOB to buffer | I am trying to save a BLOB created from an integer array (that is, a packed array of integers) in an SQLite DB. The script shown below gives the following traceback. As far as I can see from the Python 2.7 sqlite3 documentation, it should be possible to insert a buffer object into a table, where it is supposed to be saved as a BLOB. However, I am unable to make this work. (FWIW, isinstance(b,buffer) prints True if inserted into the script, so I'm indeed creating a buffer object.)
Any suggestions?
Thanks,
-P.
Traceback (most recent call last):
File "example.py", line 13, in <module>
conn.execute( 'insert into foo values (?)', (b,) ) # <=== line 14
ValueError: could not convert BLOB to buffer
import sqlite3
import sys
import array
ar = array.array( 'I' )
ar.extend( [1,0,3,11,43] )
b = buffer( ar )
conn = sqlite3.connect( ':memory:' )
conn.execute( 'create table foo( bar BLOB )' )
conn.commit()
conn.execute( 'insert into foo values (?)', (b,) ) # <=== line 14
conn.commit()
| [
"Cristian Ciupitu's note about the bug is correct, but bytes(ar) will give you the __str__ representation instead of a serialized output. Therefore, use ar.tostring().\nUse array.fromstring to unserialize the array again - you have to create an array object with the same type and then call .fromstring(...).\n"
] | [
2
] | [] | [] | [
"python",
"sqlite"
] | stackoverflow_0003983587_python_sqlite.txt |
Q:
Python sample programs for FusionCharts to integrate with MySQL
I want to integrate Python with FusionCharts and MySQL.
I have python programs to create/access MySQL DB.
The same data which resides in the MySQL DB has to be projected to the user in the FusionCharts using Python scripts.
Please help me out on this.
A:
There are quite a few examples out there
http://bitbucket.org/schmichael/python-fusioncharts/src/tip/snippets/
| Python sample programs for FusionCharts to integrate with MySQL | I want to integrate Python with FusionCharts and MySQL.
I have python programs to create/access MySQL DB.
The same data which resides in the MySQL DB has to be projected to the user in the FusionCharts using Python scripts.
Please help me out on this.
| [
"There are quite a few examples out there\n\nhttp://bitbucket.org/schmichael/python-fusioncharts/src/tip/snippets/\n\n"
] | [
0
] | [] | [] | [
"fusioncharts",
"python"
] | stackoverflow_0003984849_fusioncharts_python.txt |
Q:
Caching static files in Django
I was profiling the performance of my web application using Google's Page Speed plugin for Firebug and one of the things it says is that I should 'leverage caching' — "The following cacheable resources have a short freshness lifetime. Specify an expiration at least one week in the future for the following resources". When I dug deeper I found that all requests for static files to the Django WSGI server lacked the Expires and the Cache-Control headers. Who should add these headers — should Django do it? If so, how?
Thanks.
A:
Any static files you may have for your page should be served by your web server, e.g. Apache. Django should never be involved unless you have to prevent access of some files to certain people.
Here, I found an example of how to do it:
# our production setup includes a caching load balancer in front.
# we tell the load balancer to cache for 5 minutes.
# we can use the commented out lines to tell it to not cache,
# which is useful if we're updating.
<FilesMatch "\.(html|js|png|jpg|css)$">
ExpiresDefault "access plus 5 minutes"
ExpiresActive On
#Header unset cache-control:
#Header append cache-control: "no-cache, must-revalidate"
</FilesMatch>
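When Django does have to serve the files itself (the development server, or a protected-download view), the same headers can be set from Python. Here is a minimal sketch of building the header values by hand, with the one-week lifetime Page Speed asks for; the helper name is my own, and real projects can also use `django.utils.cache.patch_response_headers`, which adds both headers to a response in one call:

```python
import datetime

def cache_headers(days=7):
    """Build Expires / Cache-Control values for a static response."""
    expires = datetime.datetime.utcnow() + datetime.timedelta(days=days)
    return {
        "Cache-Control": "max-age=%d" % (days * 24 * 3600),
        # HTTP dates use the RFC 1123 format, always in GMT.
        "Expires": expires.strftime("%a, %d %b %Y %H:%M:%S GMT"),
    }

headers = cache_headers(7)
print(headers["Cache-Control"])  # max-age=604800
```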
| Caching static files in Django | I was profiling the performance of my web application using Google's Page Speed plugin for Firebug and one of the things it says is that I should 'leverage caching' — "The following cacheable resources have a short freshness lifetime. Specify an expiration at least one week in the future for the following resources". When I dug deeper I found that all requests for static files to the Django WSGI server lacked the Expires and the Cache-Control headers. Who should add these headers — should Django do it? If so, how?
Thanks.
| [
"Any static files you may have for your page should be served by your web server, e.g. Apache. Django should never be involved unless you have to prevent access of some files to certain people.\nHere, I found an example of how to do it:\n# our production setup includes a caching load balancer in front.\n# we tell t... | [
7
] | [] | [] | [
"caching",
"django",
"python"
] | stackoverflow_0003984984_caching_django_python.txt |
Q:
I have a text file of a paragraph of writing, and want to iterate through each word in Python
How would I do this? I want to iterate through each word and see if it fits certain parameters (for example is it longer than 4 letters..etc. not really important though).
The text file is literally a rambling of text with punctuation and white spaces, much like this posting.
A:
Try split()ing the string.
f = open('your_file')
for line in f:
for word in line.split():
# do something
If you want it without punctuation:
f = open('your_file')
for line in f:
for word in line.split():
word = word.strip('.,?!')
# do something
A:
You can simply content.split()
A:
f = open(filename, "r")
lines = f.readlines()
for i in lines:
    thisline = i.split(" ")
A:
data=open("file").read().split()
for item in data:
if len(item)>4:
print "longer than 4: ",item
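Putting the answers above together, here is a sketch of the whole loop: split on whitespace, strip surrounding punctuation, then apply the length test. An inline string stands in for the file contents, since `for line in open(...)` iterates the same way:

```python
text = "This posting, much like a rambling paragraph, has punctuation!"

long_words = []
for word in text.split():
    word = word.strip('.,?!')   # drop surrounding punctuation
    if len(word) > 4:           # the example parameter from the question
        long_words.append(word)

print(long_words)  # ['posting', 'rambling', 'paragraph', 'punctuation']
```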
| I have a text file of a paragraph of writing, and want to iterate through each word in Python | How would I do this? I want to iterate through each word and see if it fits certain parameters (for example is it longer than 4 letters..etc. not really important though).
The text file is literally a rambling of text with punctuation and white spaces, much like this posting.
| [
"Try split()ing the string.\nf = open('your_file')\nfor line in f:\n for word in line.split():\n # do something\n\nIf you want it without punctuation:\nf = open('your_file')\nfor line in f:\n for word in line.split():\n word = word.strip('.,?!')\n # do something\n\n",
"You can simply co... | [
2,
0,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0003984910_python.txt |
Q:
Documentation for python-gnomeapplet and friends
I'm attempting to write a Gnome applet using Python and pygtk. Sadly all sources of information that I have been able to find are from 2004 or older, and while the general structures presented are still valid, most of the particulars are out-of-date. Even the example applets I've found through web searches no longer work.
Does anyone know where I can find good, up-to-date documentation and information on how to create gnome panel applets in Python?
A:
There is a step-by-step tutorial, which might be the most up-to-date:
http://www.znasibov.info/blog/post/gnome-applet-with-python-part-1.html
Also check out the various applets listed in Gnome projects.
| Documentation for python-gnomeapplet and friends | I'm attempting to write a Gnome applet using Python and pygtk. Sadly all sources of information that I have been able to find are from 2004 or older, and while the general structures presented are still valid, most of the particulars are out-of-date. Even the example applets I've found through web searches no longer work.
Does anyone know where I can find good, up-to-date documentation and information on how to create gnome panel applets in Python?
| [
"There is a step-by-step tutorial, which might be the most up-to-date:\nhttp://www.znasibov.info/blog/post/gnome-applet-with-python-part-1.html\nAlso check out the various applets listed in Gnome projects.\n"
] | [
4
] | [] | [] | [
"gnome",
"pygtk",
"python"
] | stackoverflow_0003977736_gnome_pygtk_python.txt |
Q:
Why does Python disable MSCRT assertions when built in debug mode?
Python disables MSCRT assertions for debug mode during the initialization of the exceptions module when it is built in debug mode. At least from the source code, I can see Python 2.6.5 doing this for _MSC_VER >= 1400 i.e. Visual C++ 2005. Does anyone know why?
A:
See this thread on the bug tracker.
| Why does Python disable MSCRT assertions when built in debug mode? | Python disables MSCRT assertions for debug mode during the initialization of the exceptions module when it is built in debug mode. At least from the source code, I can see Python 2.6.5 doing this for _MSC_VER >= 1400 i.e. Visual C++ 2005. Does anyone know why?
| [
"See this thread on the bug tracker.\n"
] | [
2
] | [] | [] | [
"assertion",
"crt",
"python"
] | stackoverflow_0003985059_assertion_crt_python.txt |
Q:
Dictionary based switch-like statement with actions
I'm relatively new to Python and would like to know if I'm reinventing a wheel or do things in a non-pythonic way - read wrong.
I'm rewriting some parser originally written in Lua. There is one function which accepts a field name from the imported table and its value, does some actions on the value and stores it in the target dictionary under an appropriate key name.
In the original code it's solved by long switch-like statement with anonymous functions as actions.
Python code looks like the following:
class TransformTable:
target_dict = {}
...
def mapfield(self, fieldname, value):
try:
{
'productid': self.fn_prodid,
'name': self.fn_name,
'description': self.fn_desc,
...
}[fieldname](value)
except KeyError:
sys.stderr.write('Unknown key !\n')
def fn_name(val):
validity_check(val)
target_dict['Product'] = val.strip().capitalize()
...
Every "field-handler" function does different actions and stores in different keys in target_dict, of course.
Because Python does not support anonymous functions with statements (or did I miss something?), the functions have to be written separately, which makes the code less readable and unnecessarily complicated.
Any hints how to do such tasks in a more elegant and more pythonic way are appreciated.
Thx
David
A:
If by any means possible, you could name your member functions based on the field names and just do something like this:
getattr(self, "fn_" + fieldname)(value)
Edit: And you can use hasattr to check if the function exists, instead of expecting a KeyError. Or expect an AttributeError. At any rate, you should put only the access inside your try..except, and call it outside, since otherwise a KeyError caused within one of the field methods could get misunderstood.
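A self-contained sketch of that getattr() dispatch follows; `validity_check` from the question is omitted (it is not defined there), and the try/except is replaced by the three-argument getattr with a fallback:

```python
import sys

class TransformTable(object):
    def __init__(self):
        self.target_dict = {}

    def fn_name(self, val):
        # validity_check(val) would go here
        self.target_dict['Product'] = val.strip().capitalize()

    def mapfield(self, fieldname, value):
        # Look up the handler by name; None if no such method exists.
        handler = getattr(self, 'fn_' + fieldname, None)
        if handler is None:
            sys.stderr.write('Unknown key !\n')
        else:
            handler(value)

t = TransformTable()
t.mapfield('name', '  gadget ')
print(t.target_dict)  # {'Product': 'Gadget'}
```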
A:
I applied an approach similar to @Matti Virkkunen'a a while ago in my answer to a question titled "switch case in python doesn't work; need another pattern". It also demonstrates a relatively easy and graceful way of handling unknown fields. Transliterated to terms in your example it would look like this:
class TransformTable:
target_dict = {}
def productid(self, value):
...
def name(self, value):
validity_check(value)
self.target_dict['Product'] = value.strip().capitalize()
def description(self, value):
...
def _default(self, value):
sys.stderr.write('Unknown key!\n')
def __call__(self, fieldname, value):
getattr(self, fieldname, self._default)(value)
transformtable = TransformTable() # create callable instance
transformtable(fieldname, value) # use it
| Dictionary based switch-like statement with actions | I'm relatively new to Python and would like to know if I'm reinventing a wheel or do things in a non-pythonic way - read wrong.
I'm rewriting some parser originally written in Lua. There is one function which accepts a field name from the imported table and its value, does some actions on the value and stores it in the target dictionary under an appropriate key name.
In the original code it's solved by long switch-like statement with anonymous functions as actions.
Python code looks like the following:
class TransformTable:
target_dict = {}
...
def mapfield(self, fieldname, value):
try:
{
'productid': self.fn_prodid,
'name': self.fn_name,
'description': self.fn_desc,
...
}[fieldname](value)
except KeyError:
sys.stderr.write('Unknown key !\n')
def fn_name(val):
validity_check(val)
target_dict['Product'] = val.strip().capitalize()
...
Every "field-handler" function does different actions and stores in different keys in target_dict, of course.
Because Python does not support anonymous functions with statements (or did I miss something?), the functions have to be written separately, which makes the code less readable and unnecessarily complicated.
Any hints how to do such tasks in a more elegant and more pythonic way are appreciated.
Thx
David
| [
"If by any means possible, you could name your member functions based on the field names and just do something like this:\ngetattr(self, \"fn_\" + fieldname)(value)\n\nEdit: And you can use hasattr to check if the function exists, instead of expecting a KeyError. Or expect an AttributeError. At any rate, you should... | [
1,
0
] | [] | [] | [
"design_patterns",
"lua",
"python"
] | stackoverflow_0003982533_design_patterns_lua_python.txt |
Q:
Python: pattern matching for a string
Im trying to check a file line by line for any_string=any_string. It must be that format, no spaces or anything else. The line must contain a string then a "=" and then another string and nothing else. Could someone help me with the syntax in python to find this please? =]
pattern='*\S\=\S*'
I have this, but im pretty sure its wrong haha.
A:
Don't know if you are looking for lines with the same value on both = sides. If so then use:
the_same_re = re.compile(r'^(\S+)=(\1)$')
if values can differ then use
the_same_re = re.compile(r'^(\S+)=(\S+)$')
In this regexpes:
^ is the beginning of line
$ is the end of line
\S+ is one or more non space character
\1 is first group
r before regex string means "raw" string so you need not escape backslashes in string.
A:
pattern = r'\S+=\S+'
If you want to be able to grab the left and right-hand sides, you could add capture groups:
pattern = r'(\S+)=(\S+)'
If you don't want to allow multiple equals signs in the line (which would do weird things), you could use this:
pattern = r'[^\s=]+=[^\s=]+'
A:
I don't know which task you want to use this pattern for. Maybe you want to parse a configuration file.
If so, you can use the ConfigParser module.
A:
Ok, so you want to find anystring=anystring and nothing else. Then no need regex.
>>> s="anystring=anystring"
>>> sp=s.split("=")
>>> if len(sp)==2:
... print "ok"
...
ok
A:
Since Python 2.5 I prefer this to split. If you don't like spaces, just check.
left, _, right = any_string.partition("=")
if right and " " not in any_string:
# proceed
Also it never hurts to learn regular expressions.
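Applying the strictest of the patterns above line by line might look like this; the anchors plus `[^\s=]+` reject spaces and extra '=' signs on either side:

```python
import re

# ^ and $ anchor the whole line; [^\s=]+ means "one or more
# characters that are neither whitespace nor '='".
pattern = re.compile(r'^([^\s=]+)=([^\s=]+)$')

lines = ["foo=bar", "foo = bar", "a=b=c", "just text"]
matches = [line for line in lines if pattern.match(line)]
print(matches)  # only "foo=bar" fits the strict format
```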
| Python: pattern matching for a string | Im trying to check a file line by line for any_string=any_string. It must be that format, no spaces or anything else. The line must contain a string then a "=" and then another string and nothing else. Could someone help me with the syntax in python to find this please? =]
pattern='*\S\=\S*'
I have this, but im pretty sure its wrong haha.
| [
"Don't know if you are looking for lines with the same value on both = sides. If so then use:\nthe_same_re = re.compile(r'^(\\S+)=(\\1)$')\n\nif values can differ then use\nthe_same_re = re.compile(r'^(\\S+)=(\\S+)$')\n\nIn this regexpes:\n\n^ is the beginning of line\n$ is the end of line\n\\S+ is one or more non ... | [
4,
1,
1,
1,
1
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0003984930_python_regex.txt |
Q:
Parsing a list into a url string
I have a list of tags that I would like to add to a url string, separated by commas ('%2C'). How can I do this ? I was trying :
>>> tags_list
['tag1', ' tag2']
>>> parse_string = "http://www.google.pl/search?q=%s&restofurl" % (lambda x: "%s," %x for x in tags_list)
but received a generator :
>>> parse_string
'http://<generator object <genexpr> at 0x02751F58>'
Also do I need to change commas to %2C? I need it for feedparser to parse the results. If yes - how can I insert those tags separated by this special sign ?
EDIT:
parse_string = ""
for x in tags_list:
parse_string += "%s," % x
but can I escape this %2C ? Also I'm pretty sure there is a shorter 'lambda' way :)
A:
parse_string = ("http://www.google.pl/search?q=%s&restofurl" %
'%2C'.join(tag.strip() for tag in tags_list))
Results in:
>>> parse_string = ("http://www.google.pl/search?q=%s&restofurl" %
... '%2C'.join(tag.strip() for tag in tags_list))
>>> parse_string
'http://www.google.pl/search?q=tag1%2Ctag2&restofurl'
Side note:
Going forward I think you want to use format() for string interpolation, e.g.:
>>> parse_string = "http://www.google.pl/search?q={0}&restofurl".format(
... '%2C'.join(tag.strip() for tag in tags_list))
>>> parse_string
'http://www.google.pl/search?q=tag1%2Ctag2&restofurl'
A:
"%s" is fine, but urlparse.urlunparse after urllib.urlencode is safer.
str.join is fine, but remember to check your tags for commas and number signs, or use urllib.quote on each one.
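A sketch of that safer route: strip and join the tags, then quote the whole query value so a comma really is sent as %2C and any other unsafe characters are escaped too. Shown with the Python 3 import path; on Python 2 the same function is `urllib.quote`:

```python
from urllib.parse import quote  # Python 2: from urllib import quote

tags_list = ['tag1', ' tag2', 'c# tips']

# safe='' tells quote() to escape everything that is not unreserved,
# so ',' becomes %2C, '#' becomes %23 and ' ' becomes %20.
q = quote(','.join(tag.strip() for tag in tags_list), safe='')
url = "http://www.google.pl/search?q=%s&restofurl" % q
print(url)
```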
| Parsing a list into a url string | I have a list of tags that I would like to add to a url string, separated by commas ('%2C'). How can I do this ? I was trying :
>>> tags_list
['tag1', ' tag2']
>>> parse_string = "http://www.google.pl/search?q=%s&restofurl" % (lambda x: "%s," %x for x in tags_list)
but received a generator :
>>> parse_string
'http://<generator object <genexpr> at 0x02751F58>'
Also do I need to change commas to %2C? I need it for feedparser to parse the results. If yes - how can I insert those tags separated by this special sign ?
EDIT:
parse_string = ""
for x in tags_list:
parse_string += "%s," % x
but can I escape this %2C ? Also I'm pretty sure there is a shorter 'lambda' way :)
| [
"parse_string = (\"http://www.google.pl/search?q=%s&restofurl\" % \n '%2C'.join(tag.strip() for tag in tags_list))\n\nResults in:\n>>> parse_string = (\"http://www.google.pl/search?q=%s&restofurl\" %\n... '%2C'.join(tag.strip() for tag in tags_list))\n>>> parse_string\n'http://www.googl... | [
4,
1
] | [] | [] | [
"lambda",
"parsing",
"python",
"url",
"url_parsing"
] | stackoverflow_0003984422_lambda_parsing_python_url_url_parsing.txt |
Q:
Slicing a result list by time value
I got a result list and want to keep the elements that are newer than timeline and older than bookmark. Is there a more convenient method than iterating the whole list and removing the elements if they match the condition? Can you show me specifically how? The way I fetch data and then sort it
results = A.all().search(self.request.get('q')).filter("published =", True)
results = sorted(results, key=lambda x: x.modified, reverse=True)
Then I want to keep elements older than bookmark and newer than timeline where these variables are defined by HTTP GET or if blank defined as
bookmark = datetime.now()
timeline = datetime.now () - timedelta (days = 50)
I hope you understand what I'm trying to do (it's like paging) and thank you in advance for any ideas.
A:
If you're using the SearchableModel and doing a search query (which it would appear you are, from the snippet), you can't apply sort orders or inequality filters without requiring exploding indexes, as established in your previous question on the topic. Thus, you can't apply these filters as part of the query - so filtering the results manually is your best (and only) option.
A:
This?
bookmark = datetime.now()
timeline = datetime.now () - timedelta (days = 50)
results = A.all().search(self.request.get('q')).filter("published =", True)
results = results.filter( modified__lte=bookmark ).filter( modified__gte=timeline )
results = sorted(results, key=lambda x: x.modified, reverse=True)
Or this?
bookmark = datetime.now()
timeline = datetime.now () - timedelta (days = 50)
results = A.all().search(self.request.get('q')).filter("published =", True)
results = [ r for r in results if r.modified <= bookmark and r.modified >= timeline ]
results = sorted(results, key=lambda x: x.modified, reverse=True)
| Slicing a result list by time value | I got a result list and want to keep the elements that are newer than timeline and older than bookmark. Is there a more convenient method than iterating the whole list and removing the elements if they match the condition? Can you show me specifically how? The way I fetch data and then sort it
results = A.all().search(self.request.get('q')).filter("published =", True)
results = sorted(results, key=lambda x: x.modified, reverse=True)
Then I want to keep elements older than bookmark and newer than timeline where these variables are defined by HTTP GET or if blank defined as
bookmark = datetime.now()
timeline = datetime.now () - timedelta (days = 50)
I hope you understand what I'm trying to do (it's like paging) and thank you in advance for any ideas.
| [
"If you're using the SearchableModel and doing a search query (which it would appear you are, from the snippet), you can't apply sort orders or inequality filters without requiring exploding indexes, as established in your previous question on the topic. Thus, you can't apply these filters as part of the query - so... | [
1,
0
] | [] | [] | [
"google_app_engine",
"python"
] | stackoverflow_0003983093_google_app_engine_python.txt |
Q:
Matching a datetime against a date in django
I have a django model with a DateTimeField called when which I want to match against a Date object. Is there a way to do that in django's queryset language better than
Samples.objects.filter( when__gte = mydate, when__lt = mydate + datetime.timedelta(1) )
A:
Same like W_P, off the top of my head:
Samples.objects.filter(when__year = mydate.year, when__month = mydate.month, when__day = mydate.day)
You can round that up to year, month, day. This is the way I create posts archive in my code. I have three options: yearly archive, monthly archive and daily archive. The difference between them is the combination of arguments.
A:
Just off the top of my head, I suppose you could do this:
def tomorrow(dt):
return dt + datetime.timedelta(1)
# ...
Samples.objects.filter( when__gte = mydate, when__lt = tomorrow(mydate) )
I know it doesn't really *solve* your problem, but at least it looks nicer...
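Why the gte/lt pair matches exactly one calendar day can be checked in plain Python, without the ORM:

```python
import datetime

mydate = datetime.date(2010, 10, 20)
start = datetime.datetime.combine(mydate, datetime.time.min)
end = start + datetime.timedelta(days=1)

# A datetime falls on mydate exactly when start <= when < end,
# which is what when__gte / when__lt express in the queryset.
when = datetime.datetime(2010, 10, 20, 23, 59)
print(start <= when < end)  # True
```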
| Matching a datetime against a date in django | I have a django model with a DateTimeField called when which I want to match against a Date object. Is there a way to do that in django's queryset language better than
Samples.objects.filter( when__gte = mydate, when__lt = mydate + datetime.timedelta(1) )
| [
"Same like W_P, off the top of my head:\nSamples.objects.filter(when__year = mydate.year, when__month = mydate.month, when__day = mydate.day)\n\nYou can round that up to year, month, day. This is the way I create posts archive in my code. I have three options: yearly archive, monthly archive and daily archive. The ... | [
2,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003983535_django_python.txt |
Q:
How to Append data to a combobox
I need to insert data into a combobox, so I do an append as defined in these lines:
fd = open(files,'rb')
data=fd.readlines()
for i in data[]:
item=i.strip()
if item is not None:
combobox.Append(item)
fd.close
Even after the data is inserted, the selection is still empty.
Please can you tell me how to set the selection to one of the values read,
for example so that the selection contains the first element.
A:
combobox.SetSelection(0) # select first item
A:
I know this probably doesn't answer your question, but I recommend you close the file connection once you're done using it.
fd = open(files, 'rb')
data = fd.readlines()

# Close the file, you're done using it!
fd.close()

# Now do what you want with data.
for i in data:
    item = i.strip()
    if item:  # strip() never returns None, so test for an empty string
        combobox.Append(item)
| How to Append data to a combobox | I need to insert data into a combobox, so I do an append as defined in these lines:
fd = open(files,'rb')
data=fd.readlines()
for i in data[]:
item=i.strip()
if item is not None:
combobox.Append(item)
fd.close
Even after the data is inserted, the selection is still empty.
Please can you tell me how to set the selection to one of the values read,
for example so that the selection contains the first element.
| [
"combobox.SetSelection(0) # select first item\n\n",
"I know this probably doesn't answer your question, but I recommend you close the file connection once you're done using it.\nfd = open(files,'rb')\ndata=fd.readlines()\n\n#Close the connection, you're done using it!\nfd.close\n\n#Now do what you want with data... | [
1,
0
] | [] | [] | [
"python",
"wxpython",
"wxwidgets"
] | stackoverflow_0003983583_python_wxpython_wxwidgets.txt |
Q:
Fix depth tree in Python
I want to implement a tree structure which has fixed depth, i.e. when adding children to the leaf nodes, the whole tree structure should "move up". This also means that several roots can exist simultaneously. See example beneath:
In this example, the green nodes are added in iteration 1, deleting the top node (grey) and making the two blue nodes at K=0 and Iteration 1 root nodes.
How do I go about implementing this?
A:
Store each node with a reference to its parent. When you add a node to it as a child, walk up the parents (from the node being added to) and delete the third one after you set the parent reference in all of its children to None. Then add the children of the deleted node to your list of trees.
class Node(object):
depth = 4
def __init__(self, parent, contents):
self.parent = parent
self.contents = contents
self.children = []
def create_node(trees, parent, contents):
"""Adds a leaf to a specified node in the set of trees.
Note that it has to have access to the container that holds all of the trees so
that it can delete the appropriate parent node and add its children as independent
trees. Passing it in seems a little ugly. The container of trees could be a class
with this as a method or you could use a global list. Or something completely
different. The important thing is that if you don't delete every reference to the
old root, you'll leak memory.
"""
parent.children.append(Node(parent, contents))
    i = 0
L = Node.depth - 1
while i < L:
parent = parent.parent
if not parent:
break
i += 1
else:
for node in parent.children:
node.parent = None
trees.extend(parent.children)
        i = trees.index(parent)
del trees[i]
| Fix depth tree in Python | I want to implement a tree structure which has fixed depth, i.e. when adding children to the leaf nodes, the whole tree structure should "move up". This also means that several roots can exist simultaneously. See example beneath:
In this example, the green nodes are added in iteration 1, deleting the top node (grey) and making the two blue nodes at K=0 and Iteration 1 root nodes.
How do I go about implementing this?
| [
"Store each node with a reference to its parent. When you add a node to it as a child, walk up the parents (from the node being added to) and delete the third one after you set the parent reference in all of its children to None. Then add the children of the deleted node to your list of trees.\nclass Node(object):\... | [
2
] | [] | [] | [
"data_structures",
"python",
"tree"
] | stackoverflow_0003985453_data_structures_python_tree.txt |
Q:
Slim down Python wxPython OS X app built with py2app?
I have just made a small little app of a Python wxPython script with py2app. Everything worked as advertised, but the app is pretty big in size. Is there any way to optimize py2app to make the app smaller in size?
A:
This is a workaround.
It will depend on which OS you want to target. Python and wxPython are bundled with every Mac OS X installation (at least starting with Leopard, if I recall correctly)
What you might try, is to add the --alias compilation option. According to the py2app doc:
Alias mode (the -A or --alias option) instructs py2app to build an application bundle that uses your source and data files in-place.
So that the app will try to use the Mac OS X version of wxPython.
CAREFUL
The alias option is called development mode.
If you use another library that is not bundled with Mac OS X, it won't work.
If you want to port your application to Windows or Linux, it won't work.
I've had some successful use with this method but in the end, went back to zipping a 30/40MB file. Which in the end is not that big.
A:
py2app or any other such packager mostly bundles all dependencies and files together so that you could easily distribute them. The size is usually large as it bundles all dependencies , share libraries , packages along with your script file. In most cases, it will be difficult to reduce the size.
How ever, you can ensure the following so that there is lesser cruft in the bundled application.
Do not use --no-strip option, this will ensure that debug symbols are stripped.
Use "--optimize 2 or -O2" optimization option
Use "--strip (-S)" option to strip debug and local symbols from output
Use "--debug-skip-macholib", it will not make it truly stand alone but will reduce the size
I am hoping that you have removed unnecessary files from wxPython like demo, doc etc.
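For reference, the same flags can be set in setup.py instead of on the command line. This is a hedged sketch only: the exact option names should be checked against your py2app version ('strip' and 'optimize' correspond to -S and -O2), and myscript.py plus the excludes list are placeholders:

```python
# setup.py -- build with: python setup.py py2app
from setuptools import setup

OPTIONS = {
    'strip': True,       # remove debug/local symbols (--strip / -S)
    'optimize': 2,       # byte-compile with -O2 (--optimize 2)
    'excludes': ['tkinter', 'doctest'],  # example modules to leave out
}

setup(
    app=['myscript.py'],             # hypothetical entry script
    options={'py2app': OPTIONS},
    setup_requires=['py2app'],
)
```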
| Slim down Python wxPython OS X app built with py2app? | I have just made a small little app of a Python wxPython script with py2app. Everything worked as advertised, but the app is pretty big in size. Is there any way to optimize py2app to make the app smaller in size?
| [
"This is a workaround.\nIt will depend on which OS you want to target. Python and wxPython are bundled with every Mac OS X installation (at least starting with Leopard, if I recall correctly)\nWhat you might try, is to add the --alias compilation option. According to the py2app doc:\n\nAlias mode (the -A or --alias... | [
2,
1
] | [] | [] | [
"optimization",
"py2app",
"python",
"wxpython"
] | stackoverflow_0003979658_optimization_py2app_python_wxpython.txt |
Q:
Python - go to two lines above match
In a text file like this:
First Name last name #
secone name
Address Line 1
Address Line 2
Work Phone:
Home Phone:
Status:
First Name last name #
....same as above...
I need to match the string 'Work Phone:', then go two lines up and insert the character '|' at the beginning of the line. So pseudo code would be:
if "Work Phone:" in line:
go up two lines:
write | + line
write rest of the lines.
File is about 10 mb and there are about 1000 paragraphs like this.
Then i need to write it to another file. So desired result would be:
First Name last name #
secone name
|Address Line 1
Address Line 2
Work Phone:
Home Phone:
Status:
thanks for any help.
A:
This solution doesn't read whole file into memory
p=""
q=""
for line in open("file"):
line=line.rstrip()
if "Work Phone" in line:
p="|"+p
if p: print p
p,q=q,line
print p
print q
output
$ python test.py
First Name last name #
secone name
|Address Line 1
Address Line 2
Work Phone:
Home Phone:
Status:
A:
Something like this?
lines = text.splitlines()
for i, line in enumerate(lines):
if 'Work Phone:' in line:
lines[i-2] = '|' + lines[i-2]
A:
You can use this regex
((?:.*\n){2})(Work Phone:)
and replace the matches with
|\1\2
You don't even need Python, you can do such a thing in any modern text editor, like Vim.
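For the Python route, re.sub can apply the same idea. Note the non-capturing inner group (?:...): it repeats twice without capturing, so group 1 holds both lines and the replacement reinserts them intact with the '|' in front:

```python
import re

text = ("First Name last name #\n"
        "secone name\n"
        "Address Line 1\n"
        "Address Line 2\n"
        "Work Phone:\n"
        "Home Phone:\n")

# \1 is the two lines before the match, \2 is "Work Phone:".
fixed = re.sub(r'((?:.*\n){2})(Work Phone:)', r'|\1\2', text)
print(fixed)
```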
| Python - go to two lines above match | In a text file like this:
First Name last name #
secone name
Address Line 1
Address Line 2
Work Phone:
Home Phone:
Status:
First Name last name #
....same as above...
I need to match the string 'Work Phone:', then go two lines up and insert the character '|' at the beginning of the line. So pseudo code would be:
if "Work Phone:" in line:
go up two lines:
write | + line
write rest of the lines.
File is about 10 mb and there are about 1000 paragraphs like this.
Then i need to write it to another file. So desired result would be:
First Name last name #
secone name
|Address Line 1
Address Line 2
Work Phone:
Home Phone:
Status:
thanks for any help.
| [
"This solution doesn't read whole file into memory\np=\"\"\nq=\"\"\nfor line in open(\"file\"):\n line=line.rstrip()\n if \"Work Phone\" in line:\n p=\"|\"+p\n if p: print p\n p,q=q,line\nprint p\nprint q\n\noutput\n$ python test.py\nFirst Name last name #\nsecone name\n|Address Line 1\nAddress Li... | [
1,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0003985705_python.txt |
Q:
conversion of string representing HH:MM:SS.sss format into HH:MM:SS.ssssss format in python
I have a string which represents time in HH:MM:SS.sss format. Now I have to convert this string into HH:MM:SS.ssssss format. Please let me know how to do this.
A:
Can you not just append "000" on the end?
So "13:23:12.345" => "13:23:12.345000"
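If the input may carry a varying number of fraction digits, round-tripping through datetime is a safe way to normalize: strptime's %f accepts 1 to 6 fractional digits when parsing, and strftime always writes the full six back out.

```python
from datetime import datetime

def pad_fraction(ts):
    """HH:MM:SS.s... -> HH:MM:SS.ssssss (six fractional digits)."""
    return datetime.strptime(ts, "%H:%M:%S.%f").strftime("%H:%M:%S.%f")

print(pad_fraction("13:23:12.345"))  # 13:23:12.345000
```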
| conversion of string representing HH:MM:SS.sss format into HH:MM:SS.ssssss format in python | I have a string which represents time in HH:MM:SS.sss format. Now I have to convert this string into HH:MM:SS.ssssss format. Please let me know how to do this.
| [
"Can you not just append \"000\" on the end?\nSo \"13:23:12.345\" => \"13:23:12.345000\"\n"
] | [
4
] | [] | [] | [
"python"
] | stackoverflow_0003986003_python.txt |
Q:
Python: Comparing Lists
I have come across a small problem. Say I have two lists:
list_A = ['0','1','2']
list_B = ['2','0','1']
I then have a list of lists:
matrix = [
['56','23','4'],
['45','5','67'],
['1','52','22']
]
I then need to iterate through list_A and list_B and effectively use them as co-ordinates. For example I take the firs number from list A and B which would be '0' and '2', I then use them as co-ordinates: print matrix[0][2]
I then need to do the same for the 2nd number in list A and B and the 3rd number in list A and B and so forth for however long List A and B how would be. How do this in a loop?
A:
matrix = [
['56','23','4'],
['45','5','67'],
['1','52','22']
]
list_A = ['0','1','2']
list_B = ['2','0','1']
for x in zip(list_A,list_B):
a,b=map(int,x)
print(matrix[a][b])
# 4
# 45
# 52
A:
[matrix[int(a)][int(b)] for (a,b) in zip(list_A, list_B)]
A:
The 'zip' function could be of some use here. It will generate a list of pairs from list_A and list_B.
for (x,y) in zip(list_A, list_B):
# do something with the coordinates
| Python: Comparing Lists | I have come across a small problem. Say I have two lists:
list_A = ['0','1','2']
list_B = ['2','0','1']
I then have a list of lists:
matrix = [
['56','23','4'],
['45','5','67'],
['1','52','22']
]
I then need to iterate through list_A and list_B and effectively use them as co-ordinates. For example, I take the first number from list A and B, which would be '0' and '2', and then use them as co-ordinates: print matrix[0][2]
I then need to do the same for the 2nd number in list A and B, the 3rd number in list A and B, and so forth for however long lists A and B are. How do I do this in a loop?
| [
"matrix = [\n['56','23','4'],\n['45','5','67'],\n['1','52','22']\n]\n\nlist_A = ['0','1','2']\nlist_B = ['2','0','1']\n\nfor x in zip(list_A,list_B):\n a,b=map(int,x)\n print(matrix[a][b])\n# 4\n# 45\n# 52\n\n",
"[matrix[int(a)][int(b)] for (a,b) in zip(list_A, list_B)]\n\n",
"The 'zip' function could be ... | [
8,
2,
0
] | [] | [] | [
"list",
"python"
] | stackoverflow_0003986222_list_python.txt |
Q:
How do you wrap the view of a 3rd-party Django app
How do you wrap the view of a 3rd-party app (let's call the view to wrap "view2wrap" and the app "3rd_party_app") so you can do some custom things before the app does its thing?
I've set urls.py to capture the correct url:
url( r'^foo/bar/$', view_wrapper, name='my_wrapper'),
I've created my custom view:
from 3rd_party_app.views import view2wrap
def view_wrapper(request, *args, **kwargs):
# Do some cool custom stuff
return view2wrap(request, *args, **kwargs)
When I try this, I get the error "No module named 3rd_party_app.views". Why?
A:
The third party application is not in your python path.
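If that is the case, one sketch of a fix is to put the package's parent directory on sys.path before importing (the path below is a made-up placeholder). Note also that a module whose name starts with a digit, like 3rd_party_app, cannot be imported with a plain import statement at all; that line is a SyntaxError regardless of the path.

```python
import sys

# hypothetical directory that contains the third-party package
sys.path.insert(0, '/path/to/libs')
print(sys.path[0])  # /path/to/libs
```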
A:
Is the 3rd Party App listed in INSTALLED_APPS in your settings.py?
A:
Try placing the 3rd party package folder within your project folder. :)
| How do you wrap the view of a 3rd-party Django app | How do you wrap the view of a 3rd-party app (let's call the view to wrap "view2wrap" and the app "3rd_party_app") so you can do some custom things before the app does its thing?
I've set urls.py to capture the correct url:
url( r'^foo/bar/$', view_wrapper, name='my_wrapper'),
I've created my custom view:
from 3rd_party_app.views import view2wrap
def view_wrapper(request, *args, **kwargs):
# Do some cool custom stuff
return view2wrap(request, *args, **kwargs)
When I try this, I get the error "No module named 3rd_party_app.views". Why?
| [
"The third party application is not in your python path.\n",
"Is the 3rd Party App listed in INSTALLED_APPS in your settings.py?\n",
"Try placing the 3rd party package folder within your project folder. :)\n"
] | [
3,
0,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0003982443_django_python.txt |
Q:
In a Django template for loop, checking if current item different from previous item
I'm new to django and can't find a way to get this to work in django templates. The idea is to check if the previous item's first letter is equal to the current one's, like so:
{% for item in items %}
{% ifequal item.name[0] previous_item.name[0] %}
{{ item.name[0] }}
{% endifequal %}
{{ item.name }}<br />
{% endforeach %}
Maybe I'm trying to do this in the wrong way and somebody can point me in the right direction.
A:
Use the {% ifchanged %} tag.
{% for item in items %}
{% ifchanged item.name.0 %}
{{ item.name.0 }}
{% endifchanged %}
{% endfor %}
Also remember you have to always use dot syntax - brackets are not valid template syntax.
| In a Django template for loop, checking if current item different from previous item | I'm new to django and can't find a way to get this to work in django templates. The idea is to check if the previous item's first letter is equal to the current one's, like so:
{% for item in items %}
{% ifequal item.name[0] previous_item.name[0] %}
{{ item.name[0] }}
{% endifequal %}
{{ item.name }}<br />
{% endforeach %}
Maybe I'm trying to do this in the wrong way and somebody can point me in the right direction.
| [
"Use the {% ifchanged %} tag.\n{% for item in items %}\n {% ifchanged item.name.0 %}\n {{ item.name.0 }}\n {% endifchanged %}\n{% endfor %}\n\nAlso remember you have to always use dot syntax - brackets are not valid template syntax.\n"
] | [
60
] | [] | [] | [
"django",
"django_templates",
"python"
] | stackoverflow_0003986183_django_django_templates_python.txt |
Q:
Rename dictionary keys according to another dictionary
(In Python 3)
I have a dictionary old. I need to change some of its keys; the keys that need to be changed and the corresponding new keys are stored in a dictionary change. What's a good way to do it? Note that there may be an overlap between old.keys() and change.values(), which requires that I be careful applying the change.
The following code would (I think) work but I was hoping for something more concise and yet Pythonic:
new = {}
for k, v in old.items():
if k in change:
k = change[k]
new[k] = v
old = new
A:
old = {change.get(k,k):v for k,v in old.items()}
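A quick check of the overlap case the question worries about: values are read from the old dict before any key is rewritten, so even swapped keys come out right.

```python
old = {'a': 1, 'b': 2, 'c': 3}
change = {'a': 'b', 'b': 'a'}  # 'b' is both an old key and a new key

old = {change.get(k, k): v for k, v in old.items()}
print(old)  # {'b': 1, 'a': 2, 'c': 3}
```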
| Rename dictionary keys according to another dictionary | (In Python 3)
I have a dictionary old. I need to change some of its keys; the keys that need to be changed and the corresponding new keys are stored in a dictionary change. What's a good way to do it? Note that there may be an overlap between old.keys() and change.values(), which requires that I be careful applying the change.
The following code would (I think) work but I was hoping for something more concise and yet Pythonic:
new = {}
for k, v in old.items():
if k in change:
k = change[k]
new[k] = v
old = new
| [
"old = {change.get(k,k):v for k,v in old.items()}\n\n"
] | [
10
] | [] | [] | [
"dictionary",
"python",
"python_3.x"
] | stackoverflow_0003986549_dictionary_python_python_3.x.txt |
Q:
Law of Demeter and Python
Is there a tool to check if a Python code conforms to the law of Demeter?
I found a mention of Demeter in pychecker, but it seems that the tool understands this law different to what I expect: http://en.wikipedia.org/wiki/Law_of_Demeter
The definition from wikipedia: the Law of Demeter for functions requires that a method M of an object O may only invoke the methods of the following kinds of objects:
O itself
M's parameters
any objects created/instantiated within M
O's direct component objects
a global variable, accessible by O, in the scope of M
A:
The way this law is explained in the link you provide it is far too vague and subjective to be efficiently checked by any automated tool. You would need to think of specific rules that lead to code that abides by this law. Then you can check for these rules.
A:
law of Demeter ... method M of an object O may only invoke the methods of the following kinds of objects:
O itself -- i.e. self variables. Easy to see.
M's parameters -- i.e., local variables given by the parameters. That is to say, in locals()
any objects created/instantiated within M -- i.e., local variables. That is to say, in locals()
O's direct component objects -- i.e., self variables.
a global variable, accessible by O, in the scope of M -- i.e., variables named in a global statement or implicit references to globals that are not found in the local namespace. That is to say in globals().
Ummm.... There are no other variables accessible to a function, are there? Because of the way namespaces work, I don't see any possibility for breaking this law.
Do you have an example of Python code that breaks one of these rules?
How would you get access to another namespace?
A:
You could probably break that law like this:
class SomeClass:
def someMethod(self):
self.getSomeOtherClass().someOtherMethod() # this breaks the law
def getSomeOtherClass(self):
class_ = SomeOtherClass()
return class_
Or no?
| Law of Demeter and Python | Is there a tool to check if a Python code conforms to the law of Demeter?
I found a mention of Demeter in pychecker, but it seems that the tool understands this law different to what I expect: http://en.wikipedia.org/wiki/Law_of_Demeter
The definition from wikipedia: the Law of Demeter for functions requires that a method M of an object O may only invoke the methods of the following kinds of objects:
O itself
M's parameters
any objects created/instantiated within M
O's direct component objects
a global variable, accessible by O, in the scope of M
| [
"The way this law is explained in the link you provide it is far too vague and subjective to be efficiently checked by any automated tool. You would need to think of specific rules that lead to code that abides by this law. Then you can check for these rules.\n",
"\nlaw of Demeter ... method M of an object O may ... | [
1,
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0003985947_python.txt |
Q:
converting tiff to gif in php
I need to convert a TIFF file to GIF. Can someone give me a PHP or Python script to do that?
A:
Try phpThumb in conjunction with the Imagick extension in PHP.
A:
http://php.net/manual/en/book.imagick.php
try
{
$image = '/tmp/image.tiff';
$im = new Imagick();
$im->pingImage( $image );
$im->readImage( $image );
$im->setImageFormat( 'gif' );
$im->writeImage( '/tmp/image.gif' );
}
catch(Exception $e)
{
echo $e->getMessage();
}
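Since the question also allows Python, here is a sketch using the Pillow fork of PIL (it builds a throwaway sample TIFF first so the snippet is self-contained; the paths are placeholders):

```python
from PIL import Image

# create a tiny sample TIFF so the sketch has something to convert
Image.new('RGB', (4, 4), 'red').save('/tmp/image.tiff')

im = Image.open('/tmp/image.tiff')
im.save('/tmp/image.gif')  # Pillow infers the GIF format from the extension
print(Image.open('/tmp/image.gif').format)  # GIF
```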
| converting tiff to gif in php | I need to convert a TIFF file to GIF. Can someone give me a PHP or Python script to do that?
| [
"Try phpThumb in conjunction with the Imagick extension in PHP.\n",
"http://php.net/manual/en/book.imagick.php\ntry\n{\n $image = '/tmp/image.tiff'; \n $im = new Imagick(); \n $im->pingImage( $image ); \n $im->readImage( $image ); \n $im->setImageFormat( 'gif' ); ... | [
1,
1
] | [] | [] | [
"image_manipulation",
"php",
"python",
"tiff"
] | stackoverflow_0003985573_image_manipulation_php_python_tiff.txt |
Q:
Is this usage of python tempfile.NamedTemporaryFile secure?
Is this usage of Python tempfile.NamedTemporaryFile secure (i.e. devoid of the security issues of the deprecated tempfile.mktemp)?
def mktemp2():
"""Create and close an empty temporary file.
Return the temporary filename"""
tf = tempfile.NamedTemporaryFile(delete=False)
tfilename = tf.name
tf.close()
return tfilename
outfilename = mktemp2()
subprocess.call(['program_name','-o',outfilename])
What I need is to run an external command that requires an output file name as one of the arguments. It overwrites outfilename if it exists, without warnings. I want to use a temporary file as I just need to read its content; I don't need it later.
A:
Totally unsafe. There is an opportunity for an attacker to create the file with whatever permissions they like (or a symlink) with that name between when it is deleted and opened by the subprocess
If you can instead create the file in a directory other than /tmp that is owned and only read/writeable by your process, you don't need to concern yourself with the security of the file, as anything in the directory is protected.
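A sketch of that safer pattern with tempfile.mkdtemp, which creates the directory with mode 0700 so no other user can plant a file or symlink at the output path (the external command below is the question's placeholder):

```python
import os
import shutil
import stat
import tempfile

tmpdir = tempfile.mkdtemp()  # private directory, mode 0700
mode = stat.S_IMODE(os.stat(tmpdir).st_mode)
print(oct(mode))  # 0o700 on POSIX

outfilename = os.path.join(tmpdir, 'output')
# hand outfilename to the external program, e.g.
# subprocess.call(['program_name', '-o', outfilename])
# ...read the result back, then remove the whole directory:
shutil.rmtree(tmpdir)
```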
| Is this usage of python tempfile.NamedTemporaryFile secure? | Is this usage of Python tempfile.NamedTemporaryFile secure (i.e. devoid of the security issues of the deprecated tempfile.mktemp)?
def mktemp2():
"""Create and close an empty temporary file.
Return the temporary filename"""
tf = tempfile.NamedTemporaryFile(delete=False)
tfilename = tf.name
tf.close()
return tfilename
outfilename = mktemp2()
subprocess.call(['program_name','-o',outfilename])
What I need is to run an external command that requires an output file name as one of the arguments. It overwrites outfilename if it exists, without warnings. I want to use a temporary file as I just need to read its content; I don't need it later.
| [
"Totally unsafe. There is an opportunity for an attacker to create the file with whatever permissions they like (or a symlink) with that name between when it is deleted and opened by the subprocess\nIf you can instead create the file in a directory other than /tmp that is owned and onnly read/writeable by your proc... | [
4
] | [] | [] | [
"file",
"python",
"temporary_files"
] | stackoverflow_0003986364_file_python_temporary_files.txt |
Q:
How to find the local minima of a smooth multidimensional array in NumPy efficiently?
Say I have an array in NumPy containing evaluations of a continuous differentiable function, and I want to find the local minima. There is no noise, so every point whose value is lower than the values of all its neighbors meets my criterion for a local minimum.
I have the following list comprehension which works for a two-dimensional array, ignoring potential minima on the boundaries:
import numpy as N
def local_minima(array2d):
local_minima = [ index
for index in N.ndindex(array2d.shape)
if index[0] > 0
if index[1] > 0
if index[0] < array2d.shape[0] - 1
if index[1] < array2d.shape[1] - 1
if array2d[index] < array2d[index[0] - 1, index[1] - 1]
if array2d[index] < array2d[index[0] - 1, index[1]]
if array2d[index] < array2d[index[0] - 1, index[1] + 1]
if array2d[index] < array2d[index[0], index[1] - 1]
if array2d[index] < array2d[index[0], index[1] + 1]
if array2d[index] < array2d[index[0] + 1, index[1] - 1]
if array2d[index] < array2d[index[0] + 1, index[1]]
if array2d[index] < array2d[index[0] + 1, index[1] + 1]
]
return local_minima
However, this is quite slow. I would also like to get this to work for any number of dimensions. For example, is there an easy way to get all the neighbors of a point in an array of any dimensions? Or am I approaching this problem the wrong way altogether? Should I be using numpy.gradient() instead?
A:
The location of the local minima can be found for an array of arbitrary dimension
using Ivan's detect_peaks function, with minor modifications:
import numpy as np
import scipy.ndimage.filters as filters
import scipy.ndimage.morphology as morphology
def detect_local_minima(arr):
# https://stackoverflow.com/questions/3684484/peak-detection-in-a-2d-array/3689710#3689710
"""
Takes an array and detects the troughs using the local maximum filter.
Returns a boolean mask of the troughs (i.e. 1 when
the pixel's value is the neighborhood maximum, 0 otherwise)
"""
# define an connected neighborhood
# http://www.scipy.org/doc/api_docs/SciPy.ndimage.morphology.html#generate_binary_structure
neighborhood = morphology.generate_binary_structure(len(arr.shape),2)
# apply the local minimum filter; all locations of minimum value
# in their neighborhood are set to 1
# http://www.scipy.org/doc/api_docs/SciPy.ndimage.filters.html#minimum_filter
local_min = (filters.minimum_filter(arr, footprint=neighborhood)==arr)
# local_min is a mask that contains the peaks we are
# looking for, but also the background.
# In order to isolate the peaks we must remove the background from the mask.
#
# we create the mask of the background
background = (arr==0)
#
# a little technicality: we must erode the background in order to
# successfully subtract it from local_min, otherwise a line will
# appear along the background border (artifact of the local minimum filter)
# http://www.scipy.org/doc/api_docs/SciPy.ndimage.morphology.html#binary_erosion
eroded_background = morphology.binary_erosion(
background, structure=neighborhood, border_value=1)
#
# we obtain the final mask, containing only peaks,
# by removing the background from the local_min mask
detected_minima = local_min ^ eroded_background
return np.where(detected_minima)
which you can use like this:
arr=np.array([[[0,0,0,-1],[0,0,0,0],[0,0,0,0],[0,0,0,0],[-1,0,0,0]],
[[0,0,0,0],[0,-1,0,0],[0,0,0,0],[0,0,0,-1],[0,0,0,0]]])
local_minima_locations = detect_local_minima(arr)
print(arr)
# [[[ 0 0 0 -1]
# [ 0 0 0 0]
# [ 0 0 0 0]
# [ 0 0 0 0]
# [-1 0 0 0]]
# [[ 0 0 0 0]
# [ 0 -1 0 0]
# [ 0 0 0 0]
# [ 0 0 0 -1]
# [ 0 0 0 0]]]
This says the minima occur at indices [0,0,3], [0,4,0], [1,1,1] and [1,3,3]:
print(local_minima_locations)
# (array([0, 0, 1, 1]), array([0, 4, 1, 3]), array([3, 0, 1, 3]))
print(arr[local_minima_locations])
# [-1 -1 -1 -1]
A:
Try this for 2D:
import numpy as N
def local_minima(array2d):
return ((array2d <= N.roll(array2d, 1, 0)) &
(array2d <= N.roll(array2d, -1, 0)) &
(array2d <= N.roll(array2d, 1, 1)) &
(array2d <= N.roll(array2d, -1, 1)))
This will return you an array2d-like array with True/False where local minima (four neighbors) are located.
| How to find the local minima of a smooth multidimensional array in NumPy efficiently? | Say I have an array in NumPy containing evaluations of a continuous differentiable function, and I want to find the local minima. There is no noise, so every point whose value is lower than the values of all its neighbors meets my criterion for a local minimum.
I have the following list comprehension which works for a two-dimensional array, ignoring potential minima on the boundaries:
import numpy as N
def local_minima(array2d):
local_minima = [ index
for index in N.ndindex(array2d.shape)
if index[0] > 0
if index[1] > 0
if index[0] < array2d.shape[0] - 1
if index[1] < array2d.shape[1] - 1
if array2d[index] < array2d[index[0] - 1, index[1] - 1]
if array2d[index] < array2d[index[0] - 1, index[1]]
if array2d[index] < array2d[index[0] - 1, index[1] + 1]
if array2d[index] < array2d[index[0], index[1] - 1]
if array2d[index] < array2d[index[0], index[1] + 1]
if array2d[index] < array2d[index[0] + 1, index[1] - 1]
if array2d[index] < array2d[index[0] + 1, index[1]]
if array2d[index] < array2d[index[0] + 1, index[1] + 1]
]
return local_minima
However, this is quite slow. I would also like to get this to work for any number of dimensions. For example, is there an easy way to get all the neighbors of a point in an array of any dimensions? Or am I approaching this problem the wrong way altogether? Should I be using numpy.gradient() instead?
| [
"The location of the local minima can be found for an array of arbitrary dimension\nusing Ivan's detect_peaks function, with minor modifications:\nimport numpy as np\nimport scipy.ndimage.filters as filters\nimport scipy.ndimage.morphology as morphology\n\ndef detect_local_minima(arr):\n # https://stackoverflow.... | [
20,
5
] | [] | [] | [
"discrete_mathematics",
"mathematical_optimization",
"numpy",
"python"
] | stackoverflow_0003986345_discrete_mathematics_mathematical_optimization_numpy_python.txt |
Q:
What is the Pythonic way to create informative comments in Python 2.x?
For further clarification, C# has the '///' directive which invokes the super-secret-styled Comments which allow you to have nice comments built into intellisense. Java has the '@' directive that allows you to have nice comments as well.
Does Python have something like this? I hope this question is clear enough, thanks!
A:
They are called docstrings in Python. See the documentation.
A nice feature are the code samples (explained here). They allow to put code in the documentation:
>>> 1 + 1
2
>>>
While this doesn't look like much, there is a tool which can scan the docstrings for such patterns and execute this code as unit tests. This makes sure that Python documenation doesn't go out of date (unlike in other languages).
A:
Python does not have "the" tool for generating documentation. One of the available tools I can recommend is epydoc. It supports directives like @type, @param, @rtype, @returns and @raises. The website also has a few examples.
A:
The convention is well documented in PEP 257.
To summarize, add triple-quoted strings as the first statement in any class or function.
There's also some history that's worth a read if you have time in PEP 256.
A:
In python a docstring can be viewed via commands.
class myClass:
"""
This is some documentation for the class.
method1()
method2()
"""
def method1(p1):
....
...
...
def method2():
...
...
v = myClass
you can then view the docstring by using either
v.__doc__
or
help(myClass)
A:
As other people have pointed out, in Python the resource you are looking for is called a docstring.
Other people have suggested reading the documentation, albeit I suggest looking at this link:
the Sphinx project. This is a system, like Sandcastle, that helps you build elaborate documentation; the documentation can be built from docstrings, but that is not mandatory.
hope this helps
| What is the Pythonic way to create informative comments in Python 2.x? | For further clarification, C# has the '///' directive which invokes the super-secret-styled Comments which allow you to have nice comments built into intellisense. Java has the '@' directive that allows you to have nice comments as well.
Does Python have something like this? I hope this question is clear enough, thanks!
| [
"They are called docstrings in Python. See the documentation.\nA nice feature are the code samples (explained here). They allow to put code in the documentation:\n>>> 1 + 1\n2\n>>>\n\nWhile this doesn't look like much, there is a tool which can scan the docstrings for such patterns and execute this code as unit tes... | [
7,
2,
2,
0,
0
] | [] | [] | [
"comments",
"python"
] | stackoverflow_0003987163_comments_python.txt |
Q:
efficient way to compress a numpy array (python)
I am looking for an efficient way to compress a numpy array.
I have an array like: dtype=[('name', (np.str_, 8)), ('job', (np.str_, 8)), ('income', np.uint32)] (my favourite example).
if I'm doing something like this: my_array.compress(my_array['income'] > 10000) I'm getting a new array with only incomes > 10000, and it's quite quick.
But if I would like to filter jobs in a list, it doesn't work!
my_array.compress(my_array['job'] in ['this', 'that'])
Error:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
So I have to do something like this:
np.array([x for x in my_array if x['job'] in ['this', 'that']])
This is both ugly and inefficient!
Do you have an idea to make it efficient?
A:
It's not quite as nice as what you'd like, but I think you can do:
mask = my_array['job'] == 'this'
for condition in ['that', 'other']:
mask = numpy.logical_or(mask,my_array['job'] == condition)
selected_array = my_array[mask]
A:
The best way to compress a numpy array is to use pytables. It is the defacto standard when it comes to handling a large amount of numerical data.
import tables as t
hdf5_file = t.openFile('outfile.hdf5')
hdf5_file.createArray ......
hdf5_file.close()
A:
If you're looking for a numpy-only solution, I don't think you'll get it. Still, although it does lots of work under the covers, consider whether the tabular package might be able to do what you want in a less "ugly" fashion. I'm not sure you'll get more "efficient" without writing a C extension yourself.
By the way, I think this is both efficient enough and pretty enough for just about any real case.
my_array.compress([x in ['this', 'that'] for x in my_array['job']])
As an extra step in making this less ugly and more efficient, you would presumably not have a hardcoded list in the middle, so I would use a set instead, as it's much faster to search than a list if the list has more than a few items:
job_set = set(['this', 'that'])
my_array.compress([x in job_set for x in my_array['job']])
If you don't think this is efficient enough, I'd advise benchmarking so you'll have confidence that you're spending your time wisely as you try to make it even more efficient.
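For what it's worth, later NumPy releases added a vectorized membership test, np.isin (np.in1d in older versions), which drops the Python-level loop entirely; a sketch with made-up sample data:

```python
import numpy as np

my_array = np.array([('joe', 'this', 20000),
                     ('ann', 'other', 15000),
                     ('bob', 'that', 90000)],
                    dtype=[('name', 'U8'), ('job', 'U8'), ('income', np.uint32)])

# boolean mask: True where the 'job' field is in the allowed set
mask = np.isin(my_array['job'], ['this', 'that'])
print(my_array[mask]['name'])  # ['joe' 'bob']
```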
| efficient way to compress a numpy array (python) | I am looking for an efficient way to compress a numpy array.
I have an array like: dtype=[('name', (np.str_, 8)), ('job', (np.str_, 8)), ('income', np.uint32)] (my favourite example).
if I'm doing something like this: my_array.compress(my_array['income'] > 10000) I'm getting a new array with only incomes > 10000, and it's quite quick.
But if I would like to filter jobs in a list, it doesn't work!
my_array.compress(my_array['job'] in ['this', 'that'])
Error:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
So I have to do something like this:
np.array([x for x in my_array if x['job'] in ['this', 'that']])
This is both ugly and inefficient!
Do you have an idea to make it efficient?
| [
"It's not quite as nice as what you'd like, but I think you can do:\nmask = my_array['job'] == 'this'\nfor condition in ['that', 'other']:\n mask = numpy.logical_or(mask,my_array['job'] == condition)\nselected_array = my_array[mask]\n\n",
"The best way to compress a numpy array is to use pytables. It is the defa... | [
1,
1,
0
] | [] | [] | [
"compression",
"filter",
"numpy",
"python"
] | stackoverflow_0001870871_compression_filter_numpy_python.txt |
Q:
Beautiful Soup: Get the Contents of Sub-Nodes
I have following python code:
def scrapeSite(urlToCheck):
html = urllib2.urlopen(urlToCheck).read()
from BeautifulSoup import BeautifulSoup
soup = BeautifulSoup(html)
tdtags = soup.findAll('td', { "class" : "c" })
for t in tdtags:
print t.encode('latin1')
This will return me following html code:
<td class="c">
<a href="more.asp">FOO</a>
</td>
<td class="c">
<a href="alotmore.asp">BAR</a>
</td>
I'd like to get the text inside the a node (e.g. FOO or BAR), which would be t.contents.contents. Unfortunately it doesn't work that easily :)
Does anyone have an idea how to solve that?
Thanks a lot, any help is appreciated!
Cheers,
Joseph
A:
In this case, you can use t.contents[1].contents[0] to get FOO and BAR.
The thing is that contents returns a list with all elements (Tags and NavigableStrings), if you print contents, you can see it's something like
[u'\n', <a href="more.asp">FOO</a>, u'\n']
So, to get to the actual tag you need to access contents[1] (if you have the exact same contents, this can vary depending on the source HTML), after you've find the proper index you can use contents[0] afterwards to get the string inside the a tag.
Now, as this depends on the exact contents of the HTML source, it's very fragile. A more generic and robust solution would be to use find() again to find the 'a' tag, via t.find('a') and then use the contents list to get the values in it t.find('a').contents[0] or just t.find('a').contents to get the whole list.
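Putting that together as a self-contained sketch (shown with the modern bs4 import; the old 3.x series uses from BeautifulSoup import BeautifulSoup instead):

```python
from bs4 import BeautifulSoup

html = '''
<td class="c"><a href="more.asp">FOO</a></td>
<td class="c"><a href="alotmore.asp">BAR</a></td>
'''
soup = BeautifulSoup(html, 'html.parser')
# find the 'a' tag inside each matching 'td', then take its text content
labels = [t.find('a').contents[0] for t in soup.find_all('td', {'class': 'c'})]
print(labels)  # ['FOO', 'BAR']
```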
A:
For your specific example, pyparsing's makeHTMLTags can be useful, since they are tolerant of many HTML variabilities in HTML tags, but provide a handy structure to the results:
html = """
<td class="c">
<a href="more.asp">FOO</a>
</td>
<td class="c">
<a href="alotmore.asp">BAR</a>
</td>
<td class="d">
<a href="alotmore.asp">BAZZ</a>
</td>
"""
from pyparsing import *
td,tdEnd = makeHTMLTags("td")
a,aEnd = makeHTMLTags("a")
td.setParseAction(withAttribute(**{"class":"c"}))
pattern = td + a("anchor") + SkipTo(aEnd)("aBody") + aEnd + tdEnd
for t,_,_ in pattern.scanString(html):
print t.aBody, '->', t.anchor.href
prints:
FOO -> more.asp
BAR -> alotmore.asp
| Beautiful Soup: Get the Contents of Sub-Nodes | I have following python code:
def scrapeSite(urlToCheck):
html = urllib2.urlopen(urlToCheck).read()
from BeautifulSoup import BeautifulSoup
soup = BeautifulSoup(html)
tdtags = soup.findAll('td', { "class" : "c" })
for t in tdtags:
print t.encode('latin1')
This will return me following html code:
<td class="c">
<a href="more.asp">FOO</a>
</td>
<td class="c">
<a href="alotmore.asp">BAR</a>
</td>
I'd like to get the text inside the a node (e.g. FOO or BAR), which would be t.contents.contents. Unfortunately it doesn't work that easily :)
Does anyone have an idea how to solve that?
Thanks a lot, any help is appreciated!
Cheers,
Joseph
| [
"In this case, you can use t.contents[1].contents[0] to get FOO and BAR. \nThe thing is that contents returns a list with all elements (Tags and NavigableStrings), if you print contents, you can see it's something like\n[u'\\n', <a href=\"more.asp\">FOO</a>, u'\\n']\nSo, to get to the actual tag you need to access ... | [
3,
1
] | [] | [] | [
"beautifulsoup",
"python"
] | stackoverflow_0003987732_beautifulsoup_python.txt |
Q:
Redefining python list
Is it possible to redefine the behavior of a Python list from Python, that is, without having to write anything in the Python source code?
A:
You could always create your own subclass inheriting from list.
An example (although you would probably never want to use this):
class new_list(list):
'''A list that will return -1 for non-existent items.'''
def __getitem__(self, i):
if i >= len(self):
return -1
else:
return super(new_list, self).__getitem__(i)
l = new_list([1,2,3])
l[2] #returns 3 just like a normal list
l[3] #returns -1 instead of raising IndexError
| Redefining python list | is it possible to redefine the behavior of a Python List from Python, I mean, without having to write anything in the Python sourcecode?
| [
"You could always create your own subclass inheriting from list.\nAn example (although you would probably never want to use this):\nclass new_list(list):\n '''A list that will return -1 for non-existent items.'''\n def __getitem__(self, i):\n if i >= len(self):\n return -1\n else:\n ... | [
3
] | [] | [] | [
"python"
] | stackoverflow_0003988069_python.txt |
Q:
add multiple columns to an sqlite database in python
I want to create a table with multiple columns, say about 100 columns, in an sqlite database. Is there a better solution than naming each column individually? I am trying the following:
conn = sqlite3.connect('trialDB')
cur = conn.cursor()
listOfVars = ("added0",)
for i in range(1,100):
newVar = ("added" + str(i),)
listOfVars = listOfVars + newVar
print listOfVars
for i in listOfVars:
cur.execute('''ALTER TABLE testTable ADD COLUMN ? TEXT''',(i,))
conn.commit()
cur.close()
conn.close()
But I get the following error:
OperationalError: near "?": syntax error
Can someone please suggest how I can do this? Thanks!
A:
I guess you could do it through string formatting, like this:
for i in listOfVars:
cur.execute('''ALTER TABLE testTable ADD COLUMN %s TEXT''' % i)
But having 100 columns in an sqlite db is certainly not common; are you sure you have a proper db design?
| add multiple columns to an sqlite database in python | I want to create a table with multiple columns, say about 100 columns, in an sqlite database. Is there a better solution than naming each column individually? I am trying the following:
conn = sqlite3.connect('trialDB')
cur = conn.cursor()
listOfVars = ("added0",)
for i in range(1,100):
newVar = ("added" + str(i),)
listOfVars = listOfVars + newVar
print listOfVars
for i in listOfVars:
cur.execute('''ALTER TABLE testTable ADD COLUMN ? TEXT''',(i,))
conn.commit()
cur.close()
conn.close()
But I get the following error:
OperationalError: near "?": syntax error
Can someone please suggest how I can do this? Thanks!
| [
"I guess you could do it through string formatting, like this :\nfor i in listOfVars:\n cur.execute('''ALTER TABLE testTable ADD COLUMN %s TEXT''' % i)\n\nBut having 100 columns in a sqlite db is certainly not common, are you sure of having a proper db design ?\n"
] | [
6
] | [] | [] | [
"python",
"sqlite"
] | stackoverflow_0003988055_python_sqlite.txt |
Q:
Custom field's to_python not working? - Django
I'm trying to implement an encrypted char field.
I'm using pydes for encryption
This is what I have:
from pyDes import triple_des, PAD_PKCS5
from binascii import unhexlify as unhex
from binascii import hexlify as dohex
class BaseEncryptedField(models.CharField):
def __init__(self, *args, **kwargs):
self.td = triple_des(unhex('c35414909168354f77fe89816c6b625bde4fc9ee51529f2f'))
super(BaseEncryptedField, self).__init__(*args, **kwargs)
def to_python(self, value):
return self.td.decrypt(unhex(value), padmode=PAD_PKCS5)
def get_db_prep_value(self, value):
return dohex(self.td.encrypt(value, padmode=PAD_PKCS5))
The field is saved encrypted in the database successfully,
but when retrieved it does not print out the decrypted version.
Any ideas?
A:
You've forgotten to set the metaclass:
class BaseEncryptedField(models.CharField):
__metaclass__ = models.SubfieldBase
... etc ...
As the documentation explains, to_python is only called when the SubfieldBase metaclass is used.
| Custom field's to_python not working? - Django | I'm trying to implement an encrypted char field.
I'm using pydes for encryption
This is what I have:
from pyDes import triple_des, PAD_PKCS5
from binascii import unhexlify as unhex
from binascii import hexlify as dohex
class BaseEncryptedField(models.CharField):
def __init__(self, *args, **kwargs):
self.td = triple_des(unhex('c35414909168354f77fe89816c6b625bde4fc9ee51529f2f'))
super(BaseEncryptedField, self).__init__(*args, **kwargs)
def to_python(self, value):
return self.td.decrypt(unhex(value), padmode=PAD_PKCS5)
def get_db_prep_value(self, value):
return dohex(self.td.encrypt(value, padmode=PAD_PKCS5))
The field is saved encrypted in the database successfully,
but when retrieved it does not print out the decrypted version.
Any ideas?
| [
"You've forgotten to set the metaclass:\nclass BaseEncryptedField(models.CharField):\n\n __metaclass__ = models.SubfieldBase\n\n ... etc ...\n\nAs the documentation explains, to_python is only called when the SubfieldBase metaclass is used.\n"
] | [
16
] | [] | [] | [
"django",
"django_models",
"encryption",
"python"
] | stackoverflow_0003988171_django_django_models_encryption_python.txt |
Q:
Checking for duplicates
I have a small problem. I am trying to check whether a status value already exists, so that I do not create another instance of it, but I am having some trouble. For example, if the project status was once "Quote", I do not want to be able to make the status "Quote" again. Right now, I check that if the user selects edit, then clicks submit, the status doesn't duplicate. However, if the user selects another status, like "Completed", nothing stops them from going back in and selecting "Quote" again.
models.py
class Status(models.Model):
project = models.ForeignKey(Project, related_name='status')
value = models.CharField(max_length=20, choices=STATUS_CHOICES, verbose_name='Status')
date_created= models.DateTimeField(auto_now=True)
class Project(models.Model):
...
views.py
if form.is_valid():
project = form.save(commit=False)
project.created_by = request.user
project.save()
old_status = project.current_status()
if not old_status or old_status.value != form.cleaned_data.get('status', None):
#add status instance
project.status.create(
value = form.cleaned_data.get('status', None)
)
return HttpResponseRedirect('/project/')
Any help, or pointing me in the right direction would be much appreciated.
Thanks everyone!
A:
value = models.CharField(max_length=20, choices=STATUS_CHOICES, verbose_name='Status', unique=True)
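Note that unique=True makes the value unique across all projects. If, as the question suggests, the value only needs to be unique per project, Django's Meta option unique_together = (('project', 'value'),) expresses that instead. Either way, the option becomes a UNIQUE constraint at the database level, which this plain sqlite3 sketch (no Django required; table/column names mirror the Status model) demonstrates:

```python
# What unique=True (or Meta.unique_together) becomes at the database
# level: a UNIQUE constraint that rejects duplicate rows.  Plain sqlite3
# is used so the sketch runs without Django; table/column names mirror
# the Status model.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE status (
        project_id INTEGER,
        value      TEXT,
        UNIQUE (project_id, value)  -- uniqueness per project
    )
""")
conn.execute("INSERT INTO status VALUES (1, 'quote')")
conn.execute("INSERT INTO status VALUES (2, 'quote')")  # other project: allowed
try:
    conn.execute("INSERT INTO status VALUES (1, 'quote')")  # duplicate
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

A database-level constraint also closes the race window left by the view's "check then create" logic, since two concurrent requests cannot both insert the same status.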
| Checking for duplicates | I have a small problem. I am trying to check whether a status value already exists, so that I do not create another instance of it, but I am having some trouble. For example, if the project status was once "Quote", I do not want to be able to make the status "Quote" again. Right now, I check that if the user selects edit, then clicks submit, the status doesn't duplicate. However, if the user selects another status, like "Completed", nothing stops them from going back in and selecting "Quote" again.
models.py
class Status(models.Model):
project = models.ForeignKey(Project, related_name='status')
value = models.CharField(max_length=20, choices=STATUS_CHOICES, verbose_name='Status')
date_created= models.DateTimeField(auto_now=True)
class Project(models.Model):
...
views.py
if form.is_valid():
project = form.save(commit=False)
project.created_by = request.user
project.save()
old_status = project.current_status()
if not old_status or old_status.value != form.cleaned_data.get('status', None):
#add status instance
project.status.create(
value = form.cleaned_data.get('status', None)
)
return HttpResponseRedirect('/project/')
Any help, or pointing me in the right direction would be much appreciated.
Thanks everyone!
| [
"value = models.CharField(max_length=20, choices=STATUS_CHOICES, verbose_name='Status', unique=True)\n\n"
] | [
2
] | [] | [] | [
"django",
"django_forms",
"django_models",
"python"
] | stackoverflow_0003988418_django_django_forms_django_models_python.txt |
Q:
sqlalchemy many-to-many, but inverse?
I'm sorry if inverse is not the preferred nomenclature, which may have hindered my searching. In any case, I'm dealing with two sqlalchemy declarative classes which are in a many-to-many relationship. The first is Account, and the second is Collection. Users "purchase" collections, but I want to show the first 10 collections the user hasn't purchased yet.
from sqlalchemy import *
from sqlalchemy.orm import scoped_session, sessionmaker, relation
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
engine = create_engine('sqlite:///:memory:', echo=True)
Session = sessionmaker(bind=engine)
account_to_collection_map = Table('account_to_collection_map', Base.metadata,
Column('account_id', Integer, ForeignKey('account.id')),
Column('collection_id', Integer, ForeignKey('collection.id')))
class Account(Base):
__tablename__ = 'account'
id = Column(Integer, primary_key=True)
email = Column(String)
collections = relation("Collection", secondary=account_to_collection_map)
# use only for querying?
dyn_coll = relation("Collection", secondary=account_to_collection_map, lazy='dynamic')
def __init__(self, email):
self.email = email
def __repr__(self):
return "<Acc(id=%s email=%s)>" % (self.id, self.email)
class Collection(Base):
__tablename__ = 'collection'
id = Column(Integer, primary_key=True)
slug = Column(String)
def __init__(self, slug):
self.slug = slug
def __repr__(self):
return "<Coll(id=%s slug=%s)>" % (self.id, self.slug)
So, with account.collections, I can get all collections, and with dyn_coll.limit(1).all() I can apply queries to the list of collections...but how do I do the inverse? I'd like to get the first 10 collections that the account does not have mapped.
Any help is incredibly appreciated. Thanks!
A:
I would not use the relationship for this purpose, as technically it is not a relationship you are building (so all the tricks for keeping it synchronized on both sides etc. would not work).
IMO, the cleanest way would be to define a simple query which will return you the objects you are looking for:
class Account(Base):
...
# please note the added *backref*, which is needed to build
#the query in Account.get_other_collections(...)
collections = relation("Collection", secondary=account_to_collection_map, backref="accounts")
def get_other_collections(self, maxrows=None):
""" Returns the collections this Account does not have yet. """
q = Session.object_session(self).query(Collection)
q = q.filter(~Collection.accounts.any(id=self.id))
# note: you might also want to order the results
return q[:maxrows] if maxrows else q.all()
...
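For reference, the filter ~Collection.accounts.any(id=self.id) compiles to a NOT EXISTS anti-join over the association table. A plain-sqlite3 sketch of the equivalent SQL (table and column names follow the mapping in the question; the account id 42 is made up for the demo):

```python
# Plain-SQL equivalent of q.filter(~Collection.accounts.any(id=self.id)):
# a NOT EXISTS anti-join over the association table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE collection (id INTEGER PRIMARY KEY, slug TEXT);
    CREATE TABLE account_to_collection_map
        (account_id INTEGER, collection_id INTEGER);
    INSERT INTO collection VALUES (1, 'a'), (2, 'b'), (3, 'c');
    INSERT INTO account_to_collection_map VALUES (42, 1);  -- 42 owns 'a'
""")
rows = conn.execute("""
    SELECT c.slug FROM collection c
    WHERE NOT EXISTS (
        SELECT 1 FROM account_to_collection_map m
        WHERE m.collection_id = c.id AND m.account_id = ?
    )
    ORDER BY c.id
    LIMIT 10
""", (42,)).fetchall()
print(rows)  # -> [('b',), ('c',)]: collections account 42 has not purchased
```

The LIMIT mirrors the maxrows slicing in get_other_collections, so the "first 10 unpurchased collections" requirement falls out directly.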
| sqlalchemy many-to-many, but inverse? | I'm sorry if inverse is not the preferred nomenclature, which may have hindered my searching. In any case, I'm dealing with two sqlalchemy declarative classes which are in a many-to-many relationship. The first is Account, and the second is Collection. Users "purchase" collections, but I want to show the first 10 collections the user hasn't purchased yet.
from sqlalchemy import *
from sqlalchemy.orm import scoped_session, sessionmaker, relation
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
engine = create_engine('sqlite:///:memory:', echo=True)
Session = sessionmaker(bind=engine)
account_to_collection_map = Table('account_to_collection_map', Base.metadata,
Column('account_id', Integer, ForeignKey('account.id')),
Column('collection_id', Integer, ForeignKey('collection.id')))
class Account(Base):
__tablename__ = 'account'
id = Column(Integer, primary_key=True)
email = Column(String)
collections = relation("Collection", secondary=account_to_collection_map)
# use only for querying?
dyn_coll = relation("Collection", secondary=account_to_collection_map, lazy='dynamic')
def __init__(self, email):
self.email = email
def __repr__(self):
return "<Acc(id=%s email=%s)>" % (self.id, self.email)
class Collection(Base):
__tablename__ = 'collection'
id = Column(Integer, primary_key=True)
slug = Column(String)
def __init__(self, slug):
self.slug = slug
def __repr__(self):
return "<Coll(id=%s slug=%s)>" % (self.id, self.slug)
So, with account.collections, I can get all collections, and with dyn_coll.limit(1).all() I can apply queries to the list of collections...but how do I do the inverse? I'd like to get the first 10 collections that the account does not have mapped.
Any help is incredibly appreciated. Thanks!
| [
"I would not use the relationship for the purpose, as technically it it not a relationship you are building (so all the tricks of keeping it synchronized on both sides etc would not work).\nIMO, the cleanest way would be to define a simple query which will return you the objects you are looking for:\nclass Account(... | [
5
] | [] | [] | [
"python",
"sqlalchemy"
] | stackoverflow_0003983593_python_sqlalchemy.txt |
Q:
Django Formset management-form validation error
I have a form and a formset on my template. The problem is that the formset is throwing validation error claiming that the management form is "missing or has been tampered with".
Here is my view
@login_required
def home(request):
user = UserProfile.objects.get(pk=request.session['_auth_user_id'])
blogz = list(blog.objects.filter(deleted='0'))
delblog = modelformset_factory(blog, exclude=('poster','date' ,'title','content'))
if request.user.is_staff== True:
staff = 1
else:
staff = 0
staffis = 1
if request.method == 'POST':
delblogformset = delblog(request.POST)
if delblogformset.is_valid():
delblogformset.save()
return HttpResponseRedirect('/home')
else:
delblogformset = delblog(queryset=blog.objects.filter( deleted='0'))
blogform = BlogForm(request.POST)
if blogform.is_valid():
blogform.save()
return HttpResponseRedirect('/home')
else:
blogform = BlogForm(initial = {'poster':user.id})
blogs= zip(blogz,delblogformset.forms)
paginator = Paginator(blogs, 10) # Show 10 blogs per page
# Make sure page request is an int. If not, deliver first page.
try:
page = int(request.GET.get('page', '1'))
except ValueError:
page = 1
# If page request (9999) is out of range, deliver last page of results.
try:
blogs = paginator.page(page)
except (EmptyPage, InvalidPage):
blogs = paginator.page(paginator.num_pages)
return render_to_response('home.html', {'user':user, 'blogform':blogform, 'staff': staff, 'staffis': staffis, 'blog':blogs, 'delblog':delblogformset}, context_instance = RequestContext( request ))
my template
{%block content%}
<h2>Home</h2>
{% ifequal staff staffis %}
{% if form.errors %}
<ul>
{% for field in form %}
<H3 class="title">
<p class="error"> {% if field.errors %}<li>{{ field.errors|striptags }}</li>{% endif %}</p>
</H3>
{% endfor %}
</ul>
{% endif %}
<h3>Post a Blog to the Front Page</h3>
<form method="post" id="form2" action="" class="infotabs accfrm">
{{ blogform.as_p }}
<input type="submit" value="Submit" />
</form>
<br>
<br>
{% endifequal %}
<div class="pagination">
<span class="step-links">
{% if blog.has_previous %}
<a href="?page={{ blog.previous_page_number }}">previous</a>
{% endif %}
<span class="current">
Page {{ blog.number }} of {{ blog.paginator.num_pages }}.
</span>
{% if blog.has_next %}
<a href="?page={{ blog.next_page_number }}">next</a>
{% endif %}
</span>
<form method="post" action="" class="usertabs accfrm">
{{delblog.management_form}}
{% for b, form in blog.object_list %}
<div class="blog">
<h3>{{b.title}}</h3>
<p>{{b.content}}</p>
<p>posted by <strong>{{b.poster}}</strong> on {{b.date}}</p>
{% ifequal staff staffis %}<p>{{form.as_p}}<input type="submit" value="Delete" /></p>{% endifequal %}
</div>
{% endfor %}
</form>
{%endblock%}
Here is the Traceback
ValidationError at /home/
Request Method: POST
Request URL: http://localhost:8000/home/
Exception Type: ValidationError
Exception Value:
Exception Location: /usr/lib/python2.6/site-packages/django/forms/formsets.py in _management_form, line 54
Python Executable: /usr/bin/python
Python Version: 2.6.2
Python Path: ['/home/projects/acms', '/usr/lib/python2.6/site-packages/django_socialregistration-0.2-py2.6.egg', '/usr/lib/python26.zip', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/usr/lib/python2.6/lib-old', '/usr/lib/python2.6/lib-dynload', '/usr/lib/python2.6/site-packages', '/usr/lib/python2.6/site-packages/Numeric', '/usr/lib/python2.6/site-packages/PIL', '/usr/lib/python2.6/site-packages/gst-0.10', '/usr/lib/python2.6/site-packages/gtk-2.0', '/usr/lib/python2.6/site-packages/webkit-1.0']
Server time: Mon, 29 Mar 2010 12:02:43 +0300
Traceback Switch to copy-and-paste view
* /usr/lib/python2.6/site-packages/django/core/handlers/base.py in get_response
85. # Apply view middleware
86. for middleware_method in self._view_middleware:
87. response = middleware_method(request, callback, callback_args, callback_kwargs)
88. if response:
89. return response
90.
91. try:
92. response = callback(request, *callback_args, **callback_kwargs) ...
93. except Exception, e:
94. # If the view raised an exception, run it through exception
95. # middleware, and if the exception middleware returns a
96. # response, use that. Otherwise, reraise the exception.
97. for middleware_method in self._exception_middleware:
98. response = middleware_method(request, e)
▶ Local vars
Variable Value
callback
<django.contrib.auth.decorators._CheckLogin object at 0xb655ad2c>
callback_args
()
callback_kwargs
{}
e
ValidationError()
exc_info
(<class 'django.forms.util.ValidationError'>, ValidationError(), <traceback object at 0xb6630a2c>)
exceptions
<module 'django.core.exceptions' from '/usr/lib/python2.6/site-packages/django/core/exceptions.pyc'>
middleware_method
<bound method TransactionMiddleware.process_exception of <django.middleware.transaction.TransactionMiddleware object at 0xb676ff6c>>
receivers
[(<function _rollback_on_exception at 0x8c845dc>, None)]
request
<WSGIRequest GET:<QueryDict: {}>, POST:<QueryDict: {u'content': [u'test'], u'poster': [u'4'], u'title': [u'test']}>, COOKIES:{'sessionid': '8f4b4fa8411cc5baa05c2016a8ad00f4'}, META:{'COLORTERM': 'gnome-terminal', 'CONTENT_LENGTH': '32', 'CONTENT_TYPE': 'application/x-www-form-urlencoded', 'CVS_RSH': 'ssh', 'DBUS_SESSION_BUS_ADDRESS': 'unix:abstract=/tmp/dbus-nKl1u8UWGs,guid=fabac1ba0d651ceae76e1d9a4bafa535', 'DESKTOP_SESSION': 'gnome', 'DISPLAY': ':0.0', 'DJANGO_SETTINGS_MODULE': 'acms.settings', 'GATEWAY_INTERFACE': 'CGI/1.1', 'GDMSESSION': 'gnome', 'GDM_KEYBOARD_LAYOUT': 'us', 'GDM_LANG': 'en_GB.UTF-8', 'GNOME_DESKTOP_SESSION_ID': 'this-is-deprecated', 'GNOME_KEYRING_PID': '1511', 'GNOME_KEYRING_SOCKET': '/tmp/keyring-uvdktc/socket', 'GTK_RC_FILES': '/etc/gtk/gtkrc:/home/user/.gtkrc-1.2-gnome2', 'G_BROKEN_FILENAMES': '1', 'HISTCONTROL': 'ignoreboth', 'HISTSIZE': '1000', 'HOME': '/home/user', 'HOSTNAME': 'desktop.theblackout', 'HTTP_ACCEPT': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'HTTP_ACCEPT_CHARSET': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7', 'HTTP_ACCEPT_ENCODING': 'gzip,deflate', 'HTTP_ACCEPT_LANGUAGE': 'en-us,en;q=0.5', 'HTTP_CONNECTION': 'keep-alive', 'HTTP_COOKIE': 'sessionid=8f4b4fa8411cc5baa05c2016a8ad00f4', 'HTTP_HOST': 'localhost:8000', 'HTTP_KEEP_ALIVE': '115', 'HTTP_REFERER': 'http://localhost:8000/home/', 'HTTP_USER_AGENT': 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2) Gecko/20100218 Fedora/3.6.1-1.fc13 Firefox/3.6', 'IMSETTINGS_INTEGRATE_DESKTOP': 'yes', 'IMSETTINGS_MODULE': 'none', 'KDEDIRS': '/usr', 'KDE_IS_PRELINKED': '1', 'KMIX_PULSEAUDIO_DISABLE': '1', 'LANG': 'en_GB.UTF-8', 'LESSOPEN': '|/usr/bin/lesspipe.sh %s', 'LOGNAME': 'user', 'LS_COLORS': 
'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.tbz=01;31:*.tbz2=01;31:*.bz=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:', 'MAIL': '/var/spool/mail/user', 'OLDPWD': '/home/user', 'ORBIT_SOCKETDIR': '/tmp/orbit-user', 'PATH': '/usr/lib/qt-3.3/bin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/lib/ccache:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/real/RealPlayer:/home/user/bin:/opt/real/RealPlayer', 'PATH_INFO': u'/home/', 'PWD': '/home/projects/acms', 'QTDIR': '/usr/lib/qt-3.3', 'QTINC': '/usr/lib/qt-3.3/include', 'QTLIB': '/usr/lib/qt-3.3/lib', 'QT_IM_MODULE': 'xim', 'QUERY_STRING': '', 'REMOTE_ADDR': '127.0.0.1', 'REMOTE_HOST': '', 'REQUEST_METHOD': 'POST', 'RUN_MAIN': 'true', 'SCRIPT_NAME': u'', 'SERVER_NAME': 'localhost.localdomain', 'SERVER_PORT': '8000', 'SERVER_PROTOCOL': 'HTTP/1.1', 'SERVER_SOFTWARE': 
'WSGIServer/0.1 Python/2.6.2', 'SESSION_MANAGER': 'local/unix:@/tmp/.ICE-unix/1518,unix/unix:/tmp/.ICE-unix/1518', 'SHELL': '/bin/bash', 'SHLVL': '2', 'SSH_ASKPASS': '/usr/libexec/openssh/gnome-ssh-askpass', 'SSH_AUTH_SOCK': '/tmp/keyring-uvdktc/socket.ssh', 'TERM': 'xterm', 'TZ': 'Africa/Nairobi', 'USER': 'user', 'USERNAME': 'user', 'WINDOWID': '79691779', 'XAUTHORITY': '/var/run/gdm/auth-for-user-YHprr5/database', 'XDG_SESSION_COOKIE': 'b52f8ef12c1cf7be85729e5e4ae08729-1269802292.411279-1821712829', 'XMODIFIERS': '@im=none', '_': '/usr/bin/python', 'wsgi.errors': <open file '<stderr>', mode 'w' at 0xb76b70c0>, 'wsgi.file_wrapper': <class 'django.core.servers.basehttp.FileWrapper'>, 'wsgi.input': <socket._fileobject object at 0xb675f8b4>, 'wsgi.multiprocess': False, 'wsgi.multithread': True, 'wsgi.run_once': False, 'wsgi.url_scheme': 'http', 'wsgi.version': (1, 0)}>
resolver
<RegexURLResolver acms.urls (None:None) ^/>
response
None
self
<django.core.handlers.wsgi.WSGIHandler object at 0xb6755acc>
settings
<django.conf.LazySettings object at 0xb740856c>
urlconf
'acms.urls'
urlresolvers
<module 'django.core.urlresolvers' from '/usr/lib/python2.6/site-packages/django/core/urlresolvers.pyc'>
* /usr/lib/python2.6/site-packages/django/contrib/auth/decorators.py in __call__
71.
72. def __get__(self, obj, cls=None):
73. view_func = self.view_func.__get__(obj, cls)
74. return _CheckLogin(view_func, self.test_func, self.login_url, self.redirect_field_name)
75.
76. def __call__(self, request, *args, **kwargs):
77. if self.test_func(request.user):
78. return self.view_func(request, *args, **kwargs) ...
79. path = urlquote(request.get_full_path())
80. tup = self.login_url, self.redirect_field_name, path
81. return HttpResponseRedirect('%s?%s=%s' % tup)
▶ Local vars
Variable Value
args
()
kwargs
{}
request
<WSGIRequest GET:<QueryDict: {}>, POST:<QueryDict: {u'content': [u'test'], u'poster': [u'4'], u'title': [u'test']}>, COOKIES:{'sessionid': '8f4b4fa8411cc5baa05c2016a8ad00f4'}, META:{...}>
self
<django.contrib.auth.decorators._CheckLogin object at 0xb655ad2c>
* /home/projects/acms/../acms/cms/views.py in home
45. if request.user.is_staff== True:
46. staff = 1
47. else:
48. staff = 0
49. staffis = 1
50.
51. if request.method == 'POST':
52. delblogformset = delblog(request.POST) ...
53. if delblogformset.is_valid():
54. delblogformset.save()
55. return HttpResponseRedirect('/home')
56.
57. else:
58. delblogformset = delblog(queryset=blog.objects.filter( deleted='0'))
▶ Local vars
Variable Value
blogz
Error in formatting: %d format: a number is required, not unicode
delblog
<class 'django.forms.formsets.blogFormFormSet'>
request
<WSGIRequest GET:<QueryDict: {}>, POST:<QueryDict: {u'content': [u'test'], u'poster': [u'4'], u'title': [u'test']}>, COOKIES:{'sessionid': '8f4b4fa8411cc5baa05c2016a8ad00f4'}, META:{...}>
staff
1
staffis
1
user
<UserProfile: Treasurer>
* /usr/lib/python2.6/site-packages/django/forms/models.py in __init__
452. model = None
453.
454. def __init__(self, data=None, files=None, auto_id='id_%s', prefix=None,
455. queryset=None, **kwargs):
456. self.queryset = queryset
457. defaults = {'data': data, 'files': files, 'auto_id': auto_id, 'prefix': prefix}
458. defaults.update(kwargs)
459. super(BaseModelFormSet, self).__init__(**defaults) ...
460.
461. def initial_form_count(self):
462. """Returns the number of forms that are required in this FormSet."""
463. if not (self.data or self.files):
464. return len(self.get_queryset())
465. return super(BaseModelFormSet, self).initial_form_count()
▶ Local vars
Variable Value
auto_id
'id_%s'
data
<QueryDict: {u'content': [u'test'], u'poster': [u'4'], u'title': [u'test']}>
defaults
{'auto_id': 'id_%s', 'data': <QueryDict: {u'content': [u'test'], u'poster': [u'4'], u'title': [u'test']}>, 'files': None, 'prefix': None}
files
None
kwargs
{}
prefix
None
queryset
None
self
<django.forms.formsets.blogFormFormSet object at 0xb659bdec>
* /usr/lib/python2.6/site-packages/django/forms/formsets.py in __init__
37. self.data = data
38. self.files = files
39. self.initial = initial
40. self.error_class = error_class
41. self._errors = None
42. self._non_form_errors = None
43. # construct the forms in the formset
44. self._construct_forms() ...
45.
46. def __unicode__(self):
47. return self.as_table()
48.
49. def _management_form(self):
50. """Returns the ManagementForm instance for this FormSet."""
▶ Local vars
Variable Value
auto_id
'id_%s'
data
<QueryDict: {u'content': [u'test'], u'poster': [u'4'], u'title': [u'test']}>
error_class
<class 'django.forms.util.ErrorList'>
files
None
initial
None
prefix
None
self
<django.forms.formsets.blogFormFormSet object at 0xb659bdec>
* /usr/lib/python2.6/site-packages/django/forms/formsets.py in _construct_forms
80. if initial_forms > self.max_num > 0:
81. initial_forms = self.max_num
82. return initial_forms
83.
84. def _construct_forms(self):
85. # instantiate all the forms and put them in self.forms
86. self.forms = []
87. for i in xrange(self.total_form_count()): ...
88. self.forms.append(self._construct_form(i))
89.
90. def _construct_form(self, i, **kwargs):
91. """
92. Instantiates and returns the i-th form instance in a formset.
93. """
▶ Local vars
Variable Value
self
<django.forms.formsets.blogFormFormSet object at 0xb659bdec>
* /usr/lib/python2.6/site-packages/django/forms/formsets.py in total_form_count
59. })
60. return form
61. management_form = property(_management_form)
62.
63. def total_form_count(self):
64. """Returns the total number of forms in this FormSet."""
65. if self.data or self.files:
66. return self.management_form.cleaned_data[TOTAL_FORM_COUNT] ...
67. else:
68. total_forms = self.initial_form_count() + self.extra
69. if total_forms > self.max_num > 0:
70. total_forms = self.max_num
71. return total_forms
72.
▶ Local vars
Variable Value
self
<django.forms.formsets.blogFormFormSet object at 0xb659bdec>
* /usr/lib/python2.6/site-packages/django/forms/formsets.py in _management_form
47. return self.as_table()
48.
49. def _management_form(self):
50. """Returns the ManagementForm instance for this FormSet."""
51. if self.data or self.files:
52. form = ManagementForm(self.data, auto_id=self.auto_id, prefix=self.prefix)
53. if not form.is_valid():
54. raise ValidationError('ManagementForm data is missing or has been tampered with') ...
55. else:
56. form = ManagementForm(auto_id=self.auto_id, prefix=self.prefix, initial={
57. TOTAL_FORM_COUNT: self.total_form_count(),
58. INITIAL_FORM_COUNT: self.initial_form_count()
59. })
60. return form
▶ Local vars
A:
To avoid this error, wrap the binding of POST data to your formset in a try/except block, like so.
from django.core.exceptions import ValidationError # add this to your imports
if request.method == 'POST':
    try:
        delblogformset = delblog(request.POST)
    except ValidationError:
        delblogformset = None
    if delblogformset and delblogformset.is_valid():
        delblogformset.save()
        return HttpResponseRedirect('/home')
The POST request from your blogform lacks the 'ManagementForm' hidden input that is required for your delblogformset, and hence the validation error is thrown. We wrap the binding in a try/except block because we know that if a ValidationError has been raised, then the POST was meant for your blogform and not delblogformset.
For more information see django docs: http://docs.djangoproject.com/en/dev/topics/forms/formsets/#understanding-the-managementform
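Another way to avoid the error entirely is to decide which form the POST was meant for before binding anything, instead of catching the exception. The sketch below is framework-free and not from the original answers: the dicts stand in for request.POST, the 'submit_blog' button name is hypothetical, and 'form-TOTAL_FORMS' is the management-form field Django emits for a formset with the default 'form' prefix.

```python
# Route one POST between a plain form and a formset by inspecting the
# submitted keys. 'submit_blog' is a hypothetical submit-button name;
# 'form-TOTAL_FORMS' comes from the formset's ManagementForm (default prefix).
def route_post(post_data):
    if 'submit_blog' in post_data:
        return 'blogform'
    if 'form-TOTAL_FORMS' in post_data:
        return 'delblogformset'
    return None

print(route_post({'title': 'test', 'submit_blog': 'Submit'}))        # blogform
print(route_post({'form-TOTAL_FORMS': '2', 'form-INITIAL_FORMS': '2'}))  # delblogformset
```

In the view you would then only call `delblog(request.POST)` when the POST actually belongs to the formset, so the ManagementForm check never sees foreign data.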
A:
I figured this out after running into the same error. In situations where you are using a form and a formset in the same view and template, the form should be processed before the formset to avoid raising a formset validation error.
Also, importantly: if request.method == 'POST': should only be used for the formset. So the above will appear as below in the view:
@login_required
def home(request):
    user = UserProfile.objects.get(pk=request.session['_auth_user_id'])
    blogz = list(blog.objects.filter(deleted='0'))
    delblog = modelformset_factory(blog, exclude=('poster', 'date', 'title', 'content'))
    if request.user.is_staff == True:
        staff = 1
    else:
        staff = 0
    staffis = 1
    # Process the plain form first, before the formset.
    blogform = BlogForm(request.POST)
    if blogform.is_valid():
        blogform.save()
        return HttpResponseRedirect('/home')
    else:
        blogform = BlogForm(initial={'poster': user.id})
    # Only the formset binding is guarded by the request-method check.
    if request.method == 'POST':
        delblogformset = delblog(request.POST)
        if delblogformset.is_valid():
            delblogformset.save()
            return HttpResponseRedirect('/home')
    else:
        delblogformset = delblog(queryset=blog.objects.filter(deleted='0'))
    blogs = zip(blogz, delblogformset.forms)
    paginator = Paginator(blogs, 10)  # Show 10 blogs per page
    # Make sure the page request is an int. If not, deliver the first page.
    try:
        page = int(request.GET.get('page', '1'))
    except ValueError:
        page = 1
    # If the page request (e.g. 9999) is out of range, deliver the last page of results.
    try:
        blogs = paginator.page(page)
    except (EmptyPage, InvalidPage):
        blogs = paginator.page(paginator.num_pages)
    return render_to_response('home.html', {'user': user, 'blogform': blogform, 'staff': staff, 'staffis': staffis, 'blog': blogs, 'delblog': delblogformset}, context_instance=RequestContext(request))
| Django Formset management-form validation error | I have a form and a formset on my template. The problem is that the formset is throwing a validation error claiming that the management form is "missing or has been tampered with".
Here is my view
@login_required
def home(request):
    user = UserProfile.objects.get(pk=request.session['_auth_user_id'])
    blogz = list(blog.objects.filter(deleted='0'))
    delblog = modelformset_factory(blog, exclude=('poster', 'date', 'title', 'content'))
    if request.user.is_staff == True:
        staff = 1
    else:
        staff = 0
    staffis = 1
    if request.method == 'POST':
        delblogformset = delblog(request.POST)
        if delblogformset.is_valid():
            delblogformset.save()
            return HttpResponseRedirect('/home')
    else:
        delblogformset = delblog(queryset=blog.objects.filter(deleted='0'))
    blogform = BlogForm(request.POST)
    if blogform.is_valid():
        blogform.save()
        return HttpResponseRedirect('/home')
    else:
        blogform = BlogForm(initial={'poster': user.id})
    blogs = zip(blogz, delblogformset.forms)
    paginator = Paginator(blogs, 10)  # Show 10 blogs per page
    # Make sure the page request is an int. If not, deliver the first page.
    try:
        page = int(request.GET.get('page', '1'))
    except ValueError:
        page = 1
    # If the page request (e.g. 9999) is out of range, deliver the last page of results.
    try:
        blogs = paginator.page(page)
    except (EmptyPage, InvalidPage):
        blogs = paginator.page(paginator.num_pages)
    return render_to_response('home.html', {'user': user, 'blogform': blogform, 'staff': staff, 'staffis': staffis, 'blog': blogs, 'delblog': delblogformset}, context_instance=RequestContext(request))
my template
{%block content%}
<h2>Home</h2>
{% ifequal staff staffis %}
{% if form.errors %}
<ul>
{% for field in form %}
<H3 class="title">
<p class="error"> {% if field.errors %}<li>{{ field.errors|striptags }}</li>{% endif %}</p>
</H3>
{% endfor %}
</ul>
{% endif %}
<h3>Post a Blog to the Front Page</h3>
<form method="post" id="form2" action="" class="infotabs accfrm">
{{ blogform.as_p }}
<input type="submit" value="Submit" />
</form>
<br>
<br>
{% endifequal %}
<div class="pagination">
<span class="step-links">
{% if blog.has_previous %}
<a href="?page={{ blog.previous_page_number }}">previous</a>
{% endif %}
<span class="current">
Page {{ blog.number }} of {{ blog.paginator.num_pages }}.
</span>
{% if blog.has_next %}
<a href="?page={{ blog.next_page_number }}">next</a>
{% endif %}
</span>
<form method="post" action="" class="usertabs accfrm">
{{delblog.management_form}}
{% for b, form in blog.object_list %}
<div class="blog">
<h3>{{b.title}}</h3>
<p>{{b.content}}</p>
<p>posted by <strong>{{b.poster}}</strong> on {{b.date}}</p>
{% ifequal staff staffis %}<p>{{form.as_p}}<input type="submit" value="Delete" /></p>{% endifequal %}
</div>
{% endfor %}
</form>
{%endblock%}
Here is the Traceback
ValidationError at /home/
Request Method: POST
Request URL: http://localhost:8000/home/
Exception Type: ValidationError
Exception Value:
Exception Location: /usr/lib/python2.6/site-packages/django/forms/formsets.py in _management_form, line 54
Python Executable: /usr/bin/python
Python Version: 2.6.2
Python Path: ['/home/projects/acms', '/usr/lib/python2.6/site-packages/django_socialregistration-0.2-py2.6.egg', '/usr/lib/python26.zip', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/usr/lib/python2.6/lib-old', '/usr/lib/python2.6/lib-dynload', '/usr/lib/python2.6/site-packages', '/usr/lib/python2.6/site-packages/Numeric', '/usr/lib/python2.6/site-packages/PIL', '/usr/lib/python2.6/site-packages/gst-0.10', '/usr/lib/python2.6/site-packages/gtk-2.0', '/usr/lib/python2.6/site-packages/webkit-1.0']
Server time: Mon, 29 Mar 2010 12:02:43 +0300
Traceback
* /usr/lib/python2.6/site-packages/django/core/handlers/base.py in get_response
85. # Apply view middleware
86. for middleware_method in self._view_middleware:
87. response = middleware_method(request, callback, callback_args, callback_kwargs)
88. if response:
89. return response
90.
91. try:
92. response = callback(request, *callback_args, **callback_kwargs) ...
93. except Exception, e:
94. # If the view raised an exception, run it through exception
95. # middleware, and if the exception middleware returns a
96. # response, use that. Otherwise, reraise the exception.
97. for middleware_method in self._exception_middleware:
98. response = middleware_method(request, e)
▶ Local vars
Variable Value
callback
<django.contrib.auth.decorators._CheckLogin object at 0xb655ad2c>
callback_args
()
callback_kwargs
{}
e
ValidationError()
exc_info
(<class 'django.forms.util.ValidationError'>, ValidationError(), <traceback object at 0xb6630a2c>)
exceptions
<module 'django.core.exceptions' from '/usr/lib/python2.6/site-packages/django/core/exceptions.pyc'>
middleware_method
<bound method TransactionMiddleware.process_exception of <django.middleware.transaction.TransactionMiddleware object at 0xb676ff6c>>
receivers
[(<function _rollback_on_exception at 0x8c845dc>, None)]
request
<WSGIRequest GET:<QueryDict: {}>, POST:<QueryDict: {u'content': [u'test'], u'poster': [u'4'], u'title': [u'test']}>, COOKIES:{'sessionid': '8f4b4fa8411cc5baa05c2016a8ad00f4'}, META:{...}>
resolver
<RegexURLResolver acms.urls (None:None) ^/>
response
None
self
<django.core.handlers.wsgi.WSGIHandler object at 0xb6755acc>
settings
<django.conf.LazySettings object at 0xb740856c>
urlconf
'acms.urls'
urlresolvers
<module 'django.core.urlresolvers' from '/usr/lib/python2.6/site-packages/django/core/urlresolvers.pyc'>
* /usr/lib/python2.6/site-packages/django/contrib/auth/decorators.py in __call__
71.
72. def __get__(self, obj, cls=None):
73. view_func = self.view_func.__get__(obj, cls)
74. return _CheckLogin(view_func, self.test_func, self.login_url, self.redirect_field_name)
75.
76. def __call__(self, request, *args, **kwargs):
77. if self.test_func(request.user):
78. return self.view_func(request, *args, **kwargs) ...
79. path = urlquote(request.get_full_path())
80. tup = self.login_url, self.redirect_field_name, path
81. return HttpResponseRedirect('%s?%s=%s' % tup)
▶ Local vars
Variable Value
args
()
kwargs
{}
request
<WSGIRequest GET:<QueryDict: {}>, POST:<QueryDict: {u'content': [u'test'], u'poster': [u'4'], u'title': [u'test']}>, COOKIES:{'sessionid': '8f4b4fa8411cc5baa05c2016a8ad00f4'}, META:{...}>
self
<django.contrib.auth.decorators._CheckLogin object at 0xb655ad2c>
* /home/projects/acms/../acms/cms/views.py in home
45. if request.user.is_staff== True:
46. staff = 1
47. else:
48. staff = 0
49. staffis = 1
50.
51. if request.method == 'POST':
52. delblogformset = delblog(request.POST) ...
53. if delblogformset.is_valid():
54. delblogformset.save()
55. return HttpResponseRedirect('/home')
56.
57. else:
58. delblogformset = delblog(queryset=blog.objects.filter( deleted='0'))
▶ Local vars
Variable Value
blogz
Error in formatting: %d format: a number is required, not unicode
delblog
<class 'django.forms.formsets.blogFormFormSet'>
request
<WSGIRequest GET:<QueryDict: {}>, POST:<QueryDict: {u'content': [u'test'], u'poster': [u'4'], u'title': [u'test']}>, COOKIES:{'sessionid': '8f4b4fa8411cc5baa05c2016a8ad00f4'}, META:{...}>
staff
1
staffis
1
user
<UserProfile: Treasurer>
* /usr/lib/python2.6/site-packages/django/forms/models.py in __init__
452. model = None
453.
454. def __init__(self, data=None, files=None, auto_id='id_%s', prefix=None,
455. queryset=None, **kwargs):
456. self.queryset = queryset
457. defaults = {'data': data, 'files': files, 'auto_id': auto_id, 'prefix': prefix}
458. defaults.update(kwargs)
459. super(BaseModelFormSet, self).__init__(**defaults) ...
460.
461. def initial_form_count(self):
462. """Returns the number of forms that are required in this FormSet."""
463. if not (self.data or self.files):
464. return len(self.get_queryset())
465. return super(BaseModelFormSet, self).initial_form_count()
▶ Local vars
Variable Value
auto_id
'id_%s'
data
<QueryDict: {u'content': [u'test'], u'poster': [u'4'], u'title': [u'test']}>
defaults
{'auto_id': 'id_%s', 'data': <QueryDict: {u'content': [u'test'], u'poster': [u'4'], u'title': [u'test']}>, 'files': None, 'prefix': None}
files
None
kwargs
{}
prefix
None
queryset
None
self
<django.forms.formsets.blogFormFormSet object at 0xb659bdec>
* /usr/lib/python2.6/site-packages/django/forms/formsets.py in __init__
37. self.data = data
38. self.files = files
39. self.initial = initial
40. self.error_class = error_class
41. self._errors = None
42. self._non_form_errors = None
43. # construct the forms in the formset
44. self._construct_forms() ...
45.
46. def __unicode__(self):
47. return self.as_table()
48.
49. def _management_form(self):
50. """Returns the ManagementForm instance for this FormSet."""
▶ Local vars
Variable Value
auto_id
'id_%s'
data
<QueryDict: {u'content': [u'test'], u'poster': [u'4'], u'title': [u'test']}>
error_class
<class 'django.forms.util.ErrorList'>
files
None
initial
None
prefix
None
self
<django.forms.formsets.blogFormFormSet object at 0xb659bdec>
* /usr/lib/python2.6/site-packages/django/forms/formsets.py in _construct_forms
80. if initial_forms > self.max_num > 0:
81. initial_forms = self.max_num
82. return initial_forms
83.
84. def _construct_forms(self):
85. # instantiate all the forms and put them in self.forms
86. self.forms = []
87. for i in xrange(self.total_form_count()): ...
88. self.forms.append(self._construct_form(i))
89.
90. def _construct_form(self, i, **kwargs):
91. """
92. Instantiates and returns the i-th form instance in a formset.
93. """
▶ Local vars
Variable Value
self
<django.forms.formsets.blogFormFormSet object at 0xb659bdec>
* /usr/lib/python2.6/site-packages/django/forms/formsets.py in total_form_count
59. })
60. return form
61. management_form = property(_management_form)
62.
63. def total_form_count(self):
64. """Returns the total number of forms in this FormSet."""
65. if self.data or self.files:
66. return self.management_form.cleaned_data[TOTAL_FORM_COUNT] ...
67. else:
68. total_forms = self.initial_form_count() + self.extra
69. if total_forms > self.max_num > 0:
70. total_forms = self.max_num
71. return total_forms
72.
▶ Local vars
Variable Value
self
<django.forms.formsets.blogFormFormSet object at 0xb659bdec>
* /usr/lib/python2.6/site-packages/django/forms/formsets.py in _management_form
47. return self.as_table()
48.
49. def _management_form(self):
50. """Returns the ManagementForm instance for this FormSet."""
51. if self.data or self.files:
52. form = ManagementForm(self.data, auto_id=self.auto_id, prefix=self.prefix)
53. if not form.is_valid():
54. raise ValidationError('ManagementForm data is missing or has been tampered with') ...
55. else:
56. form = ManagementForm(auto_id=self.auto_id, prefix=self.prefix, initial={
57. TOTAL_FORM_COUNT: self.total_form_count(),
58. INITIAL_FORM_COUNT: self.initial_form_count()
59. })
60. return form
▶ Local vars
| [
"To avoid this error just wrap your formset POST bounding in a try/except block like so.\nfrom django.core.exceptions import ValidationError # add this to your imports\n\nif request.method == 'POST':\n try:\n delblogformset = delblog(request.POST)\n except ValidationError:\n delblogformset = None\n ... | [
7,
2
] | [] | [] | [
"django",
"django_forms",
"django_templates",
"python"
] | stackoverflow_0002536285_django_django_forms_django_templates_python.txt |
Q:
Set a variable equal to result if result exists
This seems very verbose, particularly with long function names. Is there a better way to do this in Python?
if someRandomFunction():
variable = someRandomFunction()
Edit: For more context variable is not already defined, and it will be a new node on a tree. I only want to create this node if someRandomFunction() returns a value. And someRandomFunction() is supposed to return the string representation of some node from a different type of tree.
A:
Could you:
variable = someRandomFunction() or variable
See Boolean Operations in the Python documentation for more information.
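One caveat with the `or` idiom (my note, not part of the answer above): it discards not just "no result" but every falsy result, so a legitimate return value of 0, '' or [] is silently lost. A minimal demonstration:

```python
# The `or` idiom treats any falsy return value as "no result".
def returns_zero():
    return 0  # a real result that happens to be falsy

variable = 'old'
variable = returns_zero() or variable
print(variable)  # still 'old': the real result 0 was dropped

# An explicit None test keeps falsy-but-real values:
result = returns_zero()
if result is not None:
    variable = result
print(variable)  # 0
```

So the one-liner is only safe when the function signals "no result" with None (or another falsy value) and never returns a falsy value you care about.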
A:
temp= someRandomFunction()
if temp:
variable = temp
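For readers on modern Python (3.8+, well after this thread), the temporary variable can live inside the condition via an assignment expression; some_random_function below is just a stand-in for the long-named function:

```python
# The "walrus" operator binds the result and tests it in one step.
def some_random_function():
    return "node-label"

if (result := some_random_function()):
    variable = result

print(variable)  # node-label
```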
A:
A bit unorthodox perhaps, but you could modify someRandomFunction() so that it saves its last result in a function attribute before returning it; you could then do this:
def someRandomFunction():
...
someRandomFunction.result = <...>
return someRandomFunction.result
if someRandomFunction():
variable = someRandomFunction.result
A:
I don't think your answer is too verbose. It says exactly what it does. However, since you've already said it's too verbose for your tastes I would opt for the
s = myFunc() or someVariable
approach
| Set a variable equal to result if result exists | This seems very verbose, particularly with long function names. Is there a better way to do this in Python?
if someRandomFunction():
variable = someRandomFunction()
Edit: For more context variable is not already defined, and it will be a new node on a tree. I only want to create this node if someRandomFunction() returns a value. And someRandomFunction() is supposed to return the string representation of some node from a different type of tree.
| [
"Could you:\nvariable = someRandomFunction() or variable\n\nSee Boolean Operations in the Python documentation for more information.\n",
"temp= someRandomFunction()\nif temp:\n variable = temp\n\n",
"A bit unorthodox perhaps, but you could modify someRandomFunction() so that it saves its last result in a fun... | [
15,
6,
0,
0
] | [
"(Apparently you can't delete your answers if you haven't registered.)\nThese are not the droids you're looking for... move along... \n"
] | [
-1
] | [
"python"
] | stackoverflow_0003982948_python.txt |
Q:
Python: Use Regular expression to remove something
I've got a string looks like this
ABC(a =2,b=3,c=5,d=5,e=Something)
I want the result to be like
ABC(a =2,b=3,c=5)
What's the best way to do this? I'd prefer to use a regular expression in Python.
Sorry, something changed, the raw string changed to
ABC(a =2,b=3,c=5,dddd=5,eeee=Something)
A:
longer = "ABC(a =2,b=3,c=5,d=5,e=Something)"
shorter = re.sub(r',\s*d=\d+,\s*e=[^)]+', '', longer)
# shorter: 'ABC(a =2,b=3,c=5)'
Once the OP knows how many elements there are in the list, he can also use:
shorter = re.sub(r',\s*d=[^)]+', '', longer)
it cuts , d= and everything after it, leaving the right parenthesis in place.
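Since the question's updated string uses the longer key names dddd= and eeee=, here is a variant (my sketch, not from the answer above) that drops the last two key=value pairs whatever their names are:

```python
import re

# Anchor on the closing parenthesis and strip the last two comma-separated
# key=value segments, independent of the key names.
s = "ABC(a =2,b=3,c=5,dddd=5,eeee=Something)"
shorter = re.sub(r',[^,]*,[^,]*\)$', ')', s)
print(shorter)  # ABC(a =2,b=3,c=5)
```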
A:
Non-regex:
>>> s="ABC(a =2,b=3,c=5,d=5,e=Something)"
>>> ','.join(s.split(",")[:-2])+")"
'ABC(a =2,b=3,c=5)'
If you want a regex that always gets rid of the last 2:
>>> s="ABC(a =2,b=3,c=5,d=5,e=6,f=7,g=Something)"
>>> re.sub("(.*)(,.[^,]*,.[^,]*)\Z","\\1)",s)
'ABC(a =2,b=3,c=5,d=5,e=6)'
>>> s="ABC(a =2,b=3,c=5,d=5,e=Something)"
>>> re.sub("(.*)(,.[^,]*,.[^,]*)\Z","\\1)",s)
'ABC(a =2,b=3,c=5)'
If it's always the first 3:
>>> s="ABC(a =2,b=3,c=5,d=5,e=Something)"
>>> re.sub("([^,]+,[^,]+,[^,]+)(,.*)","\\1)",s)
'ABC(a =2,b=3,c=5)'
>>> s="ABC(q =2,z=3,d=5,d=5,e=Something)"
>>> re.sub("([^,]+,[^,]+,[^,]+)(,.*)","\\1)",s)
'ABC(q =2,z=3,d=5)'
A:
import re
re.sub(r',d=\d*,e=[^\)]*','', your_string)
| Python: Use Regular expression to remove something | I've got a string looks like this
ABC(a =2,b=3,c=5,d=5,e=Something)
I want the result to be like
ABC(a =2,b=3,c=5)
What's the best way to do this? I'd prefer to use a regular expression in Python.
Sorry, something changed, the raw string changed to
ABC(a =2,b=3,c=5,dddd=5,eeee=Something)
| [
"longer = \"ABC(a =2,b=3,c=5,d=5,e=Something)\"\n\nshorter = re.sub(r',\\s*d=\\d+,\\s*e=[^)]+', '', longer)\n\n# shorter: 'ABC(a =2,b=3,c=5)'\n\nWhen the OP finally knows how many elements are there in the list, he can also use:\nshorter = re.sub(r',\\s*d=[^)]+', '', longer)\n\nit cuts the , d= and everything after... | [
3,
2,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0003988632_python_regex.txt |
Q:
How to use ConfigParser with virtualenv?
I wrote a tool that looks in several places for an INI config file: in /usr/share, /usr/local/share, ~/.local/share, and in the current directory.
c = ConfigParser.RawConfigParser()
filenames = ['/usr/share/myconfig.conf',
'/usr/local/share/myconfig.conf',
os.path.expanduser('~/.local/share/myconfig.conf'),
os.path.expanduser('./myconfig.conf')]
parsed_names = c.read(filenames)
for name in parsed_names:
print 'using configuration file: ' + name
I have started using virtualenv and now my setup.py script installs myconfig.conf into /path/to/virtual/env/share/. How can I add this path to the list of paths searched by ConfigParser when the path to the virtualenv will be different each time? Also, if I installed to a virtualenv, should I still search the system /usr/share and /usr/local/share directories?
A:
You should be able to get the venv share path with
os.path.join(sys.prefix, 'share', 'myconfig.conf')
Including /usr/share or /usr/local/share would depend on your application and if multiple installations by different users would be more likely to benefit or be harmed by global machine settings. Using the above code would include '/usr/share/myconfig.conf' when using the system python so it is probably safer to not include it explicitly.
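Putting the answer together with the question's code: `sys.prefix` points at the virtualenv root when one is active (and at the system prefix otherwise), so the venv path can simply be appended to the search list. A sketch under the question's assumptions (the config filename `myconfig.conf` comes from the question; whether to keep the system-wide paths is the judgment call discussed above):

```python
import os
import sys

CONF = 'myconfig.conf'

search_paths = [
    os.path.join('/usr/share', CONF),
    os.path.join('/usr/local/share', CONF),
    # Resolves inside the virtualenv when one is active; under the
    # system python this duplicates one of the paths above.
    os.path.join(sys.prefix, 'share', CONF),
    os.path.expanduser(os.path.join('~', '.local', 'share', CONF)),
    os.path.join(os.getcwd(), CONF),
]
```

These paths would then be passed to `ConfigParser.read()` exactly as in the question; `read()` silently skips files that don't exist and returns the names it actually parsed, so missing locations cost nothing.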
| How to use ConfigParser with virtualenv? | I wrote a tool that looks in several places for an INI config file: in /usr/share, /usr/local/share, ~/.local/share, and in the current directory.
c = ConfigParser.RawConfigParser()
filenames = ['/usr/share/myconfig.conf',
'/usr/local/share/myconfig.conf',
os.path.expanduser('~/.local/share/myconfig.conf'),
os.path.expanduser('./myconfig.conf')]
parsed_names = c.read(filenames)
for name in parsed_names:
print 'using configuration file: ' + name
I have started using virtualenv and now my setup.py script installs myconfig.conf into /path/to/virtual/env/share/. How can I add this path to the list of paths searched by ConfigParser when the path to the virtualenv will be different each time? Also, if I installed to a virtualenv, should I still search the system /usr/share and /usr/local/share directories?
| [
"You should be able to get the venv share path with\nos.path.join(sys.prefix, 'share', 'myconfig.conf')\n\nIncluding /usr/share or /usr/local/share would depend on your application and if multiple installations by different users would be more likely to benefit or be harmed by global machine settings. Using the ab... | [
1
] | [] | [] | [
"configparser",
"python",
"virtualenv"
] | stackoverflow_0003988460_configparser_python_virtualenv.txt |
Q:
How to customize pynotify?
How to set icon size in the notifications?
How to set how much time the notification have to be shown?
Where is a complete pynotify documentation?
Can the notification be clickable? (example: if I click on the notification, print "hello world" in the terminal).
A:
How to set how much time the
notification have to be shown?
Ubuntu uses Notify OSD, which does not allow you to control the timeout. The timeout duration depends on the length of the message.
Where is a complete pynotify
documentation?
On Ubuntu, pynotify is provided by the python-notify package.
According to /usr/share/doc/python-notify/copyright, the package contents come from the galago-project.
The latest tarball was on 2006-10-08, and contains no documentation.
The closest thing I could find to documentation was the "Desktop Notification Specification".
Can the notification be clickable?
(example: if I click on the
notification, print "hello world" in
the terminal).
Unfortunately, no. See "Avoiding actions".
| How to customize pynotify? |
How to set icon size in the notifications?
How to set how much time the notification have to be shown?
Where is a complete pynotify documentation?
Can the notification be clickable? (example: if I click on the notification, print "hello world" in the terminal).
| [
"\nHow to set how much time the\n notification have to be shown?\n\nUbuntu uses Notify OSD, which does not allow you to control the timeout. The timeout duration depends on the length of the message.\n\nWhere is a complete pynotify\n documentation?\n\nOn Ubuntu, pynotify is provided by the python-notify package.\... | [
1
] | [] | [] | [
"pynotify",
"python",
"ubuntu"
] | stackoverflow_0003988292_pynotify_python_ubuntu.txt |