| code (string) | signature (string) | docstring (string) | loss_without_docstring (float64) | loss_with_docstring (float64) | factor (float64) |
|---|---|---|---|---|---|
'''Compute the error (or erasures if you supply sigma=erasures locator polynomial) evaluator polynomial Omega from the syndrome and the error/erasures/errata locator Sigma. Omega is already computed at the same time as Sigma inside the Berlekamp-Massey implemented above, but in case you modify Sigma, you can recompute Omega afterwards using this method, or just ensure that Omega computed by BM is correct given Sigma (as long as syndrome and sigma are correct, omega will be correct).'''
n = self.n
if not k: k = self.k
# Omega(x) = [ (1 + Synd(x)) * Error_loc(x) ] mod x^(n-k+1)
# NOTE: I don't know why we do 1+Synd(x) here; from the docs it seems just Synd(x) is enough (and in practice, if you remove the "ONE +", it will still decode correctly), as advised by Blahut in Algebraic Codes for Data Transmission, but it seems it's an implementation detail here.
#ONE = Polynomial([GF2int(1)])
#return ((ONE + synd) * sigma) % Polynomial([GF2int(1)] + [GF2int(0)] * (n-k+1)) # NOT CORRECT: in practice it works flawlessly with this implementation (primitive polynomial = 3), but if you use another primitive like in reedsolo lib, it doesn't work! Thus, I guess that adding ONE is not correct for the general case.
return (synd * sigma) % Polynomial([GF2int(1)] + [GF2int(0)] * (n-k+1))
|
def _find_error_evaluator(self, synd, sigma, k=None)
|
Compute the error (or erasures if you supply sigma=erasures locator polynomial) evaluator polynomial Omega from the syndrome and the error/erasures/errata locator Sigma. Omega is already computed at the same time as Sigma inside the Berlekamp-Massey implemented above, but in case you modify Sigma, you can recompute Omega afterwards using this method, or just ensure that Omega computed by BM is correct given Sigma (as long as syndrome and sigma are correct, omega will be correct).
| 14.717172
| 5.945609
| 2.475301
|
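A hedged aside on the `mod x^(n-k+1)` step above: over any coefficient ring, reducing a polynomial modulo x^m simply discards every term of degree m or higher. A minimal sketch with plain integer coefficients (lowest degree first, and without the GF(2^8) arithmetic of the library's `GF2int`/`Polynomial` types):

```python
# Illustration only: integer coefficients, lowest degree first.
def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj  # a GF(2^8) multiply in the real code
    return out

def poly_mod_x_power(p, m):
    return p[:m]  # p mod x^m: keep only terms of degree < m

# (1 + x) * (1 + x + x^2) = 1 + 2x + 2x^2 + x^3; mod x^2 this leaves 1 + 2x
print(poly_mod_x_power(poly_mul([1, 1], [1, 1, 1]), 2))  # [1, 2]
```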
'''Compute the error (or erasures if you supply sigma=erasures locator polynomial) evaluator polynomial Omega from the syndrome and the error/erasures/errata locator Sigma. Omega is already computed at the same time as Sigma inside the Berlekamp-Massey implemented above, but in case you modify Sigma, you can recompute Omega afterwards using this method, or just ensure that Omega computed by BM is correct given Sigma (as long as syndrome and sigma are correct, omega will be correct).'''
n = self.n
if not k: k = self.k
# Omega(x) = [ Synd(x) * Error_loc(x) ] mod x^(n-k+1) -- From Blahut, Algebraic codes for data transmission, 2003
return (synd * sigma)._gffastmod(Polynomial([GF2int(1)] + [GF2int(0)] * (n-k+1)))
|
def _find_error_evaluator_fast(self, synd, sigma, k=None)
|
Compute the error (or erasures if you supply sigma=erasures locator polynomial) evaluator polynomial Omega from the syndrome and the error/erasures/errata locator Sigma. Omega is already computed at the same time as Sigma inside the Berlekamp-Massey implemented above, but in case you modify Sigma, you can recompute Omega afterwards using this method, or just ensure that Omega computed by BM is correct given Sigma (as long as syndrome and sigma are correct, omega will be correct).
| 18.386183
| 3.707962
| 4.958568
|
'''Recall the definition of sigma: it has s roots. To find them, this
function evaluates sigma at all 2^c_exp - 1 (i.e., 255 for GF(2^8)) non-zero points to find the roots.
The inverses of the roots are the X_i, the error locations.
Returns a list X of error locations, and a corresponding list j of
error positions (the discrete log of the corresponding X value). The
lists are up to s elements large.
This is essentially an inverse Fourier transform.
Important technical math note: This implementation is not actually
Chien's search. Chien's search is a way to evaluate the polynomial
such that each evaluation only takes constant time. This here simply
does 255 evaluations straight up, which is much less efficient.
Said differently, we simply do a brute-force search by trial substitution to find the zeros of this polynomial, which identifies the error locations.
'''
# TODO: find a more efficient algorithm, this is the slowest part of the whole decoding process (~2.5 ms, while any other part is only ~400microsec). Could try the Pruned FFT from "Simple Algorithms for BCH Decoding", by Jonathan Hong and Martin Vetterli, IEEE Transactions on Communications, Vol.43, No.8, August 1995
X = []
j = []
p = GF2int(self.generator)
# Try for each possible location
for l in _range(1, self.gf2_charac+1): # range 1:256 is important: with range 0:255, if the last byte of the ecc symbols is corrupted, it won't be correctable! You need range 1:256 to include this last byte.
#l = (i+self.fcr)
# These evaluations could be more efficient, but oh well
if sigma.evaluate( p**l ) == 0: # If it's 0, then bingo! It's an error location
# Compute the error location polynomial X (will be directly used to compute the errors magnitudes inside the Forney algorithm)
X.append( p**(-l) )
# Compute the coefficient position (not the error position, it's actually the reverse: we compute the degree of the term where the error is located. To get the error position, just compute n-1-j).
# This is different than the notes, I think the notes were in error
# Notes said j values were just l, when it's actually 255-l
j.append(self.gf2_charac - l)
# Sanity check: the number of errors/errata positions found should be exactly the same as the length of the errata locator polynomial
errs_nb = len(sigma) - 1 # compute the exact number of errors/errata that this error locator should find
if len(j) != errs_nb:
raise RSCodecError("Too many (or few) errors found by Chien Search for the errata locator polynomial!")
return X, j
|
def _chien_search(self, sigma)
|
Recall the definition of sigma: it has s roots. To find them, this
function evaluates sigma at all 2^c_exp - 1 (i.e., 255 for GF(2^8)) non-zero points to find the roots.
The inverses of the roots are the X_i, the error locations.
Returns a list X of error locations, and a corresponding list j of
error positions (the discrete log of the corresponding X value). The
lists are up to s elements large.
This is essentially an inverse Fourier transform.
Important technical math note: This implementation is not actually
Chien's search. Chien's search is a way to evaluate the polynomial
such that each evaluation only takes constant time. This here simply
does 255 evaluations straight up, which is much less efficient.
Said differently, we simply do a brute-force search by trial substitution to find the zeros of this polynomial, which identifies the error locations.
| 17.72864
| 6.666386
| 2.659408
|
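A hedged miniature of the trial-substitution search above, over the toy field GF(7) with generator 3 instead of GF(2^8) (so no lookup tables are needed): walk every non-zero element as a power of the generator and collect those where sigma evaluates to zero.

```python
p, g = 7, 3  # toy prime field GF(7), generator 3

def ev(poly, x):  # poly: coefficients, lowest degree first
    return sum(c * x ** i for i, c in enumerate(poly)) % p

sigma = [3, 1]  # sigma(x) = 3 + x, whose only root is x = 4
roots = [g ** l % p for l in range(1, p) if ev(sigma, g ** l % p) == 0]
print(roots)  # [4]
```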
'''Real Chien search: we reuse the previous polynomial evaluation and just multiply by a constant polynomial. This should be faster, but in practice it's about the same speed as the brute-force version. However, it should be easily parallelizable.'''
# TODO: doesn't work when fcr is different than 1 (X values are incorrectly "shifted"...)
# TODO: try to mix this approach with the optimized walk on only interesting values, implemented in _chien_search_faster()
X = []
j = []
p = GF2int(self.generator)
if not hasattr(self, 'const_poly'): self.const_poly = [GF2int(self.generator)**(i+self.fcr) for i in _range(self.gf2_charac, -1, -1)] # constant polynomial that will allow us to update the previous polynomial evaluation to get the next one
const_poly = self.const_poly # caching for more efficiency since it never changes
ev_poly, ev = sigma.evaluate_array( p**1 ) # compute the first polynomial evaluation
# Try for each possible location
for l in _range(1, self.gf2_charac+1): # range 1:256 is important: with range 0:255, if the last byte of the ecc symbols is corrupted, it won't be correctable! You need range 1:256 to include this last byte.
#l = (i+self.fcr)
# Check if it's a root for the polynomial
if ev == 0: # If it's 0, then bingo! It's an error location
# Compute the error location polynomial X (will be directly used to compute the errors magnitudes inside the Forney algorithm)
X.append( p**(-l) )
# Compute the coefficient position (not the error position, it's actually the reverse: we compute the degree of the term where the error is located. To get the error position, just compute n-1-j).
# This is different than the notes, I think the notes were in error
# Notes said j values were just l, when it's actually 255-l
j.append(self.gf2_charac - l)
# Update the polynomial evaluation for the next iteration
# we simply multiply each term[k] with alpha^k (where here alpha = p = GF2int(generator)).
# For more info, see the presentation by Andrew Brown, or this one: http://web.ntpu.edu.tw/~yshan/BCH_decoding.pdf
# TODO: parallelize this loop
for i in _range(1, len(ev_poly)+1): # TODO: maybe the fcr != 1 fix should be put here?
ev_poly[-i] *= const_poly[-i]
# Compute the new evaluation by just summing
ev = sum(ev_poly)
return X, j
|
def _chien_search_fast(self, sigma)
|
Real Chien search: we reuse the previous polynomial evaluation and just multiply by a constant polynomial. This should be faster, but in practice it's about the same speed as the brute-force version. However, it should be easily parallelizable.
| 13.452202
| 10.640382
| 1.264259
|
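The incremental update above (multiply term k by alpha^k between steps) can be checked in the same GF(7) toy setting; this is a sketch under that assumption, not the library's `evaluate_array` API:

```python
p, g = 7, 3
sigma = [3, 1]  # sigma(x) = 3 + x
# terms of the evaluation at x = g^1, kept separately so they can be updated
terms = [c * pow(g, i, p) % p for i, c in enumerate(sigma)]
for l in range(1, p):
    direct = (3 + pow(g, l, p)) % p  # sigma evaluated from scratch at g^l
    assert sum(terms) % p == direct
    # step from g^l to g^(l+1): term i picks up one more factor of g^i
    terms = [t * pow(g, i, p) % p for i, t in enumerate(terms)]
```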
'''Faster Chien search, processing only the useful coefficients (the ones in the message) instead of the whole 2^8 range.
Besides the speed boost, this also allows fixing a number of issues: correctly decoding when the last ecc byte is corrupted, and accepting messages of length n > 2^8.'''
n = self.n
X = []
j = []
p = GF2int(self.generator)
# Normally we should try all 2^8 possible values, but here we optimize to just check the interesting symbols
# This also allows to accept messages where n > 2^8.
for l in _range(n):
#l = (i+self.fcr)
# These evaluations could be more efficient, but oh well
if sigma.evaluate( p**(-l) ) == 0: # If it's 0, then bingo! It's an error location
# Compute the error location polynomial X (will be directly used to compute the errors magnitudes inside the Forney algorithm)
X.append( p**l )
# Compute the coefficient position (not the error position, it's actually the reverse: we compute the degree of the term where the error is located. To get the error position, just compute n-1-j).
# This is different than the notes, I think the notes were in error
# Notes said j values were just l, when it's actually 255-l
j.append(l)
# Sanity check: the number of errors/errata positions found should be exactly the same as the length of the errata locator polynomial
errs_nb = len(sigma) - 1 # compute the exact number of errors/errata that this error locator should find
if len(j) != errs_nb:
# Note: decoding messages+ecc with length n > self.gf2_charac does work partially, but it's wrong, because you will get duplicated values, and then Chien Search cannot discriminate which root is correct and which is not. The duplication of values is normally prevented by the prime polynomial reduction when generating the field (see init_lut() in ff.py), but if you overflow the field, you have no guarantee anymore. We may try to use a bruteforce approach: the correct positions ARE in the final array j, but the problem is because we are above the Galois Field's range, there is a wraparound because of overflow so that for example if j should be [0, 1, 2, 3], we will also get [255, 256, 257, 258] (because 258 % 255 == 3, same for the other values), so we can't discriminate. The issue with that bruteforce approach is that fixing any errs_nb errors among those will always give a correct output message (in the sense that the syndrome will be all 0), so we may not even be able to check if that's correct or not, so there's clearly no way to decode a message of greater length than the field.
raise RSCodecError("Too many (or few) errors found by Chien Search for the errata locator polynomial!")
return X, j
|
def _chien_search_faster(self, sigma)
|
Faster Chien search, processing only the useful coefficients (the ones in the message) instead of the whole 2^8 range.
Besides the speed boost, this also allows fixing a number of issues: correctly decoding when the last ecc byte is corrupted, and accepting messages of length n > 2^8.
| 16.894384
| 12.190993
| 1.385809
|
'''Computes the error magnitudes (only works with errors or erasures under t = floor((n-k)/2), not with erasures above (n-k)//2)'''
# XXX Is floor division okay here? Should this be ceiling?
if not k: k = self.k
t = (self.n - k) // 2
Y = []
for l, Xl in enumerate(X):
# Compute the sequence product and multiply its inverse in
prod = GF2int(1) # just to init the product (1 is the neutral term for multiplication)
Xl_inv = Xl.inverse()
for ji in _range(t): # do not change to _range(len(X)) as can be seen in some papers, it won't give the correct result! (sometimes yes, but not always)
if ji == l:
continue
if ji < len(X):
Xj = X[ji]
else: # if above the maximum degree of the polynomial, then all coefficients above are just 0 (that's logical...)
Xj = GF2int(0)
prod = prod * (Xl - Xj)
#if (ji != l):
# prod = prod * (GF2int(1) - X[ji]*(Xl.inverse()))
# Compute Yl
Yl = Xl**t * omega.evaluate(Xl_inv) * Xl_inv * prod.inverse()
Y.append(Yl)
return Y
|
def _old_forney(self, omega, X, k=None)
|
Computes the error magnitudes (only works with errors or erasures under t = floor((n-k)/2), not with erasures above (n-k)//2)
| 10.145516
| 7.146179
| 1.419712
|
'''Computes the error magnitudes. Works also with erasures and errors+erasures beyond the (n-k)//2 bound, here the bound is 2*e+v <= (n-k-1) with e the number of errors and v the number of erasures.'''
# XXX Is floor division okay here? Should this be ceiling?
Y = [] # the final result, the error/erasures polynomial (contain the values that we should minus on the received message to get the repaired message)
Xlength = len(X)
for l, Xl in enumerate(X):
Xl_inv = Xl.inverse()
# Compute the formal derivative of the error locator polynomial (see Blahut, Algebraic codes for data transmission, pp 196-197).
# the formal derivative of the errata locator is used as the denominator of the Forney Algorithm, which simply says that the ith error value is given by error_evaluator(gf_inverse(Xi)) / error_locator_derivative(gf_inverse(Xi)). See Blahut, Algebraic codes for data transmission, pp 196-197.
sigma_prime_tmp = [1 - Xl_inv * X[j] for j in _range(Xlength) if j != l] # TODO? maybe a faster way would be to precompute sigma_prime = sigma[len(sigma) & 1:len(sigma):2] and then just do sigma_prime.evaluate(X[j]) ? (like in reedsolo.py)
# compute the product
sigma_prime = 1
for coef in sigma_prime_tmp:
sigma_prime = sigma_prime * coef
# equivalent to: sigma_prime = functools.reduce(mul, sigma_prime, 1)
# Compute Yl
# This is a more faithful translation of the theoretical equation contrary to the old forney method. Here it is exactly copy/pasted from the included presentation decoding_rs.pdf: Yl = omega(Xl.inverse()) / prod(1 - Xj*Xl.inverse()) for j in len(X) (in the paper it's for j in s, but it's useless when len(X) < s because we compute neutral terms 1 for nothing, and wrong when correcting more than s erasures or erasures+errors since it prevents computing all required terms).
# Thus here this method works with erasures too because firstly we fixed the equation to be like the theoretical one (don't know why it was modified in _old_forney(), if it's an optimization, it doesn't enhance anything), and secondly because we removed the product bound on s, which prevented computing errors and erasures above the s=(n-k)//2 bound.
# The best resource I have found for the correct equation is https://en.wikipedia.org/wiki/Forney_algorithm -- note that in the article, fcr is defined as c.
Yl = - (Xl**(1-self.fcr) * omega.evaluate(Xl_inv) / sigma_prime) # sigma_prime is the denominator of the Forney algorithm
Y.append(Yl)
return Y
|
def _forney(self, omega, X)
|
Computes the error magnitudes. Works also with erasures and errors+erasures beyond the (n-k)//2 bound, here the bound is 2*e+v <= (n-k-1) with e the number of errors and v the number of erasures.
| 13.89298
| 10.866335
| 1.278534
|
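For reference, this is the equation `_forney` above implements, with c = fcr as in the Wikipedia article cited in its comments; the denominator is the product the code builds for the errata locator's formal derivative (Blahut, pp. 196-197):

```latex
Y_l = -\,\frac{X_l^{\,1-c}\;\Omega\!\left(X_l^{-1}\right)}
              {\prod_{j \neq l}\left(1 - X_j\,X_l^{-1}\right)}
```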
try:
p = Popen(['/bin/ps', '-p%s' % self.pid, '-o', 'rss,vsz'],
stdout=PIPE, stderr=PIPE)
except OSError: # pragma: no cover
pass
else:
s = p.communicate()[0].split()
if p.returncode == 0 and len(s) >= 2: # pragma: no branch
self.vsz = int(s[-1]) * 1024
self.rss = int(s[-2]) * 1024
return True
return False
|
def update(self)
|
Get virtual and resident size of current process via 'ps'.
This should work for MacOS X, Solaris, Linux. Returns true if it was
successful.
| 2.670131
| 2.235196
| 1.194585
|
try:
stat = open('/proc/self/stat')
status = open('/proc/self/status')
except IOError: # pragma: no cover
return False
else:
stats = stat.read().split()
self.vsz = int( stats[22] )
self.rss = int( stats[23] ) * self.pagesize
self.pagefaults = int( stats[11] )
for entry in status.readlines():
key, value = entry.split(':')
size_in_bytes = lambda x: int(x.split()[0]) * 1024
if key == 'VmData':
self.data_segment = size_in_bytes(value)
elif key == 'VmExe':
self.code_segment = size_in_bytes(value)
elif key == 'VmLib':
self.shared_segment = size_in_bytes(value)
elif key == 'VmStk':
self.stack_segment = size_in_bytes(value)
key = self.key_map.get(key)
if key:
self.os_specific.append((key, value.strip()))
stat.close()
status.close()
return True
|
def update(self)
|
Get virtual size of current process by reading the process' stat file.
This should work for Linux.
| 2.924306
| 2.75983
| 1.059596
|
wx.EVT_LIST_COL_CLICK(self, self.GetId(), self.OnReorder)
wx.EVT_LIST_ITEM_SELECTED(self, self.GetId(), self.OnNodeSelected)
wx.EVT_MOTION(self, self.OnMouseMove)
wx.EVT_LIST_ITEM_ACTIVATED(self, self.GetId(), self.OnNodeActivated)
self.CreateColumns()
|
def CreateControls(self)
|
Create our sub-controls
| 2.571488
| 2.483996
| 1.035222
|
self.SetItemCount(0)
# clear any current columns...
for i in range( self.GetColumnCount())[::-1]:
self.DeleteColumn( i )
# now create
for i, column in enumerate(self.columns):
column.index = i
self.InsertColumn(i, column.name)
if not windows or column.targetWidth is None:
self.SetColumnWidth(i, wx.LIST_AUTOSIZE)
else:
self.SetColumnWidth(i, column.targetWidth)
|
def CreateColumns( self )
|
Create/recreate our column definitions from current self.columns
| 4.097755
| 3.671631
| 1.116058
|
self.columns = columns
self.sortOrder = [(x.defaultOrder,x) for x in self.columns if x.sortDefault]
self.CreateColumns()
|
def SetColumns( self, columns, sortOrder=None )
|
Set columns to a set of values other than the originals and recreates column controls
| 7.661563
| 6.098989
| 1.256202
|
try:
node = self.sorted[event.GetIndex()]
except IndexError, err:
log.warn(_('Invalid index in node activated: %(index)s'),
index=event.GetIndex())
else:
wx.PostEvent(
self,
squaremap.SquareActivationEvent(node=node, point=None,
map=None)
)
|
def OnNodeActivated(self, event)
|
We have double-clicked or hit enter on a node; refocus the squaremap to this node
| 8.256927
| 6.904789
| 1.195826
|
try:
node = self.sorted[event.GetIndex()]
except IndexError, err:
log.warn(_('Invalid index in node selected: %(index)s'),
index=event.GetIndex())
else:
if node is not self.selected_node:
wx.PostEvent(
self,
squaremap.SquareSelectionEvent(node=node, point=None,
map=None)
)
|
def OnNodeSelected(self, event)
|
We have selected a node with the list control, tell the world
| 6.827814
| 6.545839
| 1.043077
|
self.indicated_node = node
self.indicated = self.NodeToIndex(node)
self.Refresh(False)
return self.indicated
|
def SetIndicated(self, node)
|
Set this node to indicated status
| 5.325164
| 5.743581
| 0.927151
|
self.selected_node = node
index = self.NodeToIndex(node)
if index != -1:
self.Focus(index)
self.Select(index, True)
return index
|
def SetSelected(self, node)
|
Set our selected node
| 4.170883
| 3.984954
| 1.046658
|
column = self.columns[event.GetColumn()]
return self.ReorderByColumn( column )
|
def OnReorder(self, event)
|
Given a request to reorder, tell us to reorder
| 12.785909
| 12.048409
| 1.061211
|
# TODO: store current selection and re-select after sorting...
single_column = self.SetNewOrder( column )
self.reorder( single_column = True )
self.Refresh()
|
def ReorderByColumn( self, column )
|
Reorder the set of records by column
| 17.244329
| 17.052773
| 1.011233
|
if column.sortOn:
# multiple sorts for the click...
columns = [self.columnByAttribute(attr) for attr in column.sortOn]
diff = [(a, b) for a, b in zip(self.sortOrder, columns)
if b is not a[1]]
if not diff:
self.sortOrder[0] = (not self.sortOrder[0][0], column)
else:
self.sortOrder = [
(c.defaultOrder, c) for c in columns
] + [(a, b) for (a, b) in self.sortOrder if b not in columns]
return False
else:
if column is self.sortOrder[0][1]:
# reverse current major order
self.sortOrder[0] = (not self.sortOrder[0][0], column)
else:
self.sortOrder = [(column.defaultOrder, column)] + [
(a, b)
for (a, b) in self.sortOrder if b is not column
]
return True
|
def SetNewOrder( self, column )
|
Set new sorting order based on column, return whether a simple single-column (True) or multiple (False)
| 3.267644
| 3.197762
| 1.021853
|
if single_column:
columns = self.sortOrder[:1]
else:
columns = self.sortOrder
for ascending,column in columns[::-1]:
# Python 2.2+ guarantees stable sort, so sort by each column in reverse
# order will order by the assigned columns
self.sorted.sort( key=column.get, reverse=(not ascending))
|
def reorder(self, single_column=False)
|
Force a reorder of the displayed items
| 10.695555
| 10.518239
| 1.016858
|
self.SetItemCount(len(functions))
self.sorted = functions[:]
self.reorder()
self.Refresh()
|
def integrateRecords(self, functions)
|
Integrate records from the loader
| 11.255372
| 10.675228
| 1.054345
|
if self.indicated > -1 and item == self.indicated:
return self.indicated_attribute
return None
|
def OnGetItemAttr(self, item)
|
Retrieve ListItemAttr for the given item (index)
| 8.953276
| 8.435946
| 1.061325
|
# TODO: need to format for rjust and the like...
try:
column = self.columns[col]
value = column.get(self.sorted[item])
except IndexError, err:
return None
else:
if value is None:
return u''
if column.percentPossible and self.percentageView and self.total:
value = value / float(self.total) * 100.00
if column.format:
try:
return column.format % (value,)
except Exception, err:
log.warn('Column %s could not format %r value: %r',
column.name, type(value), value
)
value = column.get(self.sorted[item] )
if isinstance(value,(unicode,str)):
return value
return unicode(value)
else:
if isinstance(value,(unicode,str)):
return value
return unicode(value)
|
def OnGetItemText(self, item, col)
|
Retrieve text for the item and column respectively
| 4.094922
| 4.193792
| 0.976425
|
coder = rs.RSCoder(255,223)
output = []
while True:
block = input.read(223)
if not block: break
code = coder.encode_fast(block)
output.append(code)
sys.stderr.write(".")
sys.stderr.write("\n")
out = Image.new("L", (rowstride,len(output)))
out.putdata("".join(output))
out.save(output_filename)
|
def encode(input, output_filename)
|
Encodes the input data with Reed-Solomon error correction in 223-byte
blocks, and outputs each block along with 32 parity bytes to a new file with
the given filename.
input is a file-like object
The output image will be in png format, and will be 255 by X pixels with
one color channel, where X is the number of 255-byte blocks produced from the
input. Each block of data will be one row; therefore, the data can be
recovered if no more than 16 pixels per row are altered.
| 4.798587
| 3.907846
| 1.227937
|
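A hedged usage sketch for the record above, assuming the same `rs.RSCoder` API it calls: with n = 255 and k = 223, each 223-byte input block becomes a 255-byte codeword carrying 32 parity bytes, so up to (255 - 223) / 2 = 16 symbol errors per row are correctable.

```python
# Sketch under the assumption that rs.RSCoder / encode_fast behave as used above.
coder = rs.RSCoder(255, 223)
codeword = coder.encode_fast('x' * 223)  # 223 data bytes -> 255-byte codeword
print(len(codeword))  # 255
```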
scriptname = request.environ.get('SCRIPT_NAME', '').rstrip('/') + '/'
location = urljoin(request.url, urljoin(scriptname, url))
raise HTTPResponse("", status=code, header=dict(Location=location))
|
def redirect(url, code=303)
|
Aborts execution and causes a 303 redirect
| 4.835042
| 5.088739
| 0.950145
|
try:
ts = email.utils.parsedate_tz(ims)
return time.mktime(ts[:8] + (0,)) - (ts[9] or 0) - time.timezone
except (TypeError, ValueError, IndexError):
return None
|
def parse_date(ims)
|
Parse rfc1123, rfc850 and asctime timestamps and return UTC epoch.
| 3.113176
| 2.433165
| 1.279476
|
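A hedged usage sketch, assuming the `parse_date` above is importable; the result is a UTC epoch, so on most platforms it should not depend on the local timezone:

```python
print(parse_date('Sun, 06 Nov 1994 08:49:37 GMT'))  # 784111777.0
print(parse_date('not a date'))                     # None
```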
app = app if app else default_app()
quiet = bool(kargs.get('quiet', False))
# Instantiate server, if it is a class instead of an instance
if isinstance(server, type):
server = server(host=host, port=port, **kargs)
if not isinstance(server, ServerAdapter):
raise RuntimeError("Server must be a subclass of WSGIAdapter")
if not quiet and isinstance(server, ServerAdapter): # pragma: no cover
if not reloader or os.environ.get('BOTTLE_CHILD') == 'true':
print("Bottle server starting up (using %s)..." % repr(server))
print("Listening on http://%s:%d/" % (server.host, server.port))
print("Use Ctrl-C to quit.")
print()
else:
print("Bottle auto reloader starting up...")
try:
if reloader and interval:
reloader_run(server, app, interval)
else:
server.run(app)
except KeyboardInterrupt:
if not quiet: # pragma: no cover
print("Shutting Down...")
|
def run(app=None, server=WSGIRefServer, host='127.0.0.1', port=8080,
interval=1, reloader=False, **kargs)
|
Runs bottle as a web server.
| 3.33427
| 3.145801
| 1.059911
|
'''
Get a rendered template as a string iterator.
You can use a name, a filename or a template string as first parameter.
'''
if tpl not in TEMPLATES or DEBUG:
settings = kwargs.get('template_settings',{})
lookup = kwargs.get('template_lookup', TEMPLATE_PATH)
if isinstance(tpl, template_adapter):
TEMPLATES[tpl] = tpl
if settings: TEMPLATES[tpl].prepare(settings)
elif "\n" in tpl or "{" in tpl or "%" in tpl or '$' in tpl:
TEMPLATES[tpl] = template_adapter(source=tpl, lookup=lookup, settings=settings)
else:
TEMPLATES[tpl] = template_adapter(name=tpl, lookup=lookup, settings=settings)
if not TEMPLATES[tpl]:
abort(500, 'Template (%s) not found' % tpl)
kwargs['abort'] = abort
kwargs['request'] = request
kwargs['response'] = response
return TEMPLATES[tpl].render(**kwargs)
|
def template(tpl, template_adapter=SimpleTemplate, **kwargs)
|
Get a rendered template as a string iterator.
You can use a name, a filename or a template string as first parameter.
| 3.528089
| 2.768573
| 1.274335
|
if not self._tokens:
self._tokens = list(self.tokenise(self.route))
return self._tokens
|
def tokens(self)
|
Return a list of (type, value) tokens.
| 5.814641
| 4.641203
| 1.252831
|
''' Split a string into an iterator of (type, value) tokens. '''
match = None
for match in cls.syntax.finditer(route):
pre, name, rex = match.groups()
if pre: yield ('TXT', pre.replace('\\:',':'))
if rex and name: yield ('VAR', (rex, name))
elif name: yield ('VAR', (cls.default, name))
elif rex: yield ('ANON', rex)
if not match:
yield ('TXT', route.replace('\\:',':'))
elif match.end() < len(route):
yield ('TXT', route[match.end():].replace('\\:',':'))
|
def tokenise(cls, route)
|
Split a string into an iterator of (type, value) tokens.
| 4.080812
| 3.622855
| 1.126408
|
''' Return a regexp pattern with named groups '''
out = ''
for token, data in self.tokens():
if token == 'TXT': out += re.escape(data)
elif token == 'VAR': out += '(?P<%s>%s)' % (data[1], data[0])
elif token == 'ANON': out += '(?:%s)' % data
return out
|
def group_re(self)
|
Return a regexp pattern with named groups
| 4.074936
| 3.48277
| 1.170027
|
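The mapping from token kinds to regex fragments in `group_re` above can be exercised standalone; a minimal sketch with a made-up token list (real tokens would come from `tokens()`/`tokenise()`):

```python
import re

# Hypothetical token stream for a route like '/hello/:name' plus an anonymous tail.
tokens = [('TXT', '/hello/'), ('VAR', ('[^/]+', 'name')), ('ANON', r'\d*')]

out = ''
for token, data in tokens:
    if token == 'TXT': out += re.escape(data)
    elif token == 'VAR': out += '(?P<%s>%s)' % (data[1], data[0])
    elif token == 'ANON': out += '(?:%s)' % data

print(out)  # roughly: /hello/(?P<name>[^/]+)(?:\d*)
print(re.match(out, '/hello/world').groupdict())  # {'name': 'world'}
```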
''' Return a format string with named fields. '''
if self.static:
return self.route.replace('%','%%')
out, i = '', 0
for token, value in self.tokens():
if token == 'TXT': out += value.replace('%','%%')
elif token == 'ANON': out += '%%(anon%d)s' % i; i+=1
elif token == 'VAR': out += '%%(%s)s' % value[1]
return out
|
def format_str(self)
|
Return a format string with named fields.
| 5.450281
| 4.763824
| 1.144098
|
''' Return true if the route contains dynamic parts '''
if not self._static:
for token, value in self.tokens():
if token != 'TXT':
return True
self._static = True
return False
|
def is_dynamic(self)
|
Return true if the route contains dynamic parts
| 10.154998
| 7.604009
| 1.33548
|
''' Matches an URL and returns a (handler, target) tuple '''
if uri in self.static:
return self.static[uri], {}
for combined, subroutes in self.dynamic:
match = combined.match(uri)
if not match: continue
target, groups = subroutes[match.lastindex - 1]
groups = groups.match(uri).groupdict() if groups else {}
return target, groups
return None, {}
|
def match(self, uri)
|
Matches an URL and returns a (handler, target) tuple
| 4.546925
| 3.922546
| 1.159177
|
''' Builds an URL out of a named route and some parameters.'''
try:
return self.named[route_name] % args
except KeyError:
raise RouteBuildError("No route found with name '%s'." % route_name)
|
def build(self, route_name, **args)
|
Builds an URL out of a named route and some parameters.
| 6.347925
| 4.618864
| 1.374348
|
''' Register a new output filter. Whenever bottle hits a handler output
matching `ftype`, `func` is applied to it. '''
if not isinstance(ftype, type):
raise TypeError("Expected type object, got %s" % type(ftype))
self.castfilter = [(t, f) for (t, f) in self.castfilter if t != ftype]
self.castfilter.append((ftype, func))
self.castfilter.sort()
|
def add_filter(self, ftype, func)
|
Register a new output filter. Whenever bottle hits a handler output
matching `ftype`, `func` is applied to it.
| 5.974344
| 2.832384
| 2.109299
|
path = path.strip().lstrip('/')
handler, param = self.routes.match(method + ';' + path)
if handler: return handler, param
if method == 'HEAD':
handler, param = self.routes.match('GET;' + path)
if handler: return handler, param
handler, param = self.routes.match('ANY;' + path)
if handler: return handler, param
return None, {}
|
def match_url(self, path, method='GET')
|
Find a callback bound to a path and a specific HTTP method.
Return (callback, param) tuple or (None, {}).
method: HEAD falls back to GET. HEAD and GET fall back to ANY.
| 2.890428
| 2.648315
| 1.091421
|
return '/' + self.routes.build(routename, **kargs).split(';', 1)[1]
|
def get_url(self, routename, **kargs)
|
Return a string that matches a named route
| 9.760915
| 8.919204
| 1.094371
|
if isinstance(method, str): #TODO: Test this
method = method.split(';')
def wrapper(callback):
paths = [] if path is None else [path.strip().lstrip('/')]
if not paths: # Lets generate the path automatically
paths = yieldroutes(callback)
for p in paths:
for m in method:
route = m.upper() + ';' + p
self.routes.add(route, callback, **kargs)
return callback
return wrapper
|
def route(self, path=None, method='GET', **kargs)
|
Decorator: Bind a function to a GET request path.
If the path parameter is None, the signature of the decorated
function is used to generate the path. See yieldroutes()
for details.
The method parameter (default: GET) specifies the HTTP request
method to listen to. You can specify a list of methods.
| 5.844456
| 5.034378
| 1.160909
|
if isinstance(environ, Request): # Recycle already parsed content
for key in self.__dict__: #TODO: Test this
setattr(self, key, getattr(environ, key))
self.app = app
return
self._GET = self._POST = self._GETPOST = self._COOKIES = None
self._body = self._header = None
self.environ = environ
self.app = app
# These attributes are used anyway, so it is ok to compute them here
self.path = '/' + environ.get('PATH_INFO', '/').lstrip('/')
self.method = environ.get('REQUEST_METHOD', 'GET').upper()
|
def bind(self, environ, app=None)
|
Bind a new WSGI environment and clear out all previously computed
attributes.
This is done automatically for the global `bottle.request`
instance on every request.
| 4.843466
| 4.648223
| 1.042004
|
''' Shift some levels of PATH_INFO into SCRIPT_NAME and return the
moved part. count defaults to 1'''
#/a/b/ /c/d --> 'a','b' 'c','d'
if count == 0: return ''
pathlist = self.path.strip('/').split('/')
scriptlist = self.environ.get('SCRIPT_NAME','/').strip('/').split('/')
if pathlist and pathlist[0] == '': pathlist = []
if scriptlist and scriptlist[0] == '': scriptlist = []
if count > 0 and count <= len(pathlist):
moved = pathlist[:count]
scriptlist = scriptlist + moved
pathlist = pathlist[count:]
elif count < 0 and count >= -len(scriptlist):
moved = scriptlist[count:]
pathlist = moved + pathlist
scriptlist = scriptlist[:count]
else:
empty = 'SCRIPT_NAME' if count < 0 else 'PATH_INFO'
raise AssertionError("Cannot shift. Nothing left from %s" % empty)
self['PATH_INFO'] = self.path = '/' + '/'.join(pathlist) \
+ ('/' if self.path.endswith('/') and pathlist else '')
self['SCRIPT_NAME'] = '/' + '/'.join(scriptlist)
return '/'.join(moved)
|
def path_shift(self, count=1)
|
Shift some levels of PATH_INFO into SCRIPT_NAME and return the
moved part. count defaults to 1
| 3.525951
| 2.873372
| 1.227112
|
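A stand-alone illustration of the shift `path_shift` above performs (the method mutates the WSGI environ; this sketch does the same list surgery on plain strings):

```python
script, path, count = '', '/a/b/c', 1
pathlist = path.strip('/').split('/')                 # ['a', 'b', 'c']
moved, pathlist = pathlist[:count], pathlist[count:]  # move one segment left
script = script.rstrip('/') + '/' + '/'.join(moved)
path = '/' + '/'.join(pathlist)
print(script, path)  # /a /b/c  (i.e. SCRIPT_NAME='/a', PATH_INFO='/b/c')
```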
scheme = self.environ.get('wsgi.url_scheme', 'http')
host = self.environ.get('HTTP_X_FORWARDED_HOST', self.environ.get('HTTP_HOST', None))
if not host:
host = self.environ.get('SERVER_NAME')
port = self.environ.get('SERVER_PORT', '80')
if scheme + port not in ('https443', 'http80'):
host += ':' + port
parts = (scheme, host, urlquote(self.fullpath), self.query_string, '')
return urlunsplit(parts)
|
def url(self)
|
Full URL as requested by the client (computed).
This value is constructed out of different environment variables
and includes scheme, host, port, scriptname, path and query string.
| 2.653369
| 2.688166
| 0.987056
|
if self._POST is None:
save_env = dict() # Build a save environment for cgi
for key in ('REQUEST_METHOD', 'CONTENT_TYPE', 'CONTENT_LENGTH'):
if key in self.environ:
save_env[key] = self.environ[key]
save_env['QUERY_STRING'] = '' # Without this, sys.argv is called!
if TextIOWrapper:
fb = TextIOWrapper(self.body, encoding='ISO-8859-1')
else:
fb = self.body
data = cgi.FieldStorage(fp=fb, environ=save_env)
self._POST = MultiDict()
for item in data.list:
self._POST[item.name] = item if item.filename else item.value
return self._POST
|
def POST(self)
|
The HTTP POST body parsed into a MultiDict.
This supports urlencoded and multipart POST requests. Multipart
is commonly used for file uploads and may result in some of the
values being cgi.FieldStorage objects instead of strings.
Multiple values per key are possible. See MultiDict for details.
| 3.502179
| 3.345771
| 1.046748
|
if self._GETPOST is None:
self._GETPOST = MultiDict(self.GET)
self._GETPOST.update(dict(self.POST))
return self._GETPOST
|
def params(self)
|
A combined MultiDict with POST and GET parameters.
| 4.767863
| 3.116785
| 1.529737
|
value = self.COOKIES.get(*args)
sec = self.app.config['securecookie.key']
dec = cookie_decode(value, sec)
return dec or value
|
def get_cookie(self, *args)
|
Return the (decoded) value of a cookie.
| 9.067172
| 7.632889
| 1.187908
|
''' Returns a copy of self '''
copy = Response(self.app)
copy.status = self.status
copy.headers = self.headers.copy()
copy.content_type = self.content_type
return copy
|
def copy(self)
|
Returns a copy of self
| 3.897236
| 4.213048
| 0.925039
|
''' Returns a WSGI-conformant list of header/value pairs. '''
for c in list(self.COOKIES.values()):
if c.OutputString() not in self.headers.getall('Set-Cookie'):
self.headers.append('Set-Cookie', c.OutputString())
return list(self.headers.iterallitems())
|
def wsgiheader(self)
|
Returns a WSGI-conformant list of header/value pairs.
| 6.860708
| 4.978298
| 1.378123
|
if not isinstance(value, str):
sec = self.app.config['securecookie.key']
value = cookie_encode(value, sec).decode('ascii') #2to3 hack
self.COOKIES[key] = value
for k, v in kargs.items():
self.COOKIES[key][k.replace('_', '-')] = v
|
def set_cookie(self, key, value, **kargs)
|
Add a new cookie with various options.
If the cookie value is not a string, a secure cookie is created.
Possible options are:
expires, path, comment, domain, max_age, secure, version, httponly
See http://de.wikipedia.org/wiki/HTTP-Cookie#Aufbau for details
| 5.297169
| 5.168912
| 1.024813
|
nodes = ast.parse(_openfile(file_name))
module_imports = get_nodes_by_instance_type(nodes, _ast.Import)
specific_imports = get_nodes_by_instance_type(nodes, _ast.ImportFrom)
assignment_objs = get_nodes_by_instance_type(nodes, _ast.Assign)
call_objects = get_nodes_by_instance_type(nodes, _ast.Call)
argparse_assignments = get_nodes_by_containing_attr(assignment_objs, 'ArgumentParser')
add_arg_assignments = get_nodes_by_containing_attr(call_objects, 'add_argument')
parse_args_assignment = get_nodes_by_containing_attr(call_objects, 'parse_args')
ast_argparse_source = chain(
module_imports,
specific_imports,
argparse_assignments,
add_arg_assignments
# parse_args_assignment
)
return ast_argparse_source
|
def parse_source_file(file_name)
|
Parses the AST of Python file for lines containing
references to the argparse module.
returns the collection of ast objects found.
Example client code:
1. parser = ArgumentParser(desc="My help Message")
2. parser.add_argument('filename', help="Name of the file to load")
3. parser.add_argument('-f', '--format', help="Format of output\nOptions: ['md', 'html']")
4. args = parser.parse_args()
Variables:
* nodes Primary syntax tree object
* argparse_assignments The assignment of the ArgumentParser (line 1 in example code)
* add_arg_assignments Calls to add_argument() (lines 2-3 in example code)
* parser_var_name The instance variable of the ArgumentParser (line 1 in example code)
* ast_source The curated collection of all parser related nodes in the client code
| 3.119188
| 2.666038
| 1.169971
|
if os.path.isfile(script_name):
return script_name
path = os.getenv('PATH', os.defpath).split(os.pathsep)
for folder in path:
if not folder:
continue
fn = os.path.join(folder, script_name)
if os.path.isfile(fn):
return fn
sys.stderr.write('Could not find script {0}\n'.format(script_name))
raise SystemExit(1)
|
def _find_script(script_name)
|
Find the script.
If the input is not a file, then $PATH will be searched.
| 2.044689
| 1.985183
| 1.029975
|
try:
from StringIO import StringIO
except ImportError: # Python 3.x
from io import StringIO
# Local imports to avoid hard dependency.
from distutils.version import LooseVersion
import IPython
ipython_version = LooseVersion(IPython.__version__)
if ipython_version < '0.11':
from IPython.genutils import page
from IPython.ipstruct import Struct
from IPython.ipapi import UsageError
else:
from IPython.core.page import page
from IPython.utils.ipstruct import Struct
from IPython.core.error import UsageError
# Escape quote markers.
opts_def = Struct(T=[''], f=[])
parameter_s = parameter_s.replace('"', r'\"').replace("'", r"\'")
opts, arg_str = self.parse_options(parameter_s, 'rf:T:c', list_all=True)
opts.merge(opts_def)
global_ns = self.shell.user_global_ns
local_ns = self.shell.user_ns
# Get the requested functions.
funcs = []
for name in opts.f:
try:
funcs.append(eval(name, global_ns, local_ns))
except Exception as e:
raise UsageError('Could not find function %r.\n%s: %s' % (name,
e.__class__.__name__, e))
include_children = 'c' in opts
profile = LineProfiler(include_children=include_children)
for func in funcs:
profile(func)
# Add the profiler to the builtins for @profile.
try:
import builtins
except ImportError: # Python 2.x
import __builtin__ as builtins
if 'profile' in builtins.__dict__:
had_profile = True
old_profile = builtins.__dict__['profile']
else:
had_profile = False
old_profile = None
builtins.__dict__['profile'] = profile
try:
try:
profile.runctx(arg_str, global_ns, local_ns)
message = ''
except SystemExit:
message = "*** SystemExit exception caught in code being profiled."
except KeyboardInterrupt:
message = ("*** KeyboardInterrupt exception caught in code being "
"profiled.")
finally:
if had_profile:
builtins.__dict__['profile'] = old_profile
# Trap text output.
stdout_trap = StringIO()
show_results(profile, stdout_trap)
output = stdout_trap.getvalue()
output = output.rstrip()
if ipython_version < '0.11':
page(output, screen_lines=self.shell.rc.screen_length)
else:
page(output)
print(message,)
text_file = opts.T[0]
if text_file:
with open(text_file, 'w') as pfile:
pfile.write(output)
print('\n*** Profile printout saved to text file %s. %s' % (text_file,
message))
return_value = None
if 'r' in opts:
return_value = profile
return return_value
|
def magic_mprun(self, parameter_s='')
|
Execute a statement under the line-by-line memory profiler from the
memory_profiler module.
Usage:
%mprun -f func1 -f func2 <statement>
The given statement (which doesn't require quote marks) is run via the
LineProfiler. Profiling is enabled for the functions specified by the -f
options. The statistics will be shown side-by-side with the code through
the pager once the statement has completed.
Options:
-f <function>: LineProfiler only profiles functions and methods it is told
to profile. This option tells the profiler about these functions. Multiple
-f options may be used. The argument may be any expression that gives
a Python function or method object. However, one must be careful to avoid
spaces that may confuse the option parser. Additionally, functions defined
in the interpreter at the In[] prompt or via %run currently cannot be
displayed. Write these functions out to a separate file and import them.
One or more -f options are required to get any useful results.
-T <filename>: dump the text-formatted statistics with the code
side-by-side out to a text file.
-r: return the LineProfiler object after it has completed profiling.
-c: If present, add the memory usage of any children process to the report.
| 3.03329
| 2.79023
| 1.087111
|
opts, stmt = self.parse_options(line, 'r:t:i:c', posix=False, strict=False)
repeat = int(getattr(opts, 'r', 1))
if repeat < 1:
repeat = 1
timeout = int(getattr(opts, 't', 0))
if timeout <= 0:
timeout = None
interval = float(getattr(opts, 'i', 0.1))
include_children = 'c' in opts
# I've noticed we get less noisier measurements if we run
# a garbage collection first
import gc
gc.collect()
mem_usage = 0
counter = 0
baseline = memory_usage()[0]
while counter < repeat:
counter += 1
tmp = memory_usage((_func_exec, (stmt, self.shell.user_ns)),
timeout=timeout, interval=interval, max_usage=True,
include_children=include_children)
mem_usage = max(mem_usage, tmp[0])
if mem_usage:
print('peak memory: %.02f MiB, increment: %.02f MiB' %
(mem_usage, mem_usage - baseline))
else:
print('ERROR: could not read memory usage, try with a lower interval '
'or more iterations')
|
def magic_memit(self, line='')
|
Measure memory usage of a Python statement
Usage, in line mode:
%memit [-r<R>t<T>i<I>] statement
Options:
-r<R>: repeat the loop iteration <R> times and take the best result.
Default: 1
-t<T>: timeout after <T> seconds. Default: None
-i<I>: Get time information at an interval of I times per second.
Defaults to 0.1 so that there are ten measurements per second.
-c: If present, add the memory usage of any children process to the report.
Examples
--------
::
In [1]: import numpy as np
In [2]: %memit np.zeros(1e7)
maximum of 1: 76.402344 MiB per loop
In [3]: %memit np.ones(1e6)
maximum of 1: 7.820312 MiB per loop
In [4]: %memit -r 10 np.empty(1e8)
maximum of 10: 0.101562 MiB per loop
| 4.690208
| 4.33295
| 1.082452
|
def wrapper(*args, **kwargs):
prof = LineProfiler()
val = prof(func)(*args, **kwargs)
show_results(prof, stream=stream)
return val
return wrapper
|
def profile(func, stream=None)
|
Decorator that will run the function and print a line-by-line profile
| 3.191382
| 3.283434
| 0.971965
|
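A hedged usage sketch for the decorator above, assuming it is imported from memory_profiler together with the LineProfiler/show_results it relies on:

```python
@profile
def allocate():
    a = [0] * (10 ** 6)  # the report should show the memory jump on this line
    del a
    return 42

allocate()  # runs the function, then prints a line-by-line memory report
```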
# Make a fake function
func = lambda x: x
func.__module__ = ""
func.__name__ = name
self.add_function(func)
timestamps = []
self.functions[func].append(timestamps)
# A new object is required each time, since there can be several
# nested context managers.
return _TimeStamperCM(timestamps)
|
def timestamp(self, name="<block>")
|
Returns a context manager for timestamping a block of code.
| 9.765952
| 9.100657
| 1.073104
|
def f(*args, **kwds):
# Start time
timestamps = [_get_memory(os.getpid(), timestamps=True)]
self.functions[func].append(timestamps)
try:
result = func(*args, **kwds)
finally:
# end time
timestamps.append(_get_memory(os.getpid(), timestamps=True))
return result
return f
|
def wrap_function(self, func)
|
Wrap a function to timestamp it.
| 4.210369
| 3.909978
| 1.076827
|
try:
# func.func_code was renamed to __code__ in Python 3
code = func.__code__
except AttributeError:
warnings.warn("Could not extract a code object for the object %r"
% func)
else:
self.add_code(code)
|
def add_function(self, func)
|
Record line profiling information for the given Python function.
| 5.862978
| 5.870824
| 0.998663
|
# TODO: can this be removed ?
import __main__
main_dict = __main__.__dict__
return self.runctx(cmd, main_dict, main_dict)
|
def run(self, cmd)
|
Profile a single executable statement in the main namespace.
| 6.819892
| 5.162902
| 1.320942
|
if (event in ('call', 'line', 'return')
and frame.f_code in self.code_map):
if event != 'call':
# "call" event just saves the lineno but not the memory
mem = _get_memory(-1, include_children=self.include_children)
# if there is already a measurement for that line get the max
old_mem = self.code_map[frame.f_code].get(self.prevline, 0)
self.code_map[frame.f_code][self.prevline] = max(mem, old_mem)
self.prevline = frame.f_lineno
if self._original_trace_function is not None:
(self._original_trace_function)(frame, event, arg)
return self.trace_memory_usage
|
def trace_memory_usage(self, frame, event, arg)
|
Callback for sys.settrace
| 3.703824
| 3.682633
| 1.005754
|
if key not in self.roots:
function = getattr( self, 'load_%s'%(key,) )()
self.roots[key] = function
return self.roots[key]
|
def get_root( self, key )
|
Retrieve a given declared root by root-type-key
| 4.676908
| 4.326897
| 1.080892
|
if key not in self.roots:
self.get_root( key )
if key == 'location':
return self.location_rows
else:
return self.rows
|
def get_rows( self, key )
|
Get the set of rows for the type-key
| 5.815804
| 5.278369
| 1.101818
|
rows = self.rows
for func, raw in stats.iteritems():
try:
rows[func] = row = PStatRow( func,raw )
except ValueError, err:
log.info( 'Null row: %s', func )
for row in rows.itervalues():
row.weave( rows )
return self.find_root( rows )
|
def load( self, stats )
|
Build a squaremap-compatible model from a pstats class
| 7.979507
| 7.246982
| 1.10108
|
maxes = sorted( rows.values(), key = lambda x: x.cumulative )
if not maxes:
raise RuntimeError( )
root = maxes[-1]
roots = [root]
for key,value in rows.items():
if not value.parents:
log.debug( 'Found node root: %s', value )
if value not in roots:
roots.append( value )
if len(roots) > 1:
root = PStatGroup(
directory='*',
filename='*',
name=_("<profiling run>"),
children= roots,
)
root.finalize()
self.rows[ root.key ] = root
self.roots['functions'] = root
return root
|
def find_root( self, rows )
|
Attempt to find/create a reasonable root node from list/set of rows
rows -- key: PStatRow mapping
TODO: still need more robustness here, particularly in the case of
threaded programs. Should be tracing back each row to root, breaking
cycles by sorting on cumulative time, and then collecting the traced
roots (or, if they are all on the same root, use that).
| 7.341507
| 6.313819
| 1.162768
|
directories = {}
files = {}
root = PStatLocation( '/', 'PYTHONPATH' )
self.location_rows = self.rows.copy()
for child in self.rows.values():
current = directories.get( child.directory )
directory, filename = child.directory, child.filename
if current is None:
if directory == '':
current = root
else:
current = PStatLocation( directory, '' )
self.location_rows[ current.key ] = current
directories[ directory ] = current
if filename == '~':
filename = '<built-in>'
file_current = files.get( (directory,filename) )
if file_current is None:
file_current = PStatLocation( directory, filename )
self.location_rows[ file_current.key ] = file_current
files[ (directory,filename) ] = file_current
current.children.append( file_current )
file_current.children.append( child )
# now link the directories...
for key,value in directories.items():
if value is root:
continue
found = False
while key:
new_key,rest = os.path.split( key )
if new_key == key:
break
key = new_key
parent = directories.get( key )
if parent:
if value is not parent:
parent.children.append( value )
found = True
break
if not found:
root.children.append( value )
# lastly, finalize all of the directory records...
root.finalize()
return root
|
def _load_location( self )
|
Build a squaremap-compatible model for location-based hierarchy
| 3.150551
| 3.161282
| 0.996606
|
if already_done is None:
already_done = {}
if already_done.has_key( self ):
return True
already_done[self] = True
self.filter_children()
children = self.children
for child in children:
if hasattr( child, 'finalize' ):
child.finalize( already_done)
child.parents.append( self )
self.calculate_totals( self.children, self.local_children )
|
def finalize( self, already_done=None )
|
Finalize our values (recursively) taken from our children
| 3.722179
| 3.438693
| 1.08244
|
for field,local_field in (('recursive','calls'),('cumulative','local')):
values = []
for child in children:
if isinstance( child, PStatGroup ) or not self.LOCAL_ONLY:
values.append( getattr( child, field, 0 ) )
elif isinstance( child, PStatRow ) and self.LOCAL_ONLY:
values.append( getattr( child, local_field, 0 ) )
value = sum( values )
setattr( self, field, value )
if self.recursive:
self.cumulativePer = self.cumulative/float(self.recursive)
else:
self.recursive = 0
if local_children:
for field in ('local','calls'):
value = sum([ getattr( child, field, 0 ) for child in children] )
setattr( self, field, value )
if self.calls:
self.localPer = self.local / self.calls
else:
self.local = 0
self.calls = 0
self.localPer = 0
|
def calculate_totals( self, children, local_children=None )
|
Calculate our cumulative totals from children and/or local children
| 3.465912
| 3.3664
| 1.02956
|
real_children = []
for child in self.children:
if child.name == '<module>':
self.local_children.append( child )
else:
real_children.append( child )
self.children = real_children
|
def filter_children( self )
|
Filter our children into regular and local children sets
| 3.455932
| 2.547878
| 1.356396
|
if w >= h:
new_w = int(w*fraction)
if new_w:
return (x,y,new_w,h),(x+new_w,y,w-new_w,h)
else:
return None,None
else:
new_h = int(h*fraction)
if new_h:
return (x,y,w,new_h),(x,y+new_h,w,h-new_h)
else:
return None,None
|
def split_box( fraction, x,y, w,h )
|
Return set of two boxes where first is the fraction given
| 1.674268
| 1.674961
| 0.999586
|
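A worked example for `split_box` above, assuming it is in scope: a 100x50 box split at fraction 0.25 is cut along its wider axis, so the first box takes 25 of the 100 width units.

```python
print(split_box(0.25, 0, 0, 100, 50))
# ((0, 0, 25, 50), (25, 0, 75, 50))
```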
head_sum,tail_sum = 0,0
divider = 0
for node in nodes[::-1]:
if head_sum < total/headdivisor:
head_sum += node[0]
divider -= 1
else:
break
return (head_sum,nodes[divider:]),(total-head_sum,nodes[:divider])
|
def split_by_value( total, nodes, headdivisor=2.0 )
|
Produce (sum,head),(sum,tail) for nodes to attempt binary partition
| 3.352624
| 3.021542
| 1.109574
|
''' Find the target node in the hot_map. '''
for index, (rect, node, children) in enumerate(hot_map):
if node == targetNode:
return parentNode, hot_map, index
result = class_.findNode(children, targetNode, node)
if result:
return result
return None
|
def findNode(class_, hot_map, targetNode, parentNode=None)
|
Find the target node in the hot_map.
| 3.99205
| 4.096636
| 0.97447
|
''' Return the first child of the node indicated by index. '''
children = hot_map[index][2]
if children:
return children[0][1]
else:
return hot_map[index][1]
|
def firstChild(hot_map, index)
|
Return the first child of the node indicated by index.
| 3.760855
| 3.431812
| 1.09588
|
''' Return the next sibling of the node indicated by index. '''
nextChildIndex = min(index + 1, len(hotmap) - 1)
return hotmap[nextChildIndex][1]
|
def nextChild(hotmap, index)
|
Return the next sibling of the node indicated by index.
| 4.440754
| 3.900794
| 1.138423
|
''' Return the very last node (recursively) in the hot map. '''
children = hot_map[-1][2]
if children:
return class_.lastNode(children)
else:
return hot_map[-1][1]
|
def lastNode(class_, hot_map)
|
Return the very last node (recursively) in the hot map.
| 5.370416
| 3.860987
| 1.390944
|
node = HotMapNavigator.findNodeAtPosition(self.hot_map, event.GetPosition())
self.SetHighlight( node, event.GetPosition() )
|
def OnMouse( self, event )
|
Handle mouse-move event by highlighting a given element
| 15.472817
| 12.799093
| 1.208899
|
node = HotMapNavigator.findNodeAtPosition(self.hot_map, event.GetPosition())
self.SetSelected( node, event.GetPosition() )
|
def OnClickRelease( self, event )
|
Release over a given square in the map
| 15.831494
| 11.541183
| 1.371739
|
node = HotMapNavigator.findNodeAtPosition(self.hot_map, event.GetPosition())
if node:
wx.PostEvent( self, SquareActivationEvent( node=node, point=event.GetPosition(), map=self ) )
|
def OnDoubleClick(self, event)
|
Double click on a given square in the map
| 10.918435
| 8.669694
| 1.25938
|
if node == self.selectedNode:
return
self.selectedNode = node
self.UpdateDrawing()
if node:
wx.PostEvent( self, SquareSelectionEvent( node=node, point=point, map=self ) )
|
def SetSelected( self, node, point=None, propagate=True )
|
Set the given node selected in the square-map
| 5.502754
| 4.15998
| 1.322784
|
if node == self.highlightedNode:
return
self.highlightedNode = node
# TODO: restrict refresh to the squares for previous node and new node...
self.UpdateDrawing()
if node and propagate:
wx.PostEvent( self, SquareHighlightEvent( node=node, point=point, map=self ) )
|
def SetHighlight( self, node, point=None, propagate=True )
|
Set the currently-highlighted node
| 9.359261
| 8.460971
| 1.106169
|
self.model = model
if adapter is not None:
self.adapter = adapter
self.UpdateDrawing()
|
def SetModel( self, model, adapter=None )
|
Set our model object (root of the tree)
| 4.74937
| 3.879287
| 1.224289
|
''' Draw the tree map on the device context. '''
self.hot_map = []
dc.BeginDrawing()
brush = wx.Brush( self.BackgroundColour )
dc.SetBackground( brush )
dc.Clear()
if self.model:
self.max_depth_seen = 0
font = self.FontForLabels(dc)
dc.SetFont(font)
self._em_size_ = dc.GetFullTextExtent( 'm', font )[0]
w, h = dc.GetSize()
self.DrawBox( dc, self.model, 0,0,w,h, hot_map = self.hot_map )
dc.EndDrawing()
|
def Draw(self, dc)
|
Draw the tree map on the device context.
| 5.437985
| 4.859533
| 1.119034
|
''' Return the default GUI font, scaled for printing if necessary. '''
font = wx.SystemSettings_GetFont(wx.SYS_DEFAULT_GUI_FONT)
scale = dc.GetPPI()[0] / wx.ScreenDC().GetPPI()[0]
font.SetPointSize(scale*font.GetPointSize())
return font
|
def FontForLabels(self, dc)
|
Return the default GUI font, scaled for printing if necessary.
| 3.79028
| 2.579908
| 1.469153
|
if node == self.selectedNode:
color = wx.SystemSettings_GetColour(wx.SYS_COLOUR_HIGHLIGHT)
elif node == self.highlightedNode:
color = wx.Colour( red=0, green=255, blue=0 )
else:
color = self.adapter.background_color(node, depth)
if not color:
red = (depth * 10)%255
green = 255-((depth * 5)%255)
blue = (depth * 25)%255
color = wx.Colour( red, green, blue )
return wx.Brush( color )
|
def BrushForNode( self, node, depth=0 )
|
Create brush to use to display the given node
| 2.833997
| 2.793285
| 1.014575
|
if node == self.selectedNode:
return self.SELECTED_PEN
return self.DEFAULT_PEN
|
def PenForNode( self, node, depth=0 )
|
Determine the pen to use to display the given node
| 6.536179
| 5.172905
| 1.263541
|
if node == self.selectedNode:
fg_color = wx.SystemSettings_GetColour(wx.SYS_COLOUR_HIGHLIGHTTEXT)
else:
fg_color = self.adapter.foreground_color(node, depth)
if not fg_color:
fg_color = wx.SystemSettings_GetColour(wx.SYS_COLOUR_WINDOWTEXT)
return fg_color
|
def TextForegroundForNode(self, node, depth=0)
|
Determine the text foreground color to use to display the label of
the given node
| 2.276656
| 2.351302
| 0.968253
|
log.debug( 'Draw: %s to (%s,%s,%s,%s) depth %s',
node, x,y,w,h, depth,
)
if self.max_depth and depth > self.max_depth:
return
self.max_depth_seen = max( (self.max_depth_seen,depth))
dc.SetBrush( self.BrushForNode( node, depth ) )
dc.SetPen( self.PenForNode( node, depth ) )
# drawing offset by margin within the square...
dx,dy,dw,dh = x+self.margin,y+self.margin,w-(self.margin*2),h-(self.margin*2)
if sys.platform == 'darwin':
# Macs don't like drawing small rounded rects...
if w < self.padding*2 or h < self.padding*2:
dc.DrawRectangle( dx,dy,dw,dh )
else:
dc.DrawRoundedRectangle( dx,dy,dw,dh, self.padding )
else:
dc.DrawRoundedRectangle( dx,dy,dw,dh, self.padding*3 )
# self.DrawIconAndLabel(dc, node, x, y, w, h, depth)
children_hot_map = []
hot_map.append( (wx.Rect( int(x),int(y),int(w),int(h)), node, children_hot_map ) )
x += self.padding
y += self.padding
w -= self.padding*2
h -= self.padding*2
empty = self.adapter.empty( node )
icon_drawn = False
if self.max_depth and depth == self.max_depth:
self.DrawIconAndLabel(dc, node, x, y, w, h, depth)
icon_drawn = True
elif empty:
    # empty is the fraction of the node's space which is unused...
log.debug( ' empty space fraction: %s', empty )
new_h = h * (1.0-empty)
self.DrawIconAndLabel(dc, node, x, y, w, h-new_h, depth)
icon_drawn = True
y += (h-new_h)
h = new_h
if w > self.padding*2 and h > self.padding*2:
children = self.adapter.children( node )
if children:
log.debug( ' children: %s', children )
self.LayoutChildren( dc, children, node, x,y,w,h, children_hot_map, depth+1 )
else:
log.debug( ' no children' )
if not icon_drawn:
self.DrawIconAndLabel(dc, node, x, y, w, h, depth)
else:
log.debug( ' not enough space: children skipped' )
|
def DrawBox( self, dc, node, x,y,w,h, hot_map, depth=0 )
|
Draw a model-node's box and all children nodes
| 2.717042
| 2.696659
| 1.007559
|
''' Draw the icon, if any, and the label, if any, of the node. '''
if w-2 < self._em_size_//2 or h-2 < self._em_size_//2:
return
dc.SetClippingRegion(x+1, y+1, w-2, h-2) # Don't draw outside the box
try:
icon = self.adapter.icon(node, node==self.selectedNode)
if icon and h >= icon.GetHeight() and w >= icon.GetWidth():
iconWidth = icon.GetWidth() + 2
dc.DrawIcon(icon, x+2, y+2)
else:
iconWidth = 0
if self.labels and h >= dc.GetTextExtent('ABC')[1]:
dc.SetTextForeground(self.TextForegroundForNode(node, depth))
dc.DrawText(self.adapter.label(node), x + iconWidth + 2, y+2)
finally:
dc.DestroyClippingRegion()
|
def DrawIconAndLabel(self, dc, node, x, y, w, h, depth)
|
Draw the icon, if any, and the label, if any, of the node.
| 3.553747
| 3.371441
| 1.054074
|
if node_sum is None:
nodes = [ (self.adapter.value(node,parent),node) for node in children ]
nodes.sort(key=operator.itemgetter(0))
total = self.adapter.children_sum( children,parent )
else:
nodes = children
total = node_sum
if total:
if self.square_style and len(nodes) > 5:
        # new handling to make parents with large numbers of children a little
        # less "sliced" looking (i.e. more square)
(head_sum,head),(tail_sum,tail) = split_by_value( total, nodes )
if head and tail:
# split into two sub-boxes and render each...
head_coord,tail_coord = split_box( head_sum/float(total), x,y,w,h )
if head_coord:
self.LayoutChildren(
dc, head, parent, head_coord[0],head_coord[1],head_coord[2],head_coord[3],
hot_map, depth,
node_sum = head_sum,
)
if tail_coord and coord_bigger_than_padding( tail_coord, self.padding+self.margin ):
self.LayoutChildren(
dc, tail, parent, tail_coord[0],tail_coord[1],tail_coord[2],tail_coord[3],
hot_map, depth,
node_sum = tail_sum,
)
return
    (firstSize, firstNode) = nodes[-1]
    fraction = firstSize/float(total)
    head_coord, tail_coord = split_box( fraction, x, y, w, h )
if head_coord:
self.DrawBox(
dc, firstNode, head_coord[0],head_coord[1],head_coord[2],head_coord[3],
hot_map, depth+1
)
else:
return # no other node will show up as non-0 either
if len(nodes) > 1 and tail_coord and coord_bigger_than_padding( tail_coord, self.padding+self.margin ):
self.LayoutChildren(
dc, nodes[:-1], parent,
tail_coord[0],tail_coord[1],tail_coord[2],tail_coord[3],
hot_map, depth,
node_sum = total - firstSize,
)
|
def LayoutChildren( self, dc, children, parent, x,y,w,h, hot_map, depth=0, node_sum=None )
|
Layout the set of children in the given rectangle
node_sum -- if provided, we are a recursive call that already has sizes and sorting,
so skip those operations
| 3.399405
| 3.373193
| 1.007771
|
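LayoutChildren leans on a module-level split_box helper that is not shown in this row. A plausible sketch under the usual treemap assumption (cut across the longer axis; the head box receives `fraction` of the area; returning None for a degenerate head is assumed to match the `if head_coord:` checks above):

def split_box(fraction, x, y, w, h):
    # Return (head_coord, tail_coord); head is None when the requested
    # slice degenerates to zero size.
    if fraction <= 0:
        return None, (x, y, w, h)
    if w >= h:
        head_w = w * fraction
        return (x, y, head_w, h), (x + head_w, y, w - head_w, h)
    head_h = h * fraction
    return (x, y, w, head_h), (x, y + head_h, w, h - head_h)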
return sum( [self.value(value,node) for value in self.children(node)] )
|
def overall( self, node )
|
Calculate overall size of the node including children and empty space
| 7.988488
| 6.607059
| 1.209084
|
return sum( [self.value(value,node) for value in children] )
|
def children_sum( self, children,node )
|
Calculate children's total sum
| 6.947877
| 5.553824
| 1.251008
|
overall = self.overall( node )
if overall:
return (overall - self.children_sum( self.children(node), node))/float(overall)
return 0
|
def empty( self, node )
|
Calculate empty space as a fraction of total space
| 9.479733
| 7.014464
| 1.351455
|
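A worked example of the empty-space computation. Note that with the overall shown two rows above (which just re-sums the children) the fraction is always zero, so a real adapter overrides overall to report the node's own total; both SizeAdapter and the DefaultAdapter base name below are hypothetical:

class SizeAdapter(DefaultAdapter):
    def overall(self, node):
        return node.size   # e.g. 100, while the children sum to 80

# empty(node) == (100 - 80)/float(100) == 0.2, so DrawBox reserves the
# top 20% of the node's box for its icon and label.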
'''Readable size format, courtesy of Sridhar Ratnakumar'''
for unit in ['','K','M','G','T','P','E','Z']:
if abs(num) < 1000.0:
return "%3.1f%s%s" % (num, unit, suffix)
num /= 1000.0
return "%.1f%s%s" % (num, 'Y', suffix)
|
def format_sizeof(num, suffix='bytes')
|
Readable size format, courtesy of Sridhar Ratnakumar
| 2.411115
| 1.586616
| 1.519658
|
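The scaling is decimal (steps of 1000, not 1024). A few illustrative calls:

format_sizeof(512)           # '512.0bytes'
format_sizeof(2048)          # '2.0Kbytes'
format_sizeof(3.5e9, 'B')    # '3.5GB'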
if n < 1:
n = 1
self.n += n
delta_it = self.n - self.last_print_n
if delta_it >= self.miniters:
# We check the counter first, to reduce the overhead of time.time()
cur_t = time.time()
if cur_t - self.last_print_t >= self.mininterval:
self.sp.print_status(format_meter(self.n, self.total, cur_t-self.start_t, self.ncols, self.prefix, self.unit, self.unit_format, self.ascii))
if self.dynamic_miniters: self.miniters = max(self.miniters, delta_it)
self.last_print_n = self.n
self.last_print_t = cur_t
|
def update(self, n=1)
|
Manually update the progress bar; useful for streams such as file reads: set init(total=filesize), then call update(len(current_buffer)) in the reading loop.
Parameters
----------
n : int
Increment to add to the internal counter of iterations.
| 4.287746
| 3.926328
| 1.09205
|
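A sketch of the stream pattern the docstring describes (the constructor shape and the process() consumer are assumptions):

import os

path = 'data.bin'
pbar = tqdm(total=os.path.getsize(path), unit='B')
with open(path, 'rb') as f:
    while True:
        chunk = f.read(65536)
        if not chunk:
            break
        process(chunk)            # hypothetical consumer of the data
        pbar.update(len(chunk))   # advance by the bytes actually read
pbar.close()                      # see close() in the next row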
if self.leave:
if self.last_print_n < self.n:
cur_t = time.time()
self.sp.print_status(format_meter(self.n, self.total, cur_t-self.start_t, self.ncols, self.prefix, self.unit, self.unit_format, self.ascii))
self.file.write('\n')
else:
self.sp.print_status('')
self.file.write('\r')
|
def close(self)
|
Force one last print of the progress bar, reflecting the latest value of n.
| 5.675711
| 4.737284
| 1.198094
|
if value not in self._set:
self._set.add(value)
self._list.add(value)
|
def add(self, value)
|
Add the element *value* to the set.
| 3.611907
| 2.836122
| 1.273537
|
return self.__class__(key=self._key, load=self._load, _set=set(self._set))
|
def copy(self)
|
Create a shallow copy of the sorted set.
| 9.535844
| 6.702809
| 1.422664
|
diff = self._set.difference(*iterables)
new_set = self.__class__(key=self._key, load=self._load, _set=diff)
return new_set
|
def difference(self, *iterables)
|
Return a new set with elements in the set that are not in the
*iterables*.
| 4.345424
| 4.292125
| 1.012418
|
comb = self._set.intersection(*iterables)
new_set = self.__class__(key=self._key, load=self._load, _set=comb)
return new_set
|
def intersection(self, *iterables)
|
Return a new set with elements common to the set and all *iterables*.
| 4.907929
| 4.110765
| 1.193921
|
diff = self._set.symmetric_difference(that)
new_set = self.__class__(key=self._key, load=self._load, _set=diff)
return new_set
|
def symmetric_difference(self, that)
|
Return a new set with elements in either *self* or *that* but not both.
| 4.506699
| 3.719363
| 1.211686
|
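All three operations follow the same pattern -- delegate to the backing built-in set, then rebuild a sorted set with the original key and load -- so the observable results match the built-in set (constructing SortedSet from an iterable is assumed):

s = SortedSet([1, 2, 3, 4])
t = [3, 4, 5]
list(s.difference(t))             # [1, 2]
list(s.intersection(t))           # [3, 4]
list(s.symmetric_difference(t))   # [1, 2, 5]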
''' Verify and decode an encoded string. Return an object or None'''
if isinstance(data, unicode): data = data.encode('ascii') #2to3 hack
if cookie_is_encoded(data):
sig, msg = data.split(u'?'.encode('ascii'),1) #2to3 hack
if sig[1:] == base64.b64encode(hmac.new(key, msg).digest()):
return pickle.loads(base64.b64decode(msg))
return None
|
def cookie_decode(data, key)
|
Verify and decode an encoded string. Return an object or None
| 5.146791
| 4.112186
| 1.251595
|
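cookie_decode expects data produced by a matching encoder. A minimal counterpart sketch mirroring the '!' + sig + '?' + msg layout it splits apart (assumed to match the library's cookie_encode; note that hmac.new without a digestmod defaults to MD5 and needs an explicit digestmod on modern Python 3):

import base64, hmac, pickle

def cookie_encode(obj, key):
    # Serialize and sign; cookie_decode strips the leading '!' from the
    # signature (sig[1:]) and splits on the first '?'.
    msg = base64.b64encode(pickle.dumps(obj))
    sig = base64.b64encode(hmac.new(key, msg).digest())
    return u'!'.encode('ascii') + sig + u'?'.encode('ascii') + msg #2to3 hack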
''' Return True if the argument looks like an encoded cookie.'''
return bool(data.startswith(u'!'.encode('ascii')) and u'?'.encode('ascii') in data)
|
def cookie_is_encoded(data)
|
Return True if the argument looks like an encoded cookie.
| 13.10668
| 5.919425
| 2.214181
|
''' Returns a function that turns everything into 'native' strings using enc '''
if sys.version_info >= (3,0,0):
return lambda x: x.decode(enc) if isinstance(x, bytes) else str(x)
return lambda x: x.encode(enc) if isinstance(x, unicode) else str(x)
|
def tonativefunc(enc='utf-8')
|
Returns a function that turns everything into 'native' strings using enc
| 2.995299
| 2.18989
| 1.367785
|
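Illustrative behaviour (bytes decode on Python 3; unicode encodes on Python 2; everything else is passed through str):

tonative = tonativefunc()
tonative(b'caf\xc3\xa9')   # Python 3: 'café'
tonative(42)               # '42' on either interpreter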