code string | signature string | docstring string | loss_without_docstring float64 | loss_with_docstring float64 | factor float64 |
|---|---|---|---|---|---|
# TODO: rename to `mr_dim_idxs` or better yet get rid of need for
# this as it's really a cube internal characteristic.
# TODO: Make this return a tuple in all cases, like (), (1,), or (0, 2).
indices = tuple(
idx
for idx, d in enumerate(self.dimensions)
if d.dimension_type == DT.MR_SUBVAR
)
if indices == ():
return None
if len(indices) == 1:
return indices[0]
return indices | def mr_dim_ind(self) | Return int, tuple of int, or None, representing MR indices.
The return value represents the index of each multiple-response (MR)
dimension in this cube. The return value is None if there are no MR
dimensions, an int if there is one MR dimension, and a tuple of int
when there are more than one. The index is the (zero-based) position
of the MR dimensions in the _ApparentDimensions sequence returned by
the :attr:`.dimensions` property. | 8.873476 | 7.666524 | 1.157431 |
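The None/int/tuple return behaviour described above can be sketched standalone; the `DT` enum below is a minimal stand-in for the real dimension-type enum, and taking a plain list of types (rather than the cube's dimensions) is an assumption for illustration:

```python
from enum import Enum

class DT(Enum):
    # Minimal stand-in for the real dimension-type enum.
    CAT = "categorical"
    MR_SUBVAR = "mr_subvar"

def mr_dim_ind(dimension_types):
    """Return None, int, or tuple of int for MR dimension positions."""
    indices = tuple(
        idx for idx, dt in enumerate(dimension_types) if dt is DT.MR_SUBVAR
    )
    if indices == ():
        return None
    if len(indices) == 1:
        return indices[0]
    return indices
```

The three return shapes are exactly the inconsistency the second TODO above proposes to remove by always returning a tuple.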
population_counts = [
slice_.population_counts(
population_size,
weighted=weighted,
include_missing=include_missing,
include_transforms_for_dims=include_transforms_for_dims,
prune=prune,
)
for slice_ in self.slices
]
if len(population_counts) > 1:
return np.array(population_counts)
return population_counts[0] | def population_counts(
self,
population_size,
weighted=True,
include_missing=False,
include_transforms_for_dims=None,
prune=False,
) | Return counts scaled in proportion to overall population.
The return value is a numpy.ndarray object. Count values are scaled
proportionally to approximate their value if the entire population
had been sampled. This calculation is based on the estimated size of
the population provided as *population_size*. The remaining arguments
have the same meaning as they do for the `.proportions()` method.
Example::
>>> cube = CrunchCube(fixt_cat_x_cat)
>>> cube.as_array()
np.array([
[5, 2],
[5, 3],
])
>>> cube.population_counts(9000)
np.array([
[3000, 1200],
[3000, 1800],
]) | 1.929382 | 2.630751 | 0.733396 |
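The scaling in the docstring example reduces to simple proportional arithmetic; a minimal numpy sketch using the fixture values from the example above:

```python
import numpy as np

counts = np.array([[5, 2], [5, 3]])
population_size = 9000
# Each cell is scaled by population_size / total, so the whole table
# sums to the estimated population size.
population_counts = counts / counts.sum() * population_size
```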
# Calculate numerator from table (include all H&S dimensions).
table = self._measure(weighted).raw_cube_array
num = self._apply_subtotals(
self._apply_missings(table), include_transforms_for_dims
)
proportions = num / self._denominator(
weighted, include_transforms_for_dims, axis
)
if not include_mr_cat:
proportions = self._drop_mr_cat_dims(proportions)
# Apply correct mask (based on the as_array shape)
arr = self.as_array(
prune=prune, include_transforms_for_dims=include_transforms_for_dims
)
if isinstance(arr, np.ma.core.MaskedArray):
proportions = np.ma.masked_array(proportions, arr.mask)
return proportions | def proportions(
self,
axis=None,
weighted=True,
include_transforms_for_dims=None,
include_mr_cat=False,
prune=False,
) | Return percentage values for cube as `numpy.ndarray`.
This function calculates the proportions across the selected axis
of a crunch cube. For most variable types, it means the value divided
by the margin value. For a multiple-response variable, the value is
divided by the sum of selected and non-selected slices.
*axis* (int): base axis of proportions calculation. If no axis is
provided, calculations are done across the entire table.
*weighted* (bool): Specifies weighted or non-weighted proportions.
*include_transforms_for_dims* (list): Include headings and subtotals
(H&S) transformations for the provided dimensions, given as list
elements. For example, `include_transforms_for_dims=[0, 1]` instructs
the CrunchCube to return H&S for both rows and columns (if it's a 2D
cube). If the dimensions don't have the transformations, nothing will
happen (the result will be the same as if the argument weren't
provided).
*include_mr_cat* (bool): Include MR categories.
*prune* (bool): Instructs the CrunchCube to prune empty rows/cols.
Emptiness is determined by the state of the margin (if it's either
0 or nan at a certain index). If it is, the corresponding row/col is
not included in the result.
Example 1::
>>> cube = CrunchCube(fixt_cat_x_cat)
>>> cube.as_array()
np.array([
[5, 2],
[5, 3],
])
>>> cube.proportions()
np.array([
[0.3333333, 0.1333333],
[0.3333333, 0.2000000],
])
Example 2::
>>> cube = CrunchCube(fixt_cat_x_cat)
>>> cube.as_array()
np.array([
[5, 2],
[5, 3],
])
>>> cube.proportions(axis=0)
np.array([
[0.5, 0.4],
[0.5, 0.6],
]) | 4.833557 | 4.919864 | 0.982457 |
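For the non-MR case, the two docstring examples reduce to dividing by the table total or by the column margin; a minimal numpy sketch:

```python
import numpy as np

counts = np.array([[5, 2], [5, 3]])
table_proportions = counts / counts.sum()         # axis=None: whole-table base
column_proportions = counts / counts.sum(axis=0)  # axis=0: column-margin base
```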
table = self._measure(weighted).raw_cube_array
new_axis = self._adjust_axis(axis)
index = tuple(
None if i in new_axis else slice(None) for i, _ in enumerate(table.shape)
)
hs_dims = self._hs_dims_for_den(include_transforms_for_dims, axis)
den = self._apply_subtotals(self._apply_missings(table), hs_dims)
return np.sum(den, axis=new_axis)[index] | def _denominator(self, weighted, include_transforms_for_dims, axis) | Calculate denominator for percentages.
Only include those H&S dimensions across which we DON'T sum. These H&S
are needed so the shapes match when dividing. H&S on dimensions that
are summed across MUST NOT be included, because they would change the
result. | 6.263795 | 5.878859 | 1.065478 |
slices_means = [ScaleMeans(slice_).data for slice_ in self.slices]
if hs_dims and self.ndim > 1:
# Intersperse scale means with nans if H&S specified, and 2D. No
# need to modify 1D, as only one mean will ever be inserted.
inserted_indices = self.inserted_hs_indices()[-2:]
for scale_means in slices_means:
# Scale means 0 corresponds to the column dimension (is
# calculated by using its values). The result of it, however,
# is a row. That's why we need to check the insertions on the
# row dim (inserted columns).
if scale_means[0] is not None and 1 in hs_dims and inserted_indices[1]:
for i in inserted_indices[1]:
scale_means[0] = np.insert(scale_means[0], i, np.nan)
# Scale means 1 is a column, so we need to check
# for row insertions.
if scale_means[1] is not None and 0 in hs_dims and inserted_indices[0]:
for i in inserted_indices[0]:
scale_means[1] = np.insert(scale_means[1], i, np.nan)
if prune:
# Apply pruning
arr = self.as_array(include_transforms_for_dims=hs_dims, prune=True)
if isinstance(arr, np.ma.core.MaskedArray):
mask = arr.mask
for i, scale_means in enumerate(slices_means):
if scale_means[0] is not None:
row_mask = (
mask.all(axis=0) if self.ndim < 3 else mask.all(axis=1)[i]
)
scale_means[0] = scale_means[0][~row_mask]
if self.ndim > 1 and scale_means[1] is not None:
col_mask = (
mask.all(axis=1) if self.ndim < 3 else mask.all(axis=2)[i]
)
scale_means[1] = scale_means[1][~col_mask]
return slices_means | def scale_means(self, hs_dims=None, prune=False) | Get cube means. | 3.829886 | 3.790684 | 1.010342 |
res = [s.zscore(weighted, prune, hs_dims) for s in self.slices]
return np.array(res) if self.ndim == 3 else res[0] | def zscore(self, weighted=True, prune=False, hs_dims=None) | Return ndarray with cube's zscore measurements.
Zscore is a measure of statistical significance of observed vs.
expected counts. It's only applicable to 2D contingency tables.
For 3D cubes, the measures of separate slices are stacked together
and returned as the result.
:param weighted: Use weighted counts for zscores
:param prune: Prune based on unweighted counts
:param hs_dims: Include headers and subtotals (as NaN values)
:returns zscore: ndarray representing zscore measurements | 3.718393 | 4.388909 | 0.847225 |
return [slice_.wishart_pairwise_pvals(axis=axis) for slice_ in self.slices] | def wishart_pairwise_pvals(self, axis=0) | Return matrices of column-comparison p-values as list of numpy.ndarrays.
Square, symmetric matrix along *axis* of pairwise p-values for the
null hypothesis that col[i] = col[j] for each pair of columns.
*axis* (int): axis along which to perform comparison. Only columns (0)
are implemented currently. | 5.559543 | 9.195735 | 0.604578 |
if not self._is_axis_allowed(axis):
ca_error_msg = "Direction {} not allowed (items dimension)"
raise ValueError(ca_error_msg.format(axis))
if isinstance(axis, int):
# If single axis was provided, create a list out of it, so that
# we can do the subsequent iteration.
axis = [axis]
elif axis is None:
# If axis was None, create what the user would expect in terms of
# finding out the Total(s). In case of 2D cube, this will be the
# axis of all the dimensions that the user can see, that is (0, 1),
# because the selections dimension is invisible to the user. In
# case of 3D cube, this will be the "total" across each slice, so
# we need to drop the 0th dimension, and only take last two (1, 2).
axis = range(self.ndim)[-2:]
else:
# In case of a tuple, just keep it as a list.
axis = list(axis)
axis = np.array(axis)
# Create new array for storing updated values of axis. It's necessary
# because it's hard to update the values in place.
new_axis = np.array(axis)
# Iterate over user-visible dimensions, and update axis when MR is
# detected. For each detected MR, we need to increment all subsequent
# axis (that were provided by the user). But we don't need to update
# the axis that are "behind" the current MR.
for i, dim in enumerate(self.dimensions):
if dim.dimension_type == DT.MR_SUBVAR:
# This formula updates only the axis that come "after" the
# current MR (items) dimension.
new_axis[axis >= i] += 1
return tuple(new_axis) | def _adjust_axis(self, axis) | Return raw axis/axes corresponding to apparent axis/axes.
This method adjusts user provided 'axis' parameter, for some of the
cube operations, mainly 'margin'. The user never sees the MR selections
dimension, and treats all MRs as single dimensions. Thus we need to
adjust the values of axis (to sum across) to what the user would've
specified if they were aware of the existence of the MR selections
dimension. The reason for this adjustment is that all of the operations
performed throughout the margin calculations will be carried out on an
internal array, containing all the data (together with all selections).
For more info on how it needs to operate, check the unit tests. | 7.692146 | 7.129601 | 1.078903 |
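The incrementing loop can be exercised in isolation; `adjust_axis` below is a hypothetical standalone version that takes the apparent positions of the MR (items) dimensions explicitly instead of reading them off the cube:

```python
import numpy as np

def adjust_axis(axis, mr_positions):
    # Replicates the loop above: for each MR dimension at apparent
    # position i, every requested axis >= i shifts right by one to
    # account for the hidden selections dimension.
    axis = np.array([axis] if isinstance(axis, int) else list(axis))
    new_axis = axis.copy()
    for i in mr_positions:
        new_axis[axis >= i] += 1
    return tuple(new_axis)
```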
# Create a copy, to preserve the cached property
updated_inserted = [[i for i in dim_inds] for dim_inds in inserted_indices_list]
pruned_and_inserted = zip(prune_indices_list, updated_inserted)
for prune_inds, inserted_inds in pruned_and_inserted:
# Only prune indices if they're not H&S (inserted)
prune_inds = prune_inds[~np.in1d(prune_inds, inserted_inds)]
for i, ind in enumerate(inserted_inds):
ind -= np.sum(prune_inds < ind)
inserted_inds[i] = ind
return updated_inserted | def _adjust_inserted_indices(inserted_indices_list, prune_indices_list) | Adjust inserted indices, if there are pruned elements. | 4.569804 | 4.381442 | 1.042991 |
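The core shift — each pruned element below an inserted index moves that index left by one — can be shown on its own. `shift_inserted_index` is a hypothetical helper, not part of the class:

```python
import numpy as np

def shift_inserted_index(inserted_ind, prune_inds):
    # Count how many pruned positions precede the inserted index and
    # shift it left by that amount.
    return inserted_ind - int(np.sum(np.asarray(prune_inds) < inserted_ind))
```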
# --element idxs that satisfy `include_missing` arg. Note this
# --includes MR_CAT elements so is essentially all-or-valid-elements
element_idxs = tuple(
(
d.all_elements.element_idxs
if include_missing
else d.valid_elements.element_idxs
)
for d in self._all_dimensions
)
return res[np.ix_(*element_idxs)] if element_idxs else res | def _apply_missings(self, res, include_missing=False) | Return ndarray with missing and insertions as specified.
The return value is the result of the following operations on *res*,
which is a raw cube value array (raw meaning it has the shape of the original
cube response).
* Remove vectors (rows/cols) for missing elements if *include_missing*
is False.
Note that it does *not* include pruning. | 12.664193 | 12.763754 | 0.9922 |
if not include_transforms_for_dims:
return res
suppressed_dim_count = 0
for (dim_idx, dim) in enumerate(self._all_dimensions):
if dim.dimension_type == DT.MR_CAT:
suppressed_dim_count += 1
# ---only marginable dimensions can be subtotaled---
if not dim.is_marginable:
continue
apparent_dim_idx = dim_idx - suppressed_dim_count
transform = (
dim.has_transforms and apparent_dim_idx in include_transforms_for_dims
)
if not transform:
continue
# ---insert subtotals into result array---
insertions = self._insertions(res, dim, dim_idx)
res = self._update_result(res, insertions, dim_idx)
return res | def _apply_subtotals(self, res, include_transforms_for_dims) | * Insert subtotals (and perhaps other insertions later) for
dimensions having their apparent dimension-idx in
*include_transforms_for_dims*. | 4.763191 | 4.303227 | 1.106888 |
return self._apply_subtotals(
self._apply_missings(
self._measure(weighted).raw_cube_array, include_missing=include_missing
),
include_transforms_for_dims,
) | def _as_array(
self,
include_missing=False,
get_non_selected=False,
weighted=True,
include_transforms_for_dims=False,
) | Get crunch cube as ndarray.
Args
include_missing (bool): Include rows/cols for missing values.
get_non_selected (bool): Get non-selected slices for MR vars.
weighted (bool): Take weighted or unweighted counts.
include_transforms_for_dims (list): For which dims to
include headings & subtotals (H&S) transformations.
Returns
res (ndarray): Tabular representation of crunch cube | 9.243688 | 9.058848 | 1.020404 |
if axis not in [0, 1]:
raise ValueError("Unexpected value for `axis`: {}".format(axis))
V = prop_table * (1 - prop_table)
if axis == 0:
# If axis is 0, summation is performed across the 'i' index, which
# requires the matrix to be multiplied from the right
# (because of the inner matrix dimensions).
return np.dot(V, prop_margin)
elif axis == 1:
# If axis is 1, summation is performed across the 'j' index, which
# requires the matrix to be multiplied from the left
# (because of the inner matrix dimensions).
return np.dot(prop_margin, V) | def _calculate_constraints_sum(cls, prop_table, prop_margin, axis) | Calculate sum of constraints (part of the standard error equation).
This method calculates the sum of the cell proportions multiplied by
row (or column) marginal proportions (margins divided by the total
count). It does this by utilizing matrix multiplication, which
directly translates to the mathematical definition (the sum
across i and j indices). | 3.624884 | 3.355461 | 1.080294 |
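The equivalence between the dot products and the explicit weighted sums can be checked directly; the fixture values here are arbitrary:

```python
import numpy as np

prop_table = np.array([[0.2, 0.1], [0.3, 0.4]])
prop_margin = np.array([0.6, 0.4])
V = prop_table * (1 - prop_table)  # elementwise p * (1 - p)

# Multiplying from the right sums across the column index j.
right = np.dot(V, prop_margin)
# Multiplying from the left sums across the row index i.
left = np.dot(prop_margin, V)
```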
return (
self._measures.weighted_counts
if weighted
else self._measures.unweighted_counts
) | def _counts(self, weighted) | Return _BaseMeasure subclass for *weighted* counts.
The return value is a _WeightedCountMeasure object if *weighted* is
True and the cube response is weighted. Otherwise it is an
_UnweightedCountMeasure object. Any means measure that may be present
is not considered. Contrast with `._measure()` below. | 5.812038 | 4.798891 | 1.211121 |
try:
cube_response = self._cube_response_arg
# ---parse JSON to a dict when constructed with JSON---
cube_dict = (
cube_response
if isinstance(cube_response, dict)
else json.loads(cube_response)
)
# ---cube is 'value' item in a shoji response---
return cube_dict.get("value", cube_dict)
except TypeError:
raise TypeError(
"Unsupported type <%s> provided. Cube response must be JSON "
"(str) or dict." % type(self._cube_response_arg).__name__
) | def _cube_dict(self) | dict containing raw cube response, parsed from JSON payload. | 7.143432 | 6.179585 | 1.155973 |
# TODO: We cannot arbitrarily drop any dimension simply because it
# has a length (shape) of 1. We must target MR_CAT dimensions
# specifically. Otherwise unexpected results can occur based on
# accidents of cube category count etc. If "user-friendly" reshaping
# needs to be done, it should be as a very last step and much safer to
# leave that to the cr.cube client; software being "helpful" almost
# never is.
if not array.shape or len(array.shape) != len(self._all_dimensions):
# This condition covers two cases:
# 1. In case of no dimensions, the shape of the array is empty
# 2. If the shape was already fixed, we don't need to fix it again.
# This might happen while constructing the masked arrays. In case
# of MR, we will have the selections dimension included throughout
# the calculations, and will only remove it before returning the
# result to the user.
return array
# We keep MR selections (MR_CAT) dimensions in the array, all the way
# up to here. At this point, we need to remove the non-selected part of
# selections dimension (and subsequently purge the dimension itself).
display_ind = (
tuple(
0 if dim.dimension_type == DT.MR_CAT else slice(None)
for dim, n in zip(self._all_dimensions, array.shape)
)
if not fix_valids
else np.ix_(
*[
dim.valid_elements.element_idxs if n > 1 else [0]
for dim, n in zip(self._all_dimensions, array.shape)
]
)
)
array = array[display_ind]
# If a first dimension only has one element, we don't want to
# remove it from the shape. Hence the i == 0 part. For other dimensions
# that have one element, it means that these are the remnants of the MR
# selections, which we don't need as separate dimensions.
new_shape = [
length for (i, length) in enumerate(array.shape) if length != 1 or i == 0
]
return array.reshape(new_shape) | def _drop_mr_cat_dims(self, array, fix_valids=False) | Return ndarray reflecting *array* with MR_CAT dims dropped.
If any (except 1st) dimension has a single element, it is
flattened in the resulting array (which is more convenient for the
users of the CrunchCube).
If the original shape of the cube is needed (e.g. to calculate the
margins with correct axis arguments), this needs to happen before the
call to this method '_drop_mr_cat_dims'. | 9.240646 | 9.013028 | 1.025254 |
# TODO: make this accept an immutable sequence for valid_indices
# (a tuple) and return an immutable sequence rather than mutating an
# argument.
indices = np.array(sorted(valid_indices[dim]))
slice_index = np.sum(indices <= insertion_index)
indices[slice_index:] += 1
indices = np.insert(indices, slice_index, insertion_index + 1)
valid_indices[dim] = indices.tolist()
return valid_indices | def _fix_valid_indices(cls, valid_indices, insertion_index, dim) | Add indices for H&S inserted elements. | 4.763848 | 4.570493 | 1.042305 |
def iter_insertions():
for anchor_idx, addend_idxs in dimension.hs_indices:
insertion_idx = (
-1
if anchor_idx == "top"
else result.shape[dimension_index] - 1
if anchor_idx == "bottom"
else anchor_idx
)
addend_fancy_idx = tuple(
[slice(None) for _ in range(dimension_index)]
+ [np.array(addend_idxs)]
)
yield (
insertion_idx,
np.sum(result[addend_fancy_idx], axis=dimension_index),
)
return [insertion for insertion in iter_insertions()] | def _insertions(self, result, dimension, dimension_index) | Return list of (idx, sum) pairs representing subtotals.
*idx* is the int offset at which to insert the ndarray subtotal
*sum*. | 4.141159 | 3.937025 | 1.05185 |
if axis is None:
# If table direction was requested, we must ensure that each slice
# doesn't have the CA items dimension (thus the [-2:] part). It's
# OK for the 0th dimension to be items, since no calculation is
# performed over it.
if DT.CA_SUBVAR in self.dim_types[-2:]:
return False
return True
if isinstance(axis, int):
if self.ndim == 1 and axis == 1:
# Special allowed case of a 1D cube, where the "row"
# direction is requested.
return True
axis = [axis]
# ---axis is a tuple---
for dim_idx in axis:
if self.dim_types[dim_idx] == DT.CA_SUBVAR:
# If any of the directions explicitly asked for directly
# corresponds to the CA items dimension, the requested
# calculation is not valid.
return False
return True | def _is_axis_allowed(self, axis) | Check if axis is allowed.
In case the calculation is requested over CA items dimension, it is not
valid. It's valid in all other cases. | 9.997746 | 8.307426 | 1.203471 |
return (
self._measures.means
if self._measures.means is not None
else self._measures.weighted_counts
if weighted
else self._measures.unweighted_counts
) | def _measure(self, weighted) | _BaseMeasure subclass representing primary measure for this cube.
If the cube response includes a means measure, the return value is
means. Otherwise it is counts, with the choice between weighted or
unweighted determined by *weighted*.
Note that weighted counts are provided on an "as-available" basis.
When *weighted* is True and the cube response is not weighted,
unweighted counts are returned. | 4.512526 | 3.575178 | 1.262182 |
mask = np.zeros(res.shape)
mr_dim_idxs = self.mr_dim_ind
for i, prune_inds in enumerate(self.prune_indices(transforms)):
rows_pruned = prune_inds[0]
cols_pruned = prune_inds[1]
rows_pruned = np.repeat(rows_pruned[:, None], len(cols_pruned), axis=1)
cols_pruned = np.repeat(cols_pruned[None, :], len(rows_pruned), axis=0)
slice_mask = np.logical_or(rows_pruned, cols_pruned)
# In case of MRs we need to "inflate" mask
if mr_dim_idxs == (1, 2):
slice_mask = slice_mask[:, np.newaxis, :, np.newaxis]
elif mr_dim_idxs == (0, 1):
slice_mask = slice_mask[np.newaxis, :, np.newaxis, :]
elif mr_dim_idxs == (0, 2):
slice_mask = slice_mask[np.newaxis, :, :, np.newaxis]
elif mr_dim_idxs == 1 and self.ndim == 3:
slice_mask = slice_mask[:, np.newaxis, :]
elif mr_dim_idxs == 2 and self.ndim == 3:
slice_mask = slice_mask[:, :, np.newaxis]
mask[i] = slice_mask
res = np.ma.masked_array(res, mask=mask)
return res | def _prune_3d_body(self, res, transforms) | Return masked array where mask indicates pruned vectors.
*res* is an ndarray (result). *transforms* is a list of ... | 2.324731 | 2.290228 | 1.015065 |
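The "inflation" above is just inserting length-1 axes so the 2D (rows x cols) slice mask broadcasts against the raw 4D (or 3D) array; for example, for the MR-by-MR `(1, 2)` case:

```python
import numpy as np

slice_mask = np.array([[True, False], [False, True]])  # rows x cols
# Both dims are MR: interleave selection axes -> shape (2, 1, 2, 1),
# which broadcasts against a raw (2, k, 2, m) cube slice.
inflated = slice_mask[:, np.newaxis, :, np.newaxis]
```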
if self.ndim > 2:
return self._prune_3d_body(res, transforms)
res = self._drop_mr_cat_dims(res)
# ---determine which rows should be pruned---
row_margin = self._pruning_base(
hs_dims=transforms, axis=self.row_direction_axis
)
# ---adjust special-case row-margin values---
item_types = (DT.MR, DT.CA_SUBVAR)
if self.ndim > 1 and self.dim_types[1] in item_types and len(res.shape) > 1:
# ---when row-dimension has only one category it gets squashed---
axis = 1 if res.shape[0] > 1 else None
# ---in CAT x MR case (or if it has CA subvars) we get
# a 2D margin (denom really)---
row_margin = np.sum(row_margin, axis=axis)
row_prune_inds = self._margin_pruned_indices(
row_margin, self._inserted_dim_inds(transforms, 0), 0
)
# ---a 1D only has rows, so mask only with row-prune-idxs---
if self.ndim == 1 or len(res.shape) == 1:
# For 1D, margin is calculated as the row margin.
return np.ma.masked_array(res, mask=row_prune_inds)
# ---determine which columns should be pruned---
col_margin = self._pruning_base(
hs_dims=transforms, axis=self._col_direction_axis
)
if col_margin.ndim > 1:
# In case of MR x CAT, we have 2D margin
col_margin = np.sum(col_margin, axis=0)
col_prune_inds = self._margin_pruned_indices(
col_margin, self._inserted_dim_inds(transforms, 1), 1
)
# ---create rows x cols mask and mask the result array---
mask = self._create_mask(res, row_prune_inds, col_prune_inds)
res = np.ma.masked_array(res, mask=mask)
# ---return the masked array---
return res | def _prune_body(self, res, transforms=None) | Return a masked version of *res* where pruned rows/cols are masked.
Return value is an `np.ma.MaskedArray` object. Pruning is the removal
of rows or columns whose corresponding marginal elements are either
0 or not defined (np.nan). | 5.162539 | 5.004422 | 1.031595 |
if self.ndim >= 3:
# In case of a 3D cube, return list of tuples
# (of row and col pruned indices).
return self._prune_3d_indices(transforms)
def prune_non_3d_indices(transforms):
row_margin = self._pruning_base(
hs_dims=transforms, axis=self.row_direction_axis
)
row_indices = self._margin_pruned_indices(
row_margin, self._inserted_dim_inds(transforms, 0), 0
)
if row_indices.ndim > 1:
# In case of MR, we'd have 2D prune indices
row_indices = row_indices.all(axis=1)
if self.ndim == 1:
return [row_indices]
col_margin = self._pruning_base(
hs_dims=transforms, axis=self._col_direction_axis
)
col_indices = self._margin_pruned_indices(
col_margin, self._inserted_dim_inds(transforms, 1), 1
)
if col_indices.ndim > 1:
# In case of MR, we'd have 2D prune indices
col_indices = col_indices.all(axis=0)
return [row_indices, col_indices]
# In case of 1D or 2D cubes, return a list of
# row indices (or row and col indices)
return prune_non_3d_indices(transforms) | def prune_indices(self, transforms=None) | Return indices of pruned rows and columns as list.
The return value has one of three possible forms:
* a 1-element list of row indices (in case of 1D cube)
* 2-element list of row and col indices (in case of 2D cube)
* n-element list of tuples of 2 elements (if it's a 3D cube).
For each case, the 2 elements are the ROW and COL indices of the
elements that need to be pruned. If it's a 3D cube, these indices are
calculated "per slice", that is NOT on the 0th dimension (as the 0th
dimension represents the slices). | 3.655079 | 3.375757 | 1.082744 |
if not self._is_axis_allowed(axis):
# In case we encountered axis that would go across items dimension,
# we need to return at least some result, to prevent explicitly
# checking for this condition, wherever self._margin is used
return self.as_array(weighted=False, include_transforms_for_dims=hs_dims)
# In case of allowed axis, just return the normal API margin. This call
# would throw an exception when directly invoked with bad axis. This is
# intended, because we want to be as explicit as possible. Margins
# across items are not allowed.
return self.margin(
axis=axis, weighted=False, include_transforms_for_dims=hs_dims
) | def _pruning_base(self, axis=None, hs_dims=None) | Gets margin if across CAT dimension. Gets counts if across items.
Categorical variables are pruned based on their marginal values. If the
marginal is a 0 or a NaN, the corresponding row/column is pruned. In
case of a subvars (items) dimension, we only prune if all the counts
of the corresponding row/column are zero. | 14.057162 | 13.461461 | 1.044252 |
for j, (ind_insertion, value) in enumerate(insertions):
result = np.insert(
result, ind_insertion + j + 1, value, axis=dimension_index
)
return result | def _update_result(self, result, insertions, dimension_index) | Insert subtotals into resulting ndarray. | 4.469639 | 3.843829 | 1.162809 |
cube_dict = self._cube_dict
if cube_dict.get("query", {}).get("weight") is not None:
return True
if cube_dict.get("weight_var") is not None:
return True
if cube_dict.get("weight_url") is not None:
return True
unweighted_counts = cube_dict["result"]["counts"]
count_data = cube_dict["result"]["measures"].get("count", {}).get("data")
if unweighted_counts != count_data:
return True
return False | def is_weighted(self) | True if weights have been applied to the measure(s) for this cube.
Unweighted counts are available for all cubes. Weighting applies to
any other measures provided by the cube. | 3.792822 | 3.525736 | 1.075753 |
mean_measure_dict = (
self._cube_dict.get("result", {}).get("measures", {}).get("mean")
)
if mean_measure_dict is None:
return None
return _MeanMeasure(self._cube_dict, self._all_dimensions) | def means(self) | _MeanMeasure object providing access to means values.
None when the cube response does not contain a mean measure. | 6.356893 | 4.092938 | 1.553137 |
if self.means:
return self.means.missing_count
return self._cube_dict["result"].get("missing", 0) | def missing_count(self) | numeric representing count of missing rows in cube response. | 13.549359 | 8.73512 | 1.551136 |
numerator = self._cube_dict["result"].get("filtered", {}).get("weighted_n")
denominator = self._cube_dict["result"].get("unfiltered", {}).get("weighted_n")
try:
return numerator / denominator
except ZeroDivisionError:
return np.nan
except Exception:
return 1.0 | def population_fraction(self) | The filtered/unfiltered ratio for cube response.
This value is required for properly calculating population on a cube
where a filter has been applied. Returns 1.0 for an unfiltered cube.
Returns `np.nan` if the unfiltered count is zero, which would
otherwise result in a divide-by-zero error. | 4.983001 | 3.376183 | 1.475927 |
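The exception cascade can be reproduced standalone; the broad `except Exception` mirrors the body above and in practice catches the `TypeError` raised when the counts are absent (`None`):

```python
import math

def population_fraction(filtered_n, unfiltered_n):
    try:
        return filtered_n / unfiltered_n
    except ZeroDivisionError:
        return float("nan")   # unfiltered count of zero
    except Exception:
        return 1.0            # counts absent -> treat cube as unfiltered
```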
if not self.is_weighted:
return self.unweighted_counts
return _WeightedCountMeasure(self._cube_dict, self._all_dimensions) | def weighted_counts(self) | _WeightedCountMeasure object for this cube.
This object provides access to weighted counts for this cube, if
available. If the cube response is not weighted, the
_UnweightedCountMeasure object for this cube is returned. | 12.220836 | 5.575602 | 2.191842 |
if not self.is_weighted:
return float(self.unweighted_n)
return float(sum(self._cube_dict["result"]["measures"]["count"]["data"])) | def weighted_n(self) | float count of returned rows adjusted for weighting. | 12.251472 | 9.386472 | 1.305226 |
array = np.array(self._flat_values).reshape(self._all_dimensions.shape)
# ---must be read-only to avoid hard-to-find bugs---
array.flags.writeable = False
return array | def raw_cube_array(self) | Return read-only ndarray of measure values from cube-response.
The shape of the ndarray mirrors the shape of the (raw) cube
response. Specifically, it includes values for missing elements, any
MR_CAT dimensions, and any prunable rows and columns. | 9.450114 | 8.784028 | 1.075829 |
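The read-only protection is standard numpy: once `flags.writeable` is cleared, any in-place write raises `ValueError`:

```python
import numpy as np

arr = np.arange(6, dtype=float).reshape(2, 3)
arr.flags.writeable = False  # freeze: in-place writes now raise ValueError
try:
    arr[0, 0] = 99.0
    mutated = True
except ValueError:
    mutated = False
```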
return tuple(
np.nan if type(x) is dict else x
for x in self._cube_dict["result"]["measures"]["mean"]["data"]
) | def _flat_values(self) | Return tuple of mean values as found in cube response.
Mean data may include missing items represented by a dict like
{'?': -1} in the cube response. These are replaced by np.nan in the
returned value. | 13.106736 | 6.402826 | 2.047024 |
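The dict-to-nan replacement is a one-liner; the raw list below imitates the `{'?': -1}` missing marker described above:

```python
import math

raw = [1.5, {"?": -1}, 2.0]  # hypothetical cube-response mean data
flat_values = tuple(
    float("nan") if type(x) is dict else x for x in raw
)
```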
input_dataframe_by_entity = dict()
person_entity = [entity for entity in tax_benefit_system.entities if entity.is_person][0]
person_id = np.arange(nb_persons)
input_dataframe_by_entity[person_entity.key] = pd.DataFrame({
person_entity.key + '_id': person_id,
})
input_dataframe_by_entity[person_entity.key].set_index('person_id')
#
adults = [0] + sorted(random.sample(range(1, nb_persons), nb_groups - 1))
members_entity_id = np.empty(nb_persons, dtype = int)
# A legacy role is an index that every person within an entity has.
# For instance, the 'first_parent' has legacy role 0, the 'second_parent' 1, the first 'child' 2, the second 3, etc.
members_legacy_role = np.empty(nb_persons, dtype = int)
id_group = -1
for id_person in range(nb_persons):
if id_person in adults:
id_group += 1
legacy_role = 0
else:
legacy_role = 2 if legacy_role == 0 else legacy_role + 1
members_legacy_role[id_person] = legacy_role
members_entity_id[id_person] = id_group
for entity in tax_benefit_system.entities:
if entity.is_person:
continue
key = entity.key
person_dataframe = input_dataframe_by_entity[person_entity.key]
person_dataframe[key + '_id'] = members_entity_id
person_dataframe[key + '_legacy_role'] = members_legacy_role
person_dataframe[key + '_role'] = np.where(
members_legacy_role == 0, entity.flattened_roles[0].key, entity.flattened_roles[-1].key)
input_dataframe_by_entity[key] = pd.DataFrame({
key + '_id': range(nb_groups)
})
input_dataframe_by_entity[key].set_index(key + '_id', inplace = True)
return input_dataframe_by_entity | def make_input_dataframe_by_entity(tax_benefit_system, nb_persons, nb_groups) | Generate a dictionary of dataframes containing nb_persons persons spread in nb_groups groups.
:param TaxBenefitSystem tax_benefit_system: the tax_benefit_system to use
:param int nb_persons: the number of persons in the system
:param int nb_groups: the number of collective entities in the system
:returns: A dictionary whose keys are entities and values the corresponding data frames
Example:
>>> from openfisca_survey_manager.input_dataframe_generator import make_input_dataframe_by_entity
>>> from openfisca_country_template import CountryTaxBenefitSystem
>>> tbs = CountryTaxBenefitSystem()
>>> input_dataframe_by_entity = make_input_dataframe_by_entity(tbs, 400, 100)
>>> sorted(input_dataframe_by_entity['person'].columns.tolist())
['household_id', 'household_legacy_role', 'household_role', 'person_id']
>>> sorted(input_dataframe_by_entity['household'].columns.tolist())
[] | 2.65708 | 2.552596 | 1.040932 |
variable = tax_benefit_system.variables[variable_name]
entity = variable.entity
if condition is None:
condition = True
else:
condition = input_dataframe_by_entity[entity.key].eval(condition).values
if seed is None:
seed = 42
np.random.seed(seed)
count = len(input_dataframe_by_entity[entity.key])
value = (np.random.rand(count) * max_value * condition).astype(variable.dtype)
input_dataframe_by_entity[entity.key][variable_name] = value | def randomly_init_variable(tax_benefit_system, input_dataframe_by_entity, variable_name, max_value, condition = None, seed = None) | Initialise a variable with random values (from 0 to max_value).
If a condition vector is provided, only set the value of persons or groups for which condition is True.
Example:
>>> from openfisca_survey_manager.input_dataframe_generator import make_input_dataframe_by_entity
>>> from openfisca_country_template import CountryTaxBenefitSystem
>>> tbs = CountryTaxBenefitSystem()
>>> input_dataframe_by_entity = make_input_dataframe_by_entity(tbs, 400, 100)
>>> randomly_init_variable(tbs, input_dataframe_by_entity, 'salary', max_value = 50000, condition = "household_role == 'first_parent'") # Randomly set a salary between 0 and 50000 for first parents only
>>> sorted(input_dataframe_by_entity['person'].columns.tolist())
['household_id', 'household_legacy_role', 'household_role', 'person_id', 'salary']
>>> input_dataframe_by_entity['person'].salary.max() <= 50000
True
>>> len(input_dataframe_by_entity['person'].salary)
400
>>> randomly_init_variable(tbs, input_dataframe_by_entity, 'rent', max_value = 1000)
>>> sorted(input_dataframe_by_entity['household'].columns.tolist())
['rent']
>>> input_dataframe_by_entity['household'].rent.max() <= 1000
True
>>> input_dataframe_by_entity['household'].rent.max() >= 1
True
>>> len(input_dataframe_by_entity['household'].rent)
100 | 2.719612 | 2.912966 | 0.933623 |
assert variable is not None, "A variable is needed"
if table not in self.tables:
log.error("Table {} is not found in survey tables".format(table))
df = self.get_values([variable], table)
return df | def get_value(self, variable = None, table = None) | Get value
Parameters
----------
variable : string
name of the variable
table : string, default None
name of the table hosting the variable
Returns
-------
df : DataFrame, default None
A DataFrame containing the variable | 5.545197 | 6.756849 | 0.820678 |
assert self.hdf5_file_path is not None
assert os.path.exists(self.hdf5_file_path), '{} is not a valid path'.format(
self.hdf5_file_path)
store = pandas.HDFStore(self.hdf5_file_path)
try:
df = store.select(table)
except KeyError:
log.error('No table {} in the file {}'.format(table, self.hdf5_file_path))
log.error('Table(s) available are: {}'.format(store.keys()))
store.close()
raise
if lowercase:
columns = dict((column_name, column_name.lower()) for column_name in df)
df.rename(columns = columns, inplace = True)
if rename_ident is True:
for column_name in df:
if ident_re.match(str(column_name)) is not None:
df.rename(columns = {column_name: "ident"}, inplace = True)
log.info("{} column has been renamed to ident".format(column_name))
break
if variables is None:
return df
else:
diff = set(variables) - set(df.columns)
if diff:
raise Exception("The following variable(s) {} are missing".format(diff))
variables = list(set(variables).intersection(df.columns))
df = df[variables]
return df | def get_values(self, variables = None, table = None, lowercase = False, rename_ident = True) | Get values
Parameters
----------
variables : list of strings, default None
list of variables names, if None return the whole table
table : string, default None
name of the table hosting the variables
lowercase : boolean, default True
put variables of the table into lowercase
rename_ident : boolean, default True
rename variables ident+yr (e.g. ident08) into ident
Returns
-------
df : DataFrame, default None
A DataFrame containing the variables | 2.532968 | 2.588837 | 0.978419 |
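The `rename_ident` behaviour can be sketched on a plain column list. Note the real `ident_re` is defined elsewhere in the module, so the pattern below (`ident` followed by a two-digit year, as in the docstring's `ident08` example) is an assumption:

```python
import re

# Assumed pattern: the real ident_re lives elsewhere in the module.
ident_re = re.compile(r"ident\d{2}$")

def rename_ident_columns(columns):
    """Rename the first column matching ident_re to 'ident',
    mirroring the loop-and-break above."""
    renamed = list(columns)
    for i, name in enumerate(renamed):
        if ident_re.match(str(name)):
            renamed[i] = "ident"
            break
    return renamed

print(rename_ident_columns(["ident08", "salary"]))  # → ['ident', 'salary']
```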
data_frame = kwargs.pop('data_frame', None)
if data_frame is None:
data_frame = kwargs.pop('dataframe', None)
to_hdf_kwargs = kwargs.pop('to_hdf_kwargs', dict())
if data_frame is not None:
    assert isinstance(data_frame, pandas.DataFrame)
if label is None:
label = name
table = Table(label = label, name = name, survey = self)
assert table.survey.hdf5_file_path is not None
log.debug("Saving table {} in {}".format(name, table.survey.hdf5_file_path))
table.save_data_frame(data_frame, **to_hdf_kwargs)
if name not in self.tables:
self.tables[name] = dict()
for key, val in kwargs.items():
self.tables[name][key] = val | def insert_table(self, label = None, name = None, **kwargs) | Insert a table in the Survey object | 2.541255 | 2.439422 | 1.041745 |
def formula(entity, period):
value = entity(variable, period)
if weight_variable is not None:
    weight = entity(weight_variable, period)
else:
    weight = entity.filled_array(1)
if filter_variable is not None:
filter_value = entity(filter_variable, period)
weight = filter_value * weight
labels = arange(1, q + 1)
quantile, _ = weightedcalcs_quantiles(
value,
labels,
weight,
return_quantiles = True,
)
if filter_variable is not None:
quantile = where(weight > 0, quantile, -1)
return quantile
return formula | def quantile(q, variable, weight_variable = None, filter_variable = None) | Return the quantile of a variable, weighted by a specific weight variable and potentially filtered by a filter variable | 4.364223 | 4.636698 | 0.941235 |
with open("../waliki/__init__.py") as fh:
for line in fh:
if line.startswith("__version__ = "):
return line.split("=")[-1].strip().strip("'").strip('"') | def _get_version() | Get the version from package itself. | 3.180027 | 3.098999 | 1.026147 |
rst = rst_content.split('\n')
for i, line in enumerate(rst):
if line.startswith('#'):
continue
break
return '\n'.join(rst[i:]) | def clean_meta(rst_content) | Remove MoinMoin metadata from the top of the file | 3.240746 | 3.063663 | 1.057801 |
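A self-contained sketch of the same stripping logic; it also handles the edge case where every line is metadata, which the enumerate-and-break version above misses (there, `i` stays on the last `#` line):

```python
def clean_meta(rst_content):
    """Drop leading '#'-prefixed MoinMoin metadata lines."""
    lines = rst_content.split('\n')
    i = 0
    while i < len(lines) and lines[i].startswith('#'):
        i += 1
    return '\n'.join(lines[i:])

print(clean_meta('#format rst\n#language en\nTitle'))  # → Title
```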
from waliki.plugins import get_plugins
includes = []
for plugin in get_plugins():
template_name = 'waliki/%s_%s.html' % (plugin.slug, block_name)
try:
# template exists
template.loader.get_template(template_name)
includes.append(template_name)
except template.TemplateDoesNotExist:
continue
context.update({'includes': includes})
return context | def entry_point(context, block_name) | Include a snippet at the bottom of a block, if it exists
For example, if the plugin with slug 'attachments' is registered
waliki/attachments_edit_content.html will be included with
{% entry_point 'edit_content' %}
which is declared at the bottom of the block 'content' in edit.html | 3.433248 | 2.912082 | 1.178967 |
bits = token.split_contents()
format = '{% check_perms "perm1[, perm2, ...]" for user in slug as "context_var" %}'
if len(bits) != 8 or bits[2] != 'for' or bits[4] != "in" or bits[6] != 'as':
raise template.TemplateSyntaxError("check_perms tag should be in "
"format: %s" % format)
perms = bits[1]
user = bits[3]
slug = bits[5]
context_var = bits[7]
if perms[0] != perms[-1] or perms[0] not in ('"', "'"):
raise template.TemplateSyntaxError("check_perms tag's perms "
"argument should be in quotes")
if context_var[0] != context_var[-1] or context_var[0] not in ('"', "'"):
raise template.TemplateSyntaxError("check_perms tag's context_var "
"argument should be in quotes")
context_var = context_var[1:-1]
return CheckPermissionsNode(perms, user, slug, context_var) | def check_perms(parser, token) | Checks the given permissions for a ``user`` on the page identified by ``slug``, and stores the boolean result in a context variable.
Parses ``check_perms`` tag which should be in format::
{% check_perms "perm1[, perm2, ...]" for user in slug as "context_var" %}
or
{% check_perms "perm1[, perm2, ...]" for user in "slug" as "context_var" %}
.. note::
Make sure that you set and use those permissions in same template
block (``{% block %}``).
Example of usage (assuming ``page`` objects are available from *context*)::
{% check_perms "delete_page" for request.user in page.slug as "can_delete" %}
{% if can_delete %}
...
{% endif %} | 2.574607 | 2.144891 | 1.200344 |
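The token validation can be exercised standalone. Here a plain `str.split` stands in for Django's `token.split_contents()` (which, unlike `split`, keeps quoted strings containing spaces intact):

```python
def parse_check_perms(token):
    """Mirror of the validation above, on a raw tag string."""
    bits = token.split()
    if len(bits) != 8 or bits[2] != 'for' or bits[4] != 'in' or bits[6] != 'as':
        raise ValueError("check_perms tag is malformed")
    perms, user, slug, context_var = bits[1], bits[3], bits[5], bits[7]
    for quoted in (perms, context_var):
        if quoted[0] != quoted[-1] or quoted[0] not in ('"', "'"):
            raise ValueError("argument should be in quotes")
    return perms[1:-1], user, slug, context_var[1:-1]

print(parse_check_perms(
    'check_perms "delete_page" for request.user in page.slug as "can_delete"'))
# → ('delete_page', 'request.user', 'page.slug', 'can_delete')
```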
request = context["request"]
try:
page = Page.objects.get(slug=slug)
except Page.DoesNotExist:
page = None
if (page and check_perms_helper('change_page', request.user, slug)
or (not page and check_perms_helper('add_page', request.user, slug))):
form = PageForm(instance=page, initial={'slug': slug})
form_action = reverse("waliki_edit", args=[slug])
else:
form = None
form_action = None
return {
"request": request,
"slug": slug,
"label": slug.replace('/', '_'),
"page": page,
"form": form,
"form_action": form_action,
} | def waliki_box(context, slug, show_edit=True, *args, **kwargs) | A templatetag to render a wiki page's content as a box in any webpage,
and allow rapid edition if you have permission.
It's inspired by `django-boxes`_
.. _django-boxes: https://github.com/eldarion/django-boxes | 2.62531 | 2.620841 | 1.001705 |
if isinstance(perms, string_types):
perms = {perms}
else:
perms = set(perms)
allowed_users = ACLRule.get_users_for(perms, slug)
if allowed_users:
return user in allowed_users
if perms.issubset(set(WALIKI_ANONYMOUS_USER_PERMISSIONS)):
return True
if is_authenticated(user) and perms.issubset(set(WALIKI_LOGGED_USER_PERMISSIONS)):
return True
# First check if the user has the permission (even anon users)
if user.has_perms(['waliki.%s' % p for p in perms]):
return True
# In case the 403 handler should be called raise the exception
if raise_exception:
raise PermissionDenied
# As the last resort, show the login form
return False | def check_perms(perms, user, slug, raise_exception=False) | A helper used to check if a user has the permissions
for a given slug | 3.868529 | 3.966706 | 0.97525 |
def decorator(view_func):
@wraps(view_func, assigned=available_attrs(view_func))
def _wrapped_view(request, *args, **kwargs):
if check_perms(perms, request.user, kwargs['slug'], raise_exception=raise_exception):
return view_func(request, *args, **kwargs)
if is_authenticated(request.user):
if WALIKI_RENDER_403:
return render(request, 'waliki/403.html', kwargs, status=403)
else:
raise PermissionDenied
path = request.build_absolute_uri()
# urlparse chokes on lazy objects in Python 3, force to str
resolved_login_url = force_str(
resolve_url(login_url or settings.LOGIN_URL))
# If the login url is the same scheme and net location then just
# use the path as the "next" url.
login_scheme, login_netloc = urlparse(resolved_login_url)[:2]
current_scheme, current_netloc = urlparse(path)[:2]
if ((not login_scheme or login_scheme == current_scheme) and
(not login_netloc or login_netloc == current_netloc)):
path = request.get_full_path()
from django.contrib.auth.views import redirect_to_login
return redirect_to_login(
path, resolved_login_url, redirect_field_name)
return _wrapped_view
return decorator | def permission_required(perms, login_url=None, raise_exception=False, redirect_field_name=REDIRECT_FIELD_NAME) | This is analogous to Django's builtin ``permission_required`` decorator, but
improved to check per slug ACLRules and default permissions for
anonymous and logged in users.
If there is a rule affecting a slug, the user needs to be part of the
rule's allowed users. If there isn't a matching rule, default permissions
apply. | 1.922377 | 1.945922 | 0.9879 |
module_name = '%s.%s' % (app, modname)
try:
module = import_module(module_name)
except ImportError as e:
if failfast:
raise e
elif verbose:
print("Could not load %r from %r: %s" % (modname, app, e))
return None
if verbose:
print("Loaded %r from %r" % (modname, app))
return module | def get_module(app, modname, verbose=False, failfast=False) | Internal function to load a module from a single app.
taken from https://github.com/ojii/django-load. | 1.849018 | 1.877132 | 0.985023 |
for app in settings.INSTALLED_APPS:
get_module(app, modname, verbose, failfast) | def load(modname, verbose=False, failfast=False) | Loads all modules with name 'modname' from all installed apps.
If verbose is True, debug information will be printed to stdout.
If failfast is True, import errors will not be suppressed. | 5.17541 | 3.836318 | 1.349057 |
if PluginClass in _cache.keys():
raise Exception("Plugin class already registered")
plugin = PluginClass()
_cache[PluginClass] = plugin
if getattr(PluginClass, 'extra_page_actions', False):
for key in plugin.extra_page_actions:
if key not in _extra_page_actions:
_extra_page_actions[key] = []
_extra_page_actions[key].extend(plugin.extra_page_actions[key])
if getattr(PluginClass, 'extra_edit_actions', False):
for key in plugin.extra_edit_actions:
if key not in _extra_edit_actions:
_extra_edit_actions[key] = []
_extra_edit_actions[key].extend(plugin.extra_edit_actions[key])
if getattr(PluginClass, 'navbar_links', False):
_navbar_links.extend(list(plugin.navbar_links)) | def register(PluginClass) | Register a plugin class. This function will call back your plugin's
constructor. | 1.948292 | 2.064886 | 0.943535 |
if 'crispy_forms' in settings.INSTALLED_APPS:
from crispy_forms.templatetags.crispy_forms_filters import as_crispy_form
return as_crispy_form(form)
template = get_template("bootstrap/form.html")
form = _preprocess_fields(form)
return template.render({"form": form}) | def render_form(form) | Same as {{ form|crispy }} if crispy_forms is installed.
Render using a Bootstrap 3 template otherwise | 3.40788 | 3.008517 | 1.132744 |
from waliki.settings import WALIKI_USE_MATHJAX # NOQA
return {k: v for (k, v) in locals().items() if k.startswith('WALIKI')} | def settings(request) | inject few waliki's settings to the context to be used in templates | 5.857049 | 4.437412 | 1.319924 |
try:
utf16 = s.encode('utf_16_be')
except AttributeError: # ints and floats
utf16 = str(s).encode('utf_16_be')
safe = utf16.replace(b'\x00)', b'\x00\\)').replace(b'\x00(', b'\x00\\(')
return b''.join((codecs.BOM_UTF16_BE, safe)) | def smart_encode_str(s) | Create a UTF-16 encoded PDF string literal for `s`. | 3.135633 | 3.01847 | 1.038815 |
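A quick check of the encoding behaviour (BOM prefix, escaped parentheses, non-string fallback); the function is reproduced verbatim so the example is runnable on its own:

```python
import codecs

def smart_encode_str(s):
    """Create a UTF-16 BE PDF string literal: escape parens, prepend BOM."""
    try:
        utf16 = s.encode('utf_16_be')
    except AttributeError:  # ints and floats
        utf16 = str(s).encode('utf_16_be')
    safe = utf16.replace(b'\x00)', b'\x00\\)').replace(b'\x00(', b'\x00\\(')
    return b''.join((codecs.BOM_UTF16_BE, safe))

print(smart_encode_str('a'))  # → b'\xfe\xff\x00a'
print(smart_encode_str('('))  # → b'\xfe\xff\x00\\('
print(smart_encode_str(42))   # → b'\xfe\xff\x004\x002'
```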
fdf = [b'%FDF-1.2\x0a%\xe2\xe3\xcf\xd3\x0d\x0a']
fdf.append(b'1 0 obj\x0a<</FDF')
fdf.append(b'<</Fields[')
fdf.append(b''.join(handle_data_strings(fdf_data_strings,
fields_hidden, fields_readonly,
checkbox_checked_name)))
fdf.append(b''.join(handle_data_names(fdf_data_names,
fields_hidden, fields_readonly)))
if pdf_form_url:
fdf.append(b''.join((b'/F (', smart_encode_str(pdf_form_url), b')\x0a')))
fdf.append(b']\x0a')
fdf.append(b'>>\x0a')
fdf.append(b'>>\x0aendobj\x0a')
fdf.append(b'trailer\x0a\x0a<<\x0a/Root 1 0 R\x0a>>\x0a')
fdf.append(b'%%EOF\x0a\x0a')
return b''.join(fdf) | def forge_fdf(pdf_form_url=None, fdf_data_strings=[], fdf_data_names=[],
fields_hidden=[], fields_readonly=[],
checkbox_checked_name=b"Yes") | Generates fdf string from fields specified
* pdf_form_url (default: None): just the url for the form.
* fdf_data_strings (default: []): array of (string, value) tuples for the
form fields (or dicts). Value is passed as a UTF-16 encoded string,
unless True/False, in which case it is assumed to be a checkbox
(and passes names, '/Yes' (by default) or '/Off').
* fdf_data_names (default: []): array of (string, value) tuples for the
form fields (or dicts). Value is passed to FDF as a name, '/value'
* fields_hidden (default: []): list of field names that should be set
hidden.
* fields_readonly (default: []): list of field names that should be set
readonly.
* checkbox_checked_name (default: "Yes"): By default a checked
checkbox gets passed the value "/Yes". You may find that the default
does not work with your PDF, in which case you might want to try "On".
The result is a string suitable for writing to a .fdf file. | 2.327713 | 2.440614 | 0.953741 |
# Task limit
if task_limit is not None and not task_limit > 0:
raise ValueError('The task limit must be None or greater than 0')
# Safe context
async with StreamerManager() as manager:
main_streamer = await manager.enter_and_create_task(source)
# Loop over events
while manager.tasks:
# Extract streamer groups
substreamers = manager.streamers[1:]
mainstreamers = [main_streamer] if main_streamer in manager.tasks else []
# Switch - use the main streamer then the substreamer
if switch:
filters = mainstreamers + substreamers
# Concat - use the first substreamer then the main streamer
elif ordered:
filters = substreamers[:1] + mainstreamers
# Flat - use the substreamers then the main streamer
else:
filters = substreamers + mainstreamers
# Wait for next event
streamer, task = await manager.wait_single_event(filters)
# Get result
try:
result = task.result()
# End of stream
except StopAsyncIteration:
# Main streamer is finished
if streamer is main_streamer:
main_streamer = None
# A substreamer is finished
else:
await manager.clean_streamer(streamer)
# Re-schedule the main streamer if necessary
if main_streamer is not None and main_streamer not in manager.tasks:
manager.create_task(main_streamer)
# Process result
else:
# Switch mechanism
if switch and streamer is main_streamer:
await manager.clean_streamers(substreamers)
# Setup a new source
if streamer is main_streamer:
await manager.enter_and_create_task(result)
# Re-schedule the main streamer if task limit allows it
if task_limit is None or task_limit > len(manager.tasks):
manager.create_task(streamer)
# Yield the result
else:
yield result
# Re-schedule the streamer
manager.create_task(streamer) | async def base_combine(source, switch=False, ordered=False, task_limit=None) | Base operator for managing an asynchronous sequence of sequences.
The sequences are awaited concurrently, although it's possible to limit
the amount of running sequences using the `task_limit` argument.
The ``switch`` argument enables the switch mechanism, which causes the
previous subsequence to be discarded when a new one is created.
The items can either be generated in order or as soon as they're received,
depending on the ``ordered`` argument. | 3.633113 | 3.575224 | 1.016192 |
return base_combine.raw(
source, task_limit=task_limit, switch=False, ordered=True) | def concat(source, task_limit=None) | Given an asynchronous sequence of sequences, generate the elements
of the sequences in order.
The sequences are awaited concurrently, although it's possible to limit
the amount of running sequences using the `task_limit` argument.
Errors raised in the source or an element sequence are propagated. | 16.762703 | 24.247648 | 0.691313 |
return base_combine.raw(
source, task_limit=task_limit, switch=False, ordered=False) | def flatten(source, task_limit=None) | Given an asynchronous sequence of sequences, generate the elements
of the sequences as soon as they're received.
The sequences are awaited concurrently, although it's possible to limit
the amount of running sequences using the `task_limit` argument.
Errors raised in the source or an element sequence are propagated. | 19.868174 | 27.504507 | 0.722361 |
return concat.raw(
combine.smap.raw(source, func, *more_sources), task_limit=task_limit) | def concatmap(source, func, *more_sources, task_limit=None) | Apply a given function that creates a sequence from the elements of one
or several asynchronous sequences, and generate the elements of the created
sequences in order.
The function is applied as described in `map`, and must return an
asynchronous sequence. The returned sequences are awaited concurrently,
although it's possible to limit the amount of running sequences using
the `task_limit` argument. | 9.727205 | 12.739188 | 0.763566 |
return flatten.raw(
combine.smap.raw(source, func, *more_sources), task_limit=task_limit) | def flatmap(source, func, *more_sources, task_limit=None) | Apply a given function that creates a sequence from the elements of one
or several asynchronous sequences, and generate the elements of the created
sequences as soon as they arrive.
The function is applied as described in `map`, and must return an
asynchronous sequence. The returned sequences are awaited concurrently,
although it's possible to limit the amount of running sequences using
the `task_limit` argument.
Errors raised in a source or output sequence are propagated. | 10.330174 | 17.769365 | 0.581347 |
return switch.raw(combine.smap.raw(source, func, *more_sources)) | def switchmap(source, func, *more_sources) | Apply a given function that creates a sequence from the elements of one
or several asynchronous sequences and generate the elements of the most
recently created sequence.
The function is applied as described in `map`, and must return an
asynchronous sequence. Errors raised in a source or output sequence (that
was not already closed) are propagated. | 20.066463 | 32.905628 | 0.609819 |
iscorofunc = asyncio.iscoroutinefunction(func)
async with streamcontext(source) as streamer:
# Initialize
if initializer is None:
try:
value = await anext(streamer)
except StopAsyncIteration:
return
else:
value = initializer
# First value
yield value
# Iterate streamer
async for item in streamer:
value = func(value, item)
if iscorofunc:
value = await value
yield value | async def accumulate(source, func=op.add, initializer=None) | Generate a series of accumulated sums (or other binary function)
from an asynchronous sequence.
If ``initializer`` is present, it is placed before the items
of the sequence in the calculation, and serves as a default
when the sequence is empty. | 3.867696 | 4.635505 | 0.834363 |
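The accumulation can be sketched standalone (synchronous reducer only, without the aiostream stream-context plumbing):

```python
import asyncio
import operator

async def accumulate_sketch(source, func=operator.add, initializer=None):
    """Running reduction over an async iterable, as above."""
    iterator = source.__aiter__()
    if initializer is None:
        try:
            value = await iterator.__anext__()
        except StopAsyncIteration:
            return  # empty source, no initializer: yield nothing
    else:
        value = initializer
    yield value  # first accumulated value
    async for item in iterator:
        value = func(value, item)
        yield value

async def run_acc():
    async def nums():
        for i in (1, 2, 3, 4):
            yield i
    return [v async for v in accumulate_sketch(nums())]

print(asyncio.run(run_acc()))  # → [1, 3, 6, 10]
```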
acc = accumulate.raw(source, func, initializer)
return select.item.raw(acc, -1) | def reduce(source, func, initializer=None) | Apply a function of two arguments cumulatively to the items
of an asynchronous sequence, reducing the sequence to a single value.
If ``initializer`` is present, it is placed before the items
of the sequence in the calculation, and serves as a default when the
sequence is empty. | 15.808893 | 34.592117 | 0.457009 |
result = []
async with streamcontext(source) as streamer:
async for item in streamer:
result.append(item)
yield result | async def list(source) | Generate a single list from an asynchronous sequence. | 6.257992 | 4.702574 | 1.330759 |
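Collected in one go, the operator behaves like a list comprehension over the stream; here is a minimal sketch without the stream-context safety of the real operator:

```python
import asyncio

async def to_list(source):
    """Drain an async iterable into a single Python list."""
    result = []
    async for item in source:
        result.append(item)
    return result

async def squares():
    for i in range(4):
        yield i * i

print(asyncio.run(to_list(squares())))  # → [0, 1, 4, 9]
```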
async with streamcontext(aiterable) as streamer:
async for item in streamer:
pass  # the loop only exhausts the stream; 'item' keeps the last value
try:
return item
except NameError:
raise StreamEmpty() | async def wait_stream(aiterable) | Wait for an asynchronous iterable to finish and return the last item.
The iterable is executed within a safe stream context.
A StreamEmpty exception is raised if the sequence is empty. | 7.378974 | 6.512679 | 1.133017 |
if asyncio.iscoroutinefunction(func):
async def innerfunc(arg):
await func(arg)
return arg
else:
def innerfunc(arg):
func(arg)
return arg
return map.raw(source, innerfunc) | def action(source, func) | Perform an action for each element of an asynchronous sequence
without modifying it.
The given function can be synchronous or asynchronous. | 3.971736 | 3.950197 | 1.005452 |
def func(value):
if template:
value = template.format(value)
builtins.print(value, **kwargs)
return action.raw(source, func) | def print(source, template=None, **kwargs) | Print each element of an asynchronous sequence without modifying it.
An optional template can be provided to be formatted with the elements.
All the keyword arguments are forwarded to the builtin function print. | 6.843581 | 8.333899 | 0.821174 |
@functools.wraps(fn)
async def wrapper(*args, **kwargs):
return await fn(*args, **kwargs)
return wrapper | def async_(fn) | Wrap the given function into a coroutine function. | 1.87718 | 1.92819 | 0.973545 |
assert issubclass(cls, AsyncIteratorContext)
aiterator = aiter(aiterable)
if isinstance(aiterator, cls):
return aiterator
return cls(aiterator) | def aitercontext(aiterable, *, cls=AsyncIteratorContext) | Return an asynchronous context manager from an asynchronous iterable.
The context management makes sure the aclose asynchronous method
has run before it exits. It also issues warnings and RuntimeError
if it is used incorrectly.
It is safe to use with any asynchronous iterable and prevent
asynchronous iterator context to be wrapped twice.
Correct usage::
ait = some_asynchronous_iterable()
async with aitercontext(ait) as safe_ait:
async for item in safe_ait:
<block>
An optional subclass of AsyncIteratorContext can be provided.
This class will be used to wrap the given iterable. | 3.342441 | 5.547912 | 0.602468 |
queue = collections.deque(maxlen=n if n > 0 else 0)
async with streamcontext(source) as streamer:
async for item in streamer:
queue.append(item)
for item in queue:
yield item | async def takelast(source, n) | Forward the last ``n`` elements from an asynchronous sequence.
If ``n`` is negative, it simply terminates after iterating the source.
Note: it is required to reach the end of the source before the first
element is generated. | 3.82403 | 4.749217 | 0.805192 |
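The bounded-deque trick is easiest to see in a synchronous analogue:

```python
import collections

def takelast_sync(iterable, n):
    """Keep only the last n items; deque(maxlen=n) evicts older ones."""
    queue = collections.deque(maxlen=n if n > 0 else 0)
    for item in iterable:
        queue.append(item)
    yield from queue

print(list(takelast_sync(range(10), 3)))  # → [7, 8, 9]
```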
source = transform.enumerate.raw(source)
async with streamcontext(source) as streamer:
async for i, item in streamer:
if i >= n:
yield item | async def skip(source, n) | Forward an asynchronous sequence, skipping the first ``n`` elements.
If ``n`` is negative, no elements are skipped. | 12.142706 | 10.696642 | 1.135189 |
queue = collections.deque(maxlen=n if n > 0 else 0)
async with streamcontext(source) as streamer:
async for item in streamer:
if n <= 0:
yield item
continue
if len(queue) == n:
yield queue[0]
queue.append(item) | async def skiplast(source, n) | Forward an asynchronous sequence, skipping the last ``n`` elements.
If ``n`` is negative, no elements are skipped.
Note: it is required to reach the ``n+1`` th element of the source
before the first element is generated. | 3.844121 | 4.164516 | 0.923065 |
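Likewise for `skiplast`, a synchronous analogue of the deque buffering above:

```python
import collections

def skiplast_sync(iterable, n):
    """Yield all but the last n items, buffering at most n items."""
    queue = collections.deque(maxlen=n if n > 0 else 0)
    for item in iterable:
        if n <= 0:
            yield item
            continue
        if len(queue) == n:
            # The deque is full: its oldest item is guaranteed not to
            # be among the last n, so it is safe to emit.
            yield queue[0]
        queue.append(item)

print(list(skiplast_sync(range(5), 2)))  # → [0, 1, 2]
```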
source = transform.enumerate.raw(source)
async with streamcontext(source) as streamer:
async for i, item in streamer:
if func(i):
yield item | async def filterindex(source, func) | Filter an asynchronous sequence using the index of the elements.
The given function is synchronous, takes the index as an argument,
and returns ``True`` if the corresponding element should be forwarded,
``False`` otherwise. | 9.892514 | 9.948513 | 0.994371 |
s = builtins.slice(*args)
start, stop, step = s.start or 0, s.stop, s.step or 1
# Filter the first items
if start < 0:
source = takelast.raw(source, abs(start))
elif start > 0:
source = skip.raw(source, start)
# Filter the last items
if stop is not None:
if stop >= 0 and start < 0:
raise ValueError(
"Positive stop with negative start is not supported")
elif stop >= 0:
source = take.raw(source, stop - start)
else:
source = skiplast.raw(source, abs(stop))
# Filter step items
if step is not None:
if step > 1:
source = filterindex.raw(source, lambda i: i % step == 0)
elif step < 0:
raise ValueError("Negative step not supported")
# Return
return source | def slice(source, *args) | Slice an asynchronous sequence.
The arguments are the same as the builtin type slice.
There are two limitations compared to regular slices:
- Positive stop index with negative start index is not supported
- Negative step is not supported | 2.930332 | 3.056498 | 0.958722 |
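How the branches above compose the simpler operators can be traced with a small helper; the tuples are illustrative labels, not real aiostream calls:

```python
def describe_slice(*args):
    """Report which simpler operators the slice operator would chain,
    mirroring the branches above."""
    s = slice(*args)
    start, stop, step = s.start or 0, s.stop, s.step or 1
    ops = []
    if start < 0:
        ops.append(('takelast', abs(start)))
    elif start > 0:
        ops.append(('skip', start))
    if stop is not None:
        if stop >= 0 and start < 0:
            raise ValueError("Positive stop with negative start is not supported")
        elif stop >= 0:
            ops.append(('take', stop - start))
        else:
            ops.append(('skiplast', abs(stop)))
    if step > 1:
        ops.append(('filterindex', step))
    elif step < 0:
        raise ValueError("Negative step not supported")
    return ops

print(describe_slice(1, 5, 2))  # → [('skip', 1), ('take', 4), ('filterindex', 2)]
```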
# Prepare
if index >= 0:
source = skip.raw(source, index)
else:
source = takelast(source, abs(index))
async with streamcontext(source) as streamer:
# Get first item
try:
result = await anext(streamer)
except StopAsyncIteration:
raise IndexError("Index out of range")
# Check length
if index < 0:
count = 1
async for _ in streamer:
count += 1
if count != abs(index):
raise IndexError("Index out of range")
# Yield result
yield result | async def item(source, index) | Forward the ``n``th element of an asynchronous sequence.
The index can be negative and works like regular indexing.
If the index is out of range, an ``IndexError`` is raised. | 4.18037 | 4.132962 | 1.011471 |
if isinstance(index, builtins.slice):
return slice.raw(source, index.start, index.stop, index.step)
if isinstance(index, int):
return item.raw(source, index)
raise TypeError("Not a valid index (int or slice)") | def getitem(source, index) | Forward one or several items from an asynchronous sequence.
The argument can either be a slice or an integer.
See the slice and item operators for more information. | 3.454866 | 3.639823 | 0.949185 |
iscorofunc = asyncio.iscoroutinefunction(func)
async with streamcontext(source) as streamer:
async for item in streamer:
result = func(item)
if iscorofunc:
result = await result
if not result:
return
yield item | async def takewhile(source, func) | Forward an asynchronous sequence while a condition is met.
The given function takes the item as an argument and returns a boolean
corresponding to the condition to meet. The function can either be
synchronous or asynchronous. | 3.792608 | 4.015685 | 0.944449 |
module_dir = __all__
operators = stream.__dict__
for key, value in operators.items():
if getattr(value, 'pipe', None):
globals()[key] = value.pipe
if key not in module_dir:
module_dir.append(key) | def update_pipe_module() | Populate the pipe module dynamically. | 5.625951 | 5.170109 | 1.088169 |
count = itertools.count(start, step)
async with streamcontext(source) as streamer:
async for item in streamer:
yield next(count), item | async def enumerate(source, start=0, step=1) | Generate ``(index, value)`` tuples from an asynchronous sequence.
This index is computed using a starting point and an increment,
respectively defaulting to ``0`` and ``1``. | 5.629621 | 6.729291 | 0.836584 |
if asyncio.iscoroutinefunction(func):
async def starfunc(args):
return await func(*args)
else:
def starfunc(args):
return func(*args)
return map.raw(source, starfunc, ordered=ordered, task_limit=task_limit) | def starmap(source, func, ordered=True, task_limit=None) | Apply a given function to the unpacked elements of
an asynchronous sequence.
Each element is unpacked before applying the function.
The given function can either be synchronous or asynchronous.
The results can either be returned in or out of order, depending on
the corresponding ``ordered`` argument. This argument is ignored if
the provided function is synchronous.
The coroutines run concurrently but their amount can be limited using
the ``task_limit`` argument. A value of ``1`` will cause the coroutines
to run sequentially. This argument is ignored if the provided function
is synchronous. | 2.72524 | 3.403296 | 0.800765 |
while True:
async with streamcontext(source) as streamer:
async for item in streamer:
yield item
# Prevent blocking while loop if the stream is empty
await asyncio.sleep(0) | async def cycle(source) | Iterate indefinitely over an asynchronous sequence.
Note: it does not perform any buffering, but re-iterate over
the same given sequence instead. If the sequence is not
re-iterable, the generator might end up looping indefinitely
without yielding any item. | 8.108944 | 9.877162 | 0.820979 |
async with streamcontext(source) as streamer:
async for first in streamer:
xs = select.take(create.preserve(streamer), n-1)
yield [first] + await aggregate.list(xs) | async def chunks(source, n) | Generate chunks of size ``n`` from an asynchronous sequence.
The chunks are lists, and the last chunk might contain less than ``n``
elements. | 19.686268 | 22.202299 | 0.886677 |
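The first-item-plus-take pattern has a direct synchronous analogue using `itertools.islice`:

```python
import itertools

def chunks_sync(iterable, n):
    """Lists of size n; the last chunk may be shorter."""
    it = iter(iterable)
    for first in it:
        # The for loop pulls one item, islice pulls up to n - 1 more.
        yield [first] + list(itertools.islice(it, n - 1))

print(list(chunks_sync(range(7), 3)))  # → [[0, 1, 2], [3, 4, 5], [6]]
```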
while True:
await asyncio.sleep(interval)
yield offset + width * random_module.random() | async def random(offset=0., width=1., interval=0.1) | Generate a stream of random numbers. | 5.614445 | 5.968728 | 0.940643 |
async with streamcontext(source) as streamer:
async for item in streamer:
yield item ** exponent | async def power(source, exponent) | Raise the elements of an asynchronous sequence to the given power. | 7.062226 | 5.876719 | 1.201729 |
timeout = 0
loop = asyncio.get_event_loop()
async with streamcontext(source) as streamer:
async for item in streamer:
delta = timeout - loop.time()
delay = delta if delta > 0 else 0
await asyncio.sleep(delay)
yield item
timeout = loop.time() + interval | async def spaceout(source, interval) | Make sure the elements of an asynchronous sequence are separated
in time by the given interval. | 3.420299 | 3.269789 | 1.046031 |
async with streamcontext(source) as streamer:
while True:
try:
item = await wait_for(anext(streamer), timeout)
except StopAsyncIteration:
break
else:
yield item | async def timeout(source, timeout) | Raise a time-out if an element of the asynchronous sequence
takes too long to arrive.
Note: the timeout is not global but specific to each step of
the iteration. | 5.112751 | 6.586899 | 0.7762 |
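The per-step timeout in the row above can be sketched with just `asyncio.wait_for` around each `__anext__` call (function names here are illustrative, not the library's API):

```python
import asyncio

async def timeout_each(source, limit):
    # Bound the wait for *each* item; the limit is per step, not global.
    it = source.__aiter__()
    while True:
        try:
            item = await asyncio.wait_for(it.__anext__(), limit)
        except StopAsyncIteration:
            break
        yield item

async def demo():
    async def slow():
        yield "fast"
        await asyncio.sleep(0.2)
        yield "slow"

    received = []
    try:
        async for x in timeout_each(slow(), 0.05):
            received.append(x)
    except asyncio.TimeoutError:
        received.append("timed out")
    return received

print(asyncio.run(demo()))  # ['fast', 'timed out']
```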
await asyncio.sleep(delay)
async with streamcontext(source) as streamer:
async for item in streamer:
yield item | async def delay(source, delay) | Delay the iteration of an asynchronous sequence. | 5.531606 | 5.061666 | 1.092843 |
for source in sources:
async with streamcontext(source) as streamer:
async for item in streamer:
yield item | async def chain(*sources) | Chain asynchronous sequences together, in the order they are given.
Note: the sequences are not iterated until it is required,
so if the operation is interrupted, the remaining sequences
will be left untouched. | 5.988226 | 7.881842 | 0.75975 |
async with AsyncExitStack() as stack:
# Handle resources
streamers = [await stack.enter_async_context(streamcontext(source))
for source in sources]
# Loop over items
while True:
try:
coros = builtins.map(anext, streamers)
items = await asyncio.gather(*coros)
except StopAsyncIteration:
break
else:
yield tuple(items) | async def zip(*sources) | Combine and forward the elements of several asynchronous sequences.
Each generated value is a tuple of elements, using the same order as
their respective sources. The generation continues until the shortest
sequence is exhausted.
Note: the different sequences are awaited in parallel, so that their
waiting times don't add up. | 5.0337 | 5.579398 | 0.902194 |
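A stdlib-only sketch of the zip row above: `asyncio.gather` awaits one item from every iterator concurrently, and the first `StopAsyncIteration` ends the whole iteration (so trailing items from longer sources are dropped):

```python
import asyncio

async def azip(*sources):
    # Await one item from every iterator concurrently; any source
    # raising StopAsyncIteration ends the whole iteration.
    iterators = [source.__aiter__() for source in sources]
    while True:
        try:
            coros = (it.__anext__() for it in iterators)
            items = await asyncio.gather(*coros)
        except StopAsyncIteration:
            break
        yield tuple(items)

async def demo():
    async def letters():
        for c in "abc":
            yield c

    async def numbers():
        for n in range(5):  # longer source; extra items are dropped
            yield n

    return [pair async for pair in azip(letters(), numbers())]

print(asyncio.run(demo()))  # [('a', 0), ('b', 1), ('c', 2)]
```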
if more_sources:
source = zip(source, *more_sources)
async with streamcontext(source) as streamer:
async for item in streamer:
yield func(*item) if more_sources else func(item) | async def smap(source, func, *more_sources) | Apply a given function to the elements of one or several
asynchronous sequences.
Each element is used as a positional argument, using the same order as
their respective sources. The generation continues until the shortest
sequence is exhausted. The function is treated synchronously.
Note: if more than one sequence is provided, they're awaited concurrently
so that their waiting times don't add up. | 4.494507 | 5.147287 | 0.87318 |
def func(*args):
return create.just(corofn(*args))
if ordered:
return advanced.concatmap.raw(
source, func, *more_sources, task_limit=task_limit)
return advanced.flatmap.raw(
source, func, *more_sources, task_limit=task_limit) | def amap(source, corofn, *more_sources, ordered=True, task_limit=None) | Apply a given coroutine function to the elements of one or several
asynchronous sequences.
Each element is used as a positional argument, using the same order as
their respective sources. The generation continues until the shortest
sequence is exhausted.
The results can either be returned in or out of order, depending on
the corresponding ``ordered`` argument.
The coroutines run concurrently but their amount can be limited using
the ``task_limit`` argument. A value of ``1`` will cause the coroutines
to run sequentially.
If more than one sequence is provided, they're also awaited concurrently,
so that their waiting times don't add up. | 4.11463 | 5.776571 | 0.712296 |
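The ordered, concurrency-limited mapping described above can be sketched with a semaphore and one task per item. This is a simplification, assuming it is acceptable to schedule all tasks up front; the real `amap` streams results through `concatmap`/`flatmap` instead:

```python
import asyncio

async def amap_ordered(source, corofn, task_limit=None):
    # Gate concurrency with a semaphore when task_limit is set.
    sem = asyncio.Semaphore(task_limit) if task_limit else None

    async def run(item):
        if sem is None:
            return await corofn(item)
        async with sem:
            return await corofn(item)

    # Schedule one task per item, then yield results in input order.
    tasks = []
    async for item in source:
        tasks.append(asyncio.ensure_future(run(item)))
    for task in tasks:
        yield await task

async def demo():
    async def arange(n):
        for i in range(n):
            yield i

    async def square(x):
        await asyncio.sleep(0.01)
        return x * x

    return [y async for y in amap_ordered(arange(4), square, task_limit=2)]

print(asyncio.run(demo()))  # [0, 1, 4, 9]
```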
if asyncio.iscoroutinefunction(func):
return amap.raw(
source, func, *more_sources,
ordered=ordered, task_limit=task_limit)
return smap.raw(source, func, *more_sources) | def map(source, func, *more_sources, ordered=True, task_limit=None) | Apply a given function to the elements of one or several
asynchronous sequences.
Each element is used as a positional argument, using the same order as
their respective sources. The generation continues until the shortest
sequence is exhausted. The function can either be synchronous or
asynchronous (coroutine function).
The results can either be returned in or out of order, depending on
the corresponding ``ordered`` argument. This argument is ignored if the
provided function is synchronous.
The coroutines run concurrently but their amount can be limited using
the ``task_limit`` argument. A value of ``1`` will cause the coroutines
to run sequentially. This argument is ignored if the provided function
is synchronous.
If more than one sequence is provided, they're also awaited concurrently,
so that their waiting times don't add up.
It might happen that the provided function returns a coroutine but is not
a coroutine function per se. In this case, one can wrap the function with
``aiostream.async_`` in order to force ``map`` to await the resulting
coroutine. The following example illustrates the use of ``async_`` with a
lambda function::
from aiostream import stream, async_
...
ys = stream.map(xs, async_(lambda ms: asyncio.sleep(ms / 1000))) | 3.825234 | 6.322645 | 0.605005 |
if is_async_iterable(it):
return from_async_iterable.raw(it)
if isinstance(it, Iterable):
return from_iterable.raw(it)
raise TypeError(
f"{type(it).__name__!r} object is not (async) iterable") | def iterate(it) | Generate values from a synchronous or asynchronous iterable. | 3.967711 | 3.65682 | 1.085017 |
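The dispatch in the iterate row above can be sketched with the `collections.abc` ABCs standing in for the library's `is_async_iterable` helper (helper names here are illustrative):

```python
import asyncio
from collections.abc import AsyncIterable, Iterable

def iterate(it):
    # Dispatch sync and async iterables to one async-generator shape.
    if isinstance(it, AsyncIterable):
        return _from_async(it)
    if isinstance(it, Iterable):
        return _from_sync(it)
    raise TypeError(f"{type(it).__name__!r} object is not (async) iterable")

async def _from_async(source):
    async for item in source:
        yield item

async def _from_sync(source):
    for item in source:
        yield item

async def demo():
    async def agen():
        yield "a"

    sync_items = [x async for x in iterate([1, 2])]
    async_items = [x async for x in iterate(agen())]
    return sync_items, async_items

print(asyncio.run(demo()))  # ([1, 2], ['a'])
```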
args = () if times is None else (times,)
it = itertools.repeat(value, *args)
agen = from_iterable.raw(it)
return time.spaceout.raw(agen, interval) if interval else agen | def repeat(value, times=None, *, interval=0) | Generate the same value a given number of times.
If ``times`` is ``None``, the value is repeated indefinitely.
An optional interval can be given to space the values out. | 10.960748 | 12.539803 | 0.874077 |
agen = from_iterable.raw(builtins.range(*args))
return time.spaceout.raw(agen, interval) if interval else agen | def range(*args, interval=0) | Generate a given range of numbers.
It supports the same arguments as the builtin function.
An optional interval can be given to space the values out. | 30.865288 | 43.509941 | 0.709385 |
agen = from_iterable.raw(itertools.count(start, step))
return time.spaceout.raw(agen, interval) if interval else agen | def count(start=0, step=1, *, interval=0) | Generate consecutive numbers indefinitely.
Optional starting point and increment can be defined,
respectively defaulting to ``0`` and ``1``.
An optional interval can be given to space the values out. | 21.924417 | 31.338753 | 0.699594 |
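The repeat/range/count rows above all follow the same shape: wrap a synchronous iterator in an async generator and optionally space values out. A stdlib-only sketch for the count case (the name `count_spaced` is illustrative):

```python
import asyncio
import itertools

async def count_spaced(start=0, step=1, interval=0.0):
    # Wrap the infinite itertools.count in an async generator,
    # optionally sleeping between values.
    for value in itertools.count(start, step):
        yield value
        if interval:
            await asyncio.sleep(interval)

async def demo():
    out = []
    async for v in count_spaced(10, 5):
        out.append(v)
        if len(out) == 3:  # the stream is infinite; stop explicitly
            break
    return out

print(asyncio.run(demo()))  # [10, 15, 20]
```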
params = {
"input": text,
"key": self.key,
"cs": self.cs,
"conversation_id": self.convo_id,
"wrapper": "CleverWrap.py"
}
reply = self._send(params)
self._process_reply(reply)
return self.output | def say(self, text) | Say something to www.cleverbot.com
:type text: string
Returns: string | 6.306431 | 5.367368 | 1.174958 |
# Get a response
try:
r = requests.get(self.url, params=params)
# catch request errors: report and bail out instead of falling through
# to an unbound `r` (the original returned r.json() even on failure).
except requests.exceptions.RequestException as e:
print(e)
return None
return r.json(strict=False)
:type params: dict
Returns: dict | 5.216327 | 4.986476 | 1.046095 |
self.cs = reply.get("cs", None)
self.count = int(reply.get("interaction_count", 0))
self.output = reply.get("output", None)
self.convo_id = reply.get("conversation_id", None)
self.history = {key: value for key, value in reply.items() if key.startswith("interaction")}
self.time_taken = int(reply.get("time_taken", 0))
self.time_elapsed = int(reply.get("time_elapsed", 0)) | def _process_reply(self, reply) | Take the cleverbot.com response and populate properties. | 3.081259 | 2.753017 | 1.11923 |
'''Ends the tracer.
May be called in any state. Transitions the state to ended and releases
any SDK resources owned by this tracer (this includes only internal
resources, things like passed-in
:class:`oneagent.common.DbInfoHandle` need to be released manually).
Prefer using the tracer as a context manager (i.e., with a
:code:`with`-block) instead of manually calling this method.
'''
if self.handle is not None:
self.nsdk.tracer_end(self.handle)
self.handle = None | def end(self) | Ends the tracer.
May be called in any state. Transitions the state to ended and releases
any SDK resources owned by this tracer (this includes only internal
resources, things like passed-in
:class:`oneagent.common.DbInfoHandle` need to be released manually).
Prefer using the tracer as a context manager (i.e., with a
:code:`with`-block) instead of manually calling this method. | 11.59232 | 1.805158 | 6.421775 |
'''Marks the tracer as failed with the given exception class name
:code:`clsname` and message :code:`msg`.
May only be called in the started state and only if the tracer is not
already marked as failed. Note that this does not end the tracer! Once a
tracer is marked as failed, attempts to do it again are forbidden.
If possible, using the tracer as a context manager (i.e., with a
:code:`with`-block) or :meth:`.mark_failed_exc` is more convenient than
this method.
:param str clsname: Fully qualified name of the exception type that
caused the failure.
:param str msg: Exception message that caused the failure.
'''
self.nsdk.tracer_error(self.handle, clsname, msg) | def mark_failed(self, clsname, msg) | Marks the tracer as failed with the given exception class name
:code:`clsname` and message :code:`msg`.
May only be called in the started state and only if the tracer is not
already marked as failed. Note that this does not end the tracer! Once a
tracer is marked as failed, attempts to do it again are forbidden.
If possible, using the tracer as a context manager (i.e., with a
:code:`with`-block) or :meth:`.mark_failed_exc` is more convenient than
this method.
:param str clsname: Fully qualified name of the exception type that
caused the failure.
:param str msg: Exception message that caused the failure. | 7.004141 | 1.552348 | 4.511964 |
'''Marks the tracer as failed with the given exception :code:`e_val` of
type :code:`e_ty` (defaults to the current exception).
May only be called in the started state and only if the tracer is not
already marked as failed. Note that this does not end the tracer! Once a
tracer is marked as failed, attempts to do it again are forbidden.
If possible, using the tracer as a context manager (i.e., with a
:code:`with`-block) is more convenient than this method.
If :code:`e_val` and :code:`e_ty` are both :code:`None`, the current exception
(as returned by :func:`sys.exc_info`) is used.
:param BaseException e_val: The exception object that caused the
failure. If :code:`None`, the current exception value
(:code:`sys.exc_info()[1]`) is used.
:param type e_ty: The type of the exception that caused the failure. If
:code:`None` the type of :code:`e_val` is used. If that is also
:code:`None`, the current exception type (:code:`sys.exc_info()[0]`)
is used.
'''
_error_from_exc(self.nsdk, self.handle, e_val, e_ty) | def mark_failed_exc(self, e_val=None, e_ty=None) | Marks the tracer as failed with the given exception :code:`e_val` of
type :code:`e_ty` (defaults to the current exception).
May only be called in the started state and only if the tracer is not
already marked as failed. Note that this does not end the tracer! Once a
tracer is marked as failed, attempts to do it again are forbidden.
If possible, using the tracer as a context manager (i.e., with a
:code:`with`-block) is more convenient than this method.
If :code:`e_val` and :code:`e_ty` are both :code:`None`, the current exception
(as returned by :func:`sys.exc_info`) is used.
:param BaseException e_val: The exception object that caused the
failure. If :code:`None`, the current exception value
(:code:`sys.exc_info()[1]`) is used.
:param type e_ty: The type of the exception that caused the failure. If
:code:`None` the type of :code:`e_val` is used. If that is also
:code:`None`, the current exception type (:code:`sys.exc_info()[0]`)
is used. | 4.045304 | 1.355147 | 2.985139 |
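The three tracer rows above describe a context-manager protocol: an exception raised inside the ``with``-block marks the tracer failed before ``end()`` releases resources. A generic sketch of that protocol (not the oneagent implementation; the class and its attributes are illustrative):

```python
class Tracer:
    def __init__(self):
        self.ended = False
        self.failure = None

    def mark_failed(self, clsname, msg):
        # Marking an already-failed tracer again is forbidden.
        if self.failure is not None:
            raise RuntimeError("tracer already marked as failed")
        self.failure = (clsname, msg)

    def end(self):
        self.ended = True

    def __enter__(self):
        return self

    def __exit__(self, e_ty, e_val, e_tb):
        # On exception: mark failed first, then end; note that marking
        # failed does not by itself end the tracer.
        if e_val is not None and self.failure is None:
            self.mark_failed(e_ty.__qualname__, str(e_val))
        self.end()
        return False  # propagate the exception to the caller

tracer = Tracer()
try:
    with tracer:
        raise ValueError("boom")
except ValueError:
    pass
print(tracer.ended, tracer.failure)  # True ('ValueError', 'boom')
```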