signature    stringlengths  8 – 3.44k
body         stringlengths  0 – 1.41M
docstring    stringlengths  1 – 122k
id           stringlengths  5 – 17
def get_cl_body(self):
    raise NotImplementedError()

Get the CL code for the body of this function.

Returns:
    str: the CL code of this function body
f15562:c1:m5
def evaluate(self, inputs, nmr_instances, use_local_reduction=False, cl_runtime_info=None):
    raise NotImplementedError()

Evaluate this function for each set of given parameters.

Given a set of input parameters, this model will be evaluated for every
parameter set. This function will convert possible dots in the parameter
names to underscores for use in the CL kernel.

Args:
    inputs (Iterable[Union(ndarray, mot.lib.utils.KernelData)] or
            Mapping[str: Union(ndarray, mot.lib.utils.KernelData)]):
        the input data for each CL function parameter. Each of these input
        datasets must either be a scalar or be of equal length in the first
        dimension. The elements can either be raw ndarrays or KernelData
        objects. If an ndarray is given, we will load it read/write by
        default. You can provide either an iterable with one value per
        parameter, or a mapping with a corresponding value for every
        parameter.
    nmr_instances (int): the number of parallel processes to run.
    use_local_reduction (boolean): set this to True if you want to use local
        memory reduction when evaluating this function. If set to True, we
        will multiply the global size (given by nmr_instances) by the work
        group sizes.
    cl_runtime_info (mot.configuration.CLRuntimeInfo): the runtime
        information for the execution.

Returns:
    ndarray: the return values of the function, which can be None if this
        function has a void return type.
f15562:c1:m6
def get_dependencies(self):
    raise NotImplementedError()

Get the list of dependencies this function depends on.

Returns:
    list[CLFunction]: the list of dependencies for this function.
f15562:c1:m7
def __init__(self, cl_code):
    self._cl_code = cl_code

Simple code object for including type definitions in the kernel.

Args:
    cl_code (str): CL code to be included in the kernel
f15562:c2:m0
def __init__(self, return_type, cl_function_name, parameter_list, cl_body, dependencies=None):
    super().__init__()
    self._return_type = return_type
    self._function_name = cl_function_name
    self._parameter_list = self._resolve_parameters(parameter_list)
    self._cl_body = cl_body
    self._dependencies = dependencies or []

A simple implementation of a CL function.

Args:
    return_type (str): the CL return type of the function
    cl_function_name (str): the name of the CL function
    parameter_list (list or tuple): either instances of
        :class:`CLFunctionParameter` or strings from which to form the
        function parameters.
    cl_body (str): the body of the CL code for this function.
    dependencies (Iterable[CLCodeObject]): the CL code objects this function
        depends on; these will be prepended to the CL code generated by this
        function.
f15562:c3:m0
@classmethod
def from_string(cls, cl_function, dependencies=()):
    return_type, function_name, parameter_list, body = split_cl_function(cl_function)
    return SimpleCLFunction(return_type, function_name, parameter_list, body,
                            dependencies=dependencies)

Parse the given CL function into a SimpleCLFunction object.

Args:
    cl_function (str): the function we wish to turn into an object
    dependencies (list or tuple of CLLibrary): the list of CL libraries this
        function depends on

Returns:
    SimpleCLFunction: the CL function object for the given string.
f15562:c3:m1
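The record above delegates the actual parsing to `split_cl_function`. A minimal, hypothetical stand-in for that split (the real mot parser is grammar-based; this regex sketch assumes a single function with balanced braces and no comments) could look like:

```python
import re

def split_cl_function_sketch(cl_function):
    """Naive split of a CL function string into (return_type, name, params, body).

    Illustrative stand-in, not mot's real split_cl_function.
    """
    header, _, rest = cl_function.partition('{')
    body = rest.rsplit('}', 1)[0]          # everything up to the final closing brace
    match = re.match(r'\s*([\w*]+(?:\s+[\w*]+)*)\s+(\w+)\s*\(([^)]*)\)\s*$', header)
    return_type = ' '.join(match.group(1).split())
    name = match.group(2)
    params = [p.strip() for p in match.group(3).split(',') if p.strip()]
    return return_type, name, params, body.strip()
```

With `'double square(double x){ return x * x; }'` this yields the four pieces `SimpleCLFunction` would be built from.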
def _get_parameter_signatures(self):
    declarations = []
    for p in self.get_parameters():
        new_p = p.get_renamed(p.name.replace('.', '_'))
        declarations.append(new_p.get_declaration())
    return declarations

Get the signatures of the parameters for the CL function declaration.

This returns the list of parameter signatures for use inside the function
signature.

Returns:
    list: the signatures of the parameters, for use in the CL code.
f15562:c3:m10
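The dot-to-underscore renaming above exists because `.` is not valid in CL identifiers. A self-contained sketch of the same loop, with a hypothetical minimal `Param` class standing in for `CLFunctionParameter`:

```python
class Param:
    """Minimal stand-in for a CL function parameter (assumption, not mot's class)."""
    def __init__(self, ctype, name):
        self.ctype, self.name = ctype, name

    def get_renamed(self, name):
        return Param(self.ctype, name)

    def get_declaration(self):
        return '{} {}'.format(self.ctype, self.name)

def get_parameter_signatures(parameters):
    # 'model.theta' is not a valid CL identifier, so it becomes 'model_theta'
    return [p.get_renamed(p.name.replace('.', '_')).get_declaration()
            for p in parameters]
```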
def _get_cl_dependency_code(self):
    code = '<STR_LIT>'
    for d in self._dependencies:
        code += d.get_cl_code() + "\n"
    return code

Get the CL code for all the dependencies.

Returns:
    str: the CL code with the actual code.
f15562:c3:m11
@property
def name(self):
    raise NotImplementedError()

The name of this parameter.

Returns:
    str: the name of this parameter
f15562:c4:m0
def get_declaration(self):
    raise NotImplementedError()

Get the complete CL declaration for this parameter.

Returns:
    str: the declaration for this data type.
f15562:c4:m1
@property
def ctype(self):
    raise NotImplementedError()

Get the ctype of this data type. For example, if the data type is float4*,
we will return float4 here.

Returns:
    str: the full ctype of this data type
f15562:c4:m2
@property
def address_space(self):
    raise NotImplementedError()

Get the address space of this data declaration.

Returns:
    str: the data type address space, one of ``global``, ``local``,
        ``constant`` or ``private``.
f15562:c4:m3
@property
def basic_ctype(self):
    raise NotImplementedError()

Get the basic data type without the vector and pointer additions. For
example, if the full data ctype is ``float4*``, we will only return
``float`` here.

Returns:
    str: the raw CL data type
f15562:c4:m4
@property
def is_vector_type(self):
    raise NotImplementedError()

Check if this data type is a vector type (like, for example, double4,
float2, int8, etc.).

Returns:
    boolean: True if it is a vector type, False otherwise
f15562:c4:m5
@property
def vector_length(self):
    raise NotImplementedError()

Get the length of this vector; returns None if not a vector type.

Returns:
    int: the length of the vector type (for example, if the data type is
        float4, this returns 4).
f15562:c4:m6
@property
def is_pointer_type(self):
    raise NotImplementedError()

Check if this parameter is a pointer type (appended by a ``*``).

Returns:
    boolean: True if it is a pointer type, False otherwise
f15562:c4:m7
@property
def nmr_pointers(self):
    raise NotImplementedError()

Get the number of asterisks / pointer references of this data type. If the
data type is float**, we return 2 here.

Returns:
    int: the number of pointer asterisks in the data type.
f15562:c4:m8
@property
def array_sizes(self):
    raise NotImplementedError()

Get the dimensions of this array type. For example, this returns (10, 5)
for the data type float[10][5].

Returns:
    Tuple[int]: the sizes of the arrays
f15562:c4:m9
@property
def is_array_type(self):
    raise NotImplementedError()

Check if this parameter is an array type (like float[3] or int[10][5]).

Returns:
    boolean: True if this is an array type, False otherwise
f15562:c4:m10
def get_renamed(self, name):
    raise NotImplementedError()

Get a copy of the current parameter but with a new name.

Args:
    name (str): the new name for this parameter

Returns:
    cls: a copy of the current type but with a new name
f15562:c4:m11
def __init__(self, declaration):
    self._address_space = None
    self._type_qualifiers = []
    self._basic_ctype = '<STR_LIT>'
    self._vector_type_length = None
    self._nmr_pointer_stars = 0
    self._pointer_qualifiers = []
    self._name = '<STR_LIT>'
    self._array_sizes = []
    param = self

    class Semantics:

        def type_qualifiers(self, ast):
            if ast in param._type_qualifiers:
                raise ValueError('<STR_LIT>'.format(ast))
            param._type_qualifiers.append(ast)
            return ast

        def address_space(self, ast):
            param._address_space = '<STR_LIT>'.join(ast)
            return '<STR_LIT>'.join(ast)

        def basic_ctype(self, ast):
            param._basic_ctype = ast
            return ast

        def vector_type_length(self, ast):
            param._vector_type_length = int(ast)
            return ast

        def pointer_star(self, ast):
            param._nmr_pointer_stars += 1
            return ast

        def pointer_qualifiers(self, ast):
            if ast in param._pointer_qualifiers:
                raise ValueError('<STR_LIT>'.format(ast))
            param._pointer_qualifiers.append(ast)
            return ast

        def name(self, ast):
            param._name = ast
            return ast

        def array_size(self, ast):
            param._array_sizes.append(int(ast[1:-1]))
            return ast

    _cl_data_type_parser.parse(declaration, semantics=Semantics())

Creates a new function parameter for the CL functions.

Args:
    declaration (str): the declaration of this parameter. For example
        ``global int foo``.
f15562:c5:m0
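The constructor above fills its attributes through semantic actions of a grammar parser (`_cl_data_type_parser`). As a rough, self-contained illustration of what it extracts, here is a hypothetical regex-based parser that covers only simple declarations (no type qualifiers, single-level pointers-with-stars grouped together); it is an assumption, not the real grammar:

```python
import re

# simplified shape: [address space] basic_ctype[vector length][*...] name[array sizes]
_DECL = re.compile(
    r'^\s*(?:(global|local|constant|private)\s+)?'   # optional address space
    r'([a-zA-Z_]\w*?)(\d*)\s*(\*+)?\s*'              # basic ctype, vector length, pointer stars
    r'([a-zA-Z_]\w*)\s*((?:\[\d+\])*)\s*$')          # name, array sizes

def parse_declaration(declaration):
    m = _DECL.match(declaration)
    space, ctype, veclen, stars, name, arrays = m.groups()
    return {'address_space': space,
            'basic_ctype': ctype,
            'vector_length': int(veclen) if veclen else None,
            'nmr_pointers': len(stars or ''),
            'name': name,
            'array_sizes': tuple(int(s) for s in re.findall(r'\d+', arrays))}
```

For example, `'global float4* data'` splits into address space `global`, basic ctype `float`, vector length `4`, one pointer star, and the name `data`.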
@property
def cl_environment(self):
    return self._cl_environment

Get the used CL environment.

Returns:
    CLEnvironment: the CL environment to use for the calculations.
f15562:c6:m1
@property
def cl_queue(self):
    return self._cl_queue

Get the queue this worker is using for its GPU computations. The load
balancing routine will use this queue to flush and finish the computations.

Returns:
    pyopencl queue: the queue used by this worker
f15562:c6:m2
def _build_kernel(self, kernel_source, compile_flags=()):
    return cl.Program(self._cl_context, kernel_source).build(' '.join(compile_flags))

Convenience function for building the kernel for this worker.

Args:
    kernel_source (str): the kernel source to use for building the kernel
    compile_flags (Sequence[str]): the compile flags to pass to the build

Returns:
    cl.Program: a compiled CL kernel
f15562:c6:m4
def _get_kernel_arguments(self):
    declarations = []
    for name, data in self._kernel_data.items():
        declarations.extend(data.get_kernel_parameters('_' + name))
    return declarations

Get the list of kernel arguments for loading the kernel data elements into
the kernel. This will use the sorted keys for looping through the kernel
input items.

Returns:
    list of str: the list of parameter definitions
f15562:c6:m6
def get_scalar_arg_dtypes(self):
    dtypes = []
    for name, data in self._kernel_data.items():
        dtypes.extend(data.get_scalar_arg_dtypes())
    return dtypes

Get the location and types of the input scalars.

Returns:
    list: for every kernel input element, either None if the data is a
        buffer or the numpy data type if it is a scalar.
f15562:c6:m7
def uniform(nmr_distributions, nmr_samples, low=0, high=1, ctype='float', seed=None):
    if is_scalar(low):
        low = np.ones((nmr_distributions, 1)) * low
    if is_scalar(high):
        high = np.ones((nmr_distributions, 1)) * high
    kernel_data = {'<STR_LIT>': Array(low, as_scalar=True),
                   '<STR_LIT>': Array(high, as_scalar=True)}
    kernel = SimpleCLFunction.from_string(
        '''<STR_LIT>''' + ctype + '''<STR_LIT>''' + str(nmr_samples)
        + '''<STR_LIT>''' + ctype + '''<STR_LIT>''',
        dependencies=[Rand123()])
    return _generate_samples(kernel, nmr_distributions, nmr_samples, ctype,
                             kernel_data, seed=seed)

Draw random samples from the Uniform distribution.

Args:
    nmr_distributions (int): the number of unique continuous_distributions
        to create
    nmr_samples (int): the number of samples to draw
    low (double): the minimum value of the random numbers
    high (double): the maximum value of the random numbers
    ctype (str): the C type of the output samples
    seed (float): the seed for the RNG

Returns:
    ndarray: a two dimensional numpy array as (nmr_distributions, nmr_samples).
f15563:m0
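The broadcasting logic above (expanding a scalar `low`/`high` to one bound per distribution, then drawing a row of samples for each) can be sketched in pure Python with the stdlib `random` module, with no OpenCL involved:

```python
import random

def uniform_sketch(nmr_distributions, nmr_samples, low=0, high=1, seed=None):
    """Pure-Python sketch of the per-distribution uniform sampling above.

    Scalar bounds are expanded to one bound per distribution, then each
    row is sampled independently.
    """
    rng = random.Random(seed)
    if not hasattr(low, '__len__'):
        low = [low] * nmr_distributions
    if not hasattr(high, '__len__'):
        high = [high] * nmr_distributions
    return [[rng.uniform(low[i], high[i]) for _ in range(nmr_samples)]
            for i in range(nmr_distributions)]
```

The real function runs the same idea on the GPU via a Rand123-based kernel.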
def normal(nmr_distributions, nmr_samples, mean=0, std=1, ctype='float', seed=None):
    if is_scalar(mean):
        mean = np.ones((nmr_distributions, 1)) * mean
    if is_scalar(std):
        std = np.ones((nmr_distributions, 1)) * std
    kernel_data = {'<STR_LIT>': Array(mean, as_scalar=True),
                   '<STR_LIT>': Array(std, as_scalar=True)}
    kernel = SimpleCLFunction.from_string(
        '''<STR_LIT>''' + ctype + '''<STR_LIT>''' + str(nmr_samples)
        + '''<STR_LIT>''' + ctype + '''<STR_LIT>''',
        dependencies=[Rand123()])
    return _generate_samples(kernel, nmr_distributions, nmr_samples, ctype,
                             kernel_data, seed=seed)

Draw random samples from the Gaussian distribution.

Args:
    nmr_distributions (int): the number of unique continuous_distributions
        to create
    nmr_samples (int): the number of samples to draw
    mean (float or ndarray): the mean of the distribution
    std (float or ndarray): the standard deviation of the distribution
    ctype (str): the C type of the output samples
    seed (float): the seed for the RNG

Returns:
    ndarray: a two dimensional numpy array as (nmr_distributions, nmr_samples).
f15563:m1
def get_historical_data(nmr_problems):
    observations = np.tile(np.array([[10, <NUM_LIT>, <NUM_LIT>, <NUM_LIT>]]), (nmr_problems, 1))
    nmr_tanks_ground_truth = np.ones((nmr_problems,)) * <NUM_LIT>
    return observations, nmr_tanks_ground_truth

Get the historical tank data.

Args:
    nmr_problems (int): the number of problems

Returns:
    tuple: (observations, nmr_tanks_ground_truth)
f15564:m0
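These observations feed the classic German tank problem. The frequentist point estimate the Bayesian treatment is usually compared against is the minimum-variance unbiased estimator, which is easy to sketch (the example serial numbers below are illustrative, not the masked dataset values):

```python
def mvu_tank_estimate(observations):
    """Minimum-variance unbiased estimator for the German tank problem:
    m + m/k - 1, with m the largest observed serial and k the sample size."""
    m, k = max(observations), len(observations)
    return m + m / k - 1
```

For four observed serials with maximum 60, this estimates 60 + 60/4 - 1 = 74 tanks.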
def get_simulated_data(nmr_problems):
    nmr_observed_tanks = 10
    nmr_tanks_ground_truth = normal(nmr_problems, 1, mean=<NUM_LIT>, std=30, ctype='<STR_LIT>')
    observations = uniform(nmr_problems, nmr_observed_tanks, low=0,
                           high=nmr_tanks_ground_truth, ctype='<STR_LIT>')
    return observations, nmr_tanks_ground_truth

Simulate some data. This returns the simulated tank observations and the
corresponding ground truth maximum number of tanks.

Args:
    nmr_problems (int): the number of problems

Returns:
    tuple: (observations, nmr_tanks_ground_truth)
f15564:m1
def mock_decorator(*args, **kwargs):
    def _called_decorator(dec_func):
        @wraps(dec_func)
        def _decorator(*args, **kwargs):
            return dec_func()
        return _decorator
    return _called_decorator

Mocked decorator, needed when we need to mock a decorator.
f15568:m0
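The pattern above replaces a real decorator during tests: it accepts any decoration arguments, preserves the wrapped function's metadata via `functools.wraps`, and calls the function with no arguments. A self-contained usage example (the `greet` function is made up for illustration):

```python
from functools import wraps

def mock_decorator(*args, **kwargs):
    """Mocked decorator factory: swallows decoration args, calls the
    wrapped function with no arguments."""
    def _called_decorator(dec_func):
        @wraps(dec_func)
        def _decorator(*args, **kwargs):
            return dec_func()
        return _decorator
    return _called_decorator

@mock_decorator('ignored', option=True)
def greet():
    return 'hello'
```

Because of `@wraps`, `greet.__name__` is still `'greet'` after decoration, which matters when the code under test introspects decorated functions.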
def import_mock(name, *args, **kwargs):
    if any(name.startswith(s) for s in mock_modules):
        return MockModule()
    return orig_import(name, *args, **kwargs)

Mock all modules whose names start with one of the mock_modules names.
f15568:m1
def __init__(self, instance, related_field):
    self.instance = instance
    self.related_field = related_field

Create the RelatedCollection on the given 'instance', related to the field
'related_field'.
f15569:c0:m0
def __call__(self, **filters):
    if not filters:
        filters = {}
    filters[self.related_field.name] = self.instance._pk
    return self.related_field._model.collection(**filters)

Return a collection on the related model, using the current instance as a
filter for the related field.
f15569:c0:m1
def remove_instance(self):
    with fields.FieldLock(self.related_field):
        related_pks = self()
        for pk in related_pks:
            related_instance = self.related_field._model(pk)
            related_field = getattr(related_instance, self.related_field.name)
            remover = getattr(related_field, '<STR_LIT>', None)
            if remover is not None:
                getattr(related_field, remover)(self.instance._pk)
            else:
                related_field.delete()

Remove the instance from the related fields: delete the field if it's a
simple one, or remove the instance from the field if it's a set/list/
sorted_set.
f15569:c0:m2
def __init__(self, *args, **kwargs):
    super(RelatedModel, self).__init__(*args, **kwargs)
    self.related_collections = []
    relations = getattr(self.database, '<STR_LIT>', {}).get(self._name.lower(), [])
    for relation in relations:
        model_name, field_name, _ = relation
        related_model = self.database._models[model_name]
        related_field = related_model.get_field(field_name)
        collection = related_field.related_collection_class(self, related_field)
        setattr(self, related_field.related_name, collection)
        self.related_collections.append(related_field.related_name)

Create the instance, then add all related collections (the links between
this instance and related fields on other models).
f15569:c1:m0
def delete(self):
    for related_collection_name in self.related_collections:
        related_collection = getattr(self, related_collection_name)
        related_collection.remove_instance()
    return super(RelatedModel, self).delete()

When the instance is deleted, we propagate the deletion to the related
collections, which will remove it from the related fields.
f15569:c1:m1
@classmethod
def use_database(cls, database):
    original_database = getattr(cls, '<STR_LIT>', None)
    impacted_models = super(RelatedModel, cls).use_database(database)
    if not original_database or not getattr(original_database, '<STR_LIT>', {}):
        return impacted_models
    reverse_relations = {}
    for related_model_name, relations in original_database._relations.items():
        for relation in relations:
            reverse_relations.setdefault(relation[0], []).append((related_model_name, relation))
    if not hasattr(database, '<STR_LIT>'):
        database._relations = {}
    for _model in impacted_models:
        if _model.abstract:
            continue
        for related_model_name, relation in reverse_relations[_model._name]:
            if related_model_name in database._relations:
                field = _model.get_field(relation[1])
                field._assert_relation_does_not_exists()
            original_database._relations[related_model_name].remove(relation)
            database._relations.setdefault(related_model_name, []).append(relation)
    return impacted_models

Move the model and its submodels to the new database, as the original
use_database method does, and manage relations too (done here instead of in
the database because the database class is not aware of relations).
f15569:c1:m2
def __init__(self, to, *args, **kwargs):
    kwargs['<STR_LIT>'] = True
    super(RelatedFieldMixin, self).__init__(*args, **kwargs)
    self.related_to = to
    self.related_name = kwargs.pop('related_name', None)

Force the field to be indexable and save the related arguments.
f15569:c3:m0
def _attach_to_model(self, model):
    super(RelatedFieldMixin, self)._attach_to_model(model)
    if model.abstract:
        return
    self.related_name = self._get_related_name()
    self.related_to = self._get_related_model_name()
    if not hasattr(self.database, '<STR_LIT>'):
        self.database._relations = {}
    self.database._relations.setdefault(self.related_to, [])
    self._assert_relation_does_not_exists()
    relation = (self._model._name, self.name, self.related_name)
    self.database._relations[self.related_to].append(relation)

When we have a model, save the relation in the database, to later create
RelatedCollection objects on the related model.
f15569:c3:m1
def _assert_relation_does_not_exists(self):
    relations = self.database._relations[self.related_to]
    existing = [r for r in relations if r[2] == self.related_name]
    if existing:
        error = ("<STR_LIT>"
                 "<STR_LIT>")
        raise ImplementationError(error % (self._model.__name__, self.name, self.related_name,
                                           self.related_to, existing[0][1], existing[0][0]))

Check that a relation with the current related_name doesn't already exist
for the related model.
f15569:c3:m2
def _get_related_model_name(self):
    if isinstance(self.related_to, type) and issubclass(self.related_to, RelatedModel):
        model_name = self.related_to._name
    elif isinstance(self.related_to, str):
        if self.related_to == '<STR_LIT>':
            model_name = self._model._name
        elif ':' not in self.related_to:
            model_name = ':'.join((self._model.namespace, self.related_to))
        else:
            model_name = self.related_to
    else:
        raise ImplementationError("<STR_LIT>"
                                  "<STR_LIT>"
                                  "<STR_LIT>"
                                  "<STR_LIT>")
    return model_name.lower()

Return the name of the related model, as used to store all models in the
database object, in the following format: "%(namespace)s:%(class_name)s".

The result is computed from the "to" argument of the RelatedField
constructor, stored in the "related_to" attribute, based on these rules:

- if a "RelatedModel" subclass, get its namespace:name
- if a string:
    - if "self", get it from the current model (relation on self)
    - if a RelatedModel class name, keep it, and if no namespace is given,
      use the namespace of the current model
f15569:c3:m3
def _get_related_name(self):
    related_name = self.related_name or '<STR_LIT>'
    related_name = related_name % {
        '<STR_LIT>': re_identifier.sub('_', self._model.namespace),
        '<STR_LIT>': self._model.__name__
    }
    if re_identifier.sub('', related_name) != related_name:
        raise ImplementationError('<STR_LIT>'
                                  '<STR_LIT>' % related_name)
    return related_name.lower()

Return the related name to use to access this related field.

If the related_name argument is not defined in its declaration, a new one
will be computed following this format: '%s_set', with %s the name of the
model owning the related field.

If the related_name argument exists, it can be the exact name to use (be
careful to use a valid python attribute name), or a string with
placeholders for the namespace and the model of the current model. This is
useful if the current model is abstract with many subclasses.

Examples:

    class Base(RelatedModel):
        abstract = True
        namespace = 'project'
        a_field = FKStringField('Other', related_name='%(namespace)s_%(model)s_related')
        # => the related names accessible from Other are
        # "project_childa_related" and "project_childb_related"

    class ChildA(Base):
        pass

    class ChildB(Base):
        pass

    class Other(RelatedModel):
        namespace = 'project'
        field_a = FKStringField(ChildA)
        # => the related name accessible from ChildA will be "other_related"
        field_b = FKStringField(ChildB, related_name='some_others')
        # => the related name accessible from ChildB will be "some_others"
f15569:c3:m4
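The name computation above can be sketched standalone. Note two assumptions: the masked default template is taken to be `'%(model)s_set'` (the docstring only says "'%s_set' with the model name"), and `re_identifier` is taken to match non-identifier characters:

```python
import re

re_identifier = re.compile(r'\W')  # assumption: anything outside [a-zA-Z0-9_]

def compute_related_name(namespace, model_name, related_name=None):
    """Sketch: default to '<model>_set', fill %(namespace)s / %(model)s
    placeholders, reject names that are not valid identifiers."""
    name = related_name or '%(model)s_set'          # assumed default template
    name = name % {'namespace': re_identifier.sub('_', namespace),
                   'model': model_name}
    if re_identifier.sub('', name) != name:
        raise ValueError('invalid related name: %s' % name)
    return name.lower()
```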
def from_python(self, value):
    if isinstance(value, model.RedisModel):
        value = value._pk
    elif isinstance(value, SimpleValueRelatedFieldMixin):
        value = value.proxy_get()
    return value

Provide the ability to pass a RedisModel instance or a FK field as value
instead of passing the PK. The value will then be translated into the real
PK.
f15569:c3:m5
def from_python_many(self, *values):
    return list(map(self.from_python, values))

Apply from_python to each value and return the final list.
f15569:c3:m6
@classmethod
def _make_command_method(cls, command_name, many=False):
    def func(self, *args, **kwargs):
        if many:
            args = self.from_python_many(*args)
        else:
            if 'value' in kwargs:
                kwargs['value'] = self.from_python(kwargs['value'])
            else:
                args = list(args)
                args[0] = self.from_python(args[0])
        sup_method = getattr(super(cls, self), command_name)
        return sup_method(*args, **kwargs)
    return func

Return a function which will convert objects to their pk, then call the
super method for the given name. The "many" attribute indicates that the
command accepts one or many values as arguments (in *args).
f15569:c3:m7
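The factory above generates proxy methods that coerce model-like arguments to their pk and then defer to the inherited command. A self-contained sketch of the same pattern, with made-up `Base` commands standing in for the real Redis field commands:

```python
class Base:
    """Hypothetical stand-in for the real field commands."""
    def sadd(self, *values):
        return ('sadd', values)

    def set(self, value):
        return ('set', value)

class PkCoercingProxy(Base):
    """Wrap inherited commands so model-like objects are replaced by
    their primary key before the real command runs."""
    @staticmethod
    def from_python(value):
        return getattr(value, '_pk', value)

    @classmethod
    def _make_command_method(cls, command_name, many=False):
        def func(self, *args, **kwargs):
            if many:
                args = [self.from_python(a) for a in args]
            else:
                args = [self.from_python(args[0])] + list(args[1:])
            return getattr(super(cls, self), command_name)(*args, **kwargs)
        return func

# generate the coercing variants once, at class-definition time
PkCoercingProxy.sadd = PkCoercingProxy._make_command_method('sadd', many=True)
PkCoercingProxy.set = PkCoercingProxy._make_command_method('set')
```

The closure over `cls` is what lets `super(cls, self)` resolve to the parent command even though the function is created outside a class body.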
def __call__(self, **filters):
    model = self.database._models[self.related_to]
    collection = model.collection_manager(model)
    return collection(**filters).intersect(self)

When calling (via `()` or via `.collection()`) a MultiValuesRelatedField,
we return a collection, filtered with the given arguments, the result being
"intersected" with the members of the current field.
f15569:c7:m0
def zadd(self, *args, **kwargs):
    if '<STR_LIT>' not in kwargs:
        kwargs['<STR_LIT>'] = self.from_python_many
    pieces = fields.SortedSetField.coerce_zadd_args(*args, **kwargs)
    return super(M2MSortedSetField, self).zadd(*pieces)

Parse args and kwargs to check the values, to pass them through the
from_python method. We pass the parsed args/kwargs as args in the super
call, to avoid doing the same computation on kwargs one more time.
f15569:c10:m0
@classmethod
def compose(cls, index_classes, key=None, transform=None, name=None):
    attrs = {}
    if index_classes:
        attrs['<STR_LIT>'] = index_classes
    klass = type(str(name or cls.__name__), (cls, ), attrs)
    configure_attrs = {}
    if key is not None:
        configure_attrs['key'] = key
    if transform is not None:
        configure_attrs['<STR_LIT>'] = transform
    if configure_attrs:
        klass = klass.configure(**configure_attrs)
    return klass

Create a new class with the given index classes.

Parameters
----------
index_classes: list
    The list of index classes to be used in the multi-index class to create.
name: str
    The name of the new multi-index class. If not set, it will be the same
    as the current class.
key: str
    A key to augment the default key of each index, to avoid collisions.
transform: callable
    None by default; can be set to a function that will transform the value
    to be indexed. This callable can accept one (`value`) or two (`self`,
    `value`) arguments.
f15570:c0:m0
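The core trick in `compose` is building a subclass at runtime with the three-argument `type()` call. A minimal, generic sketch of that pattern (the `BaseIndex`/`ReverseIndex` names here are made up for illustration):

```python
def compose(base, name=None, **attrs):
    """Build a new subclass of `base` at runtime with type(), overriding
    only the provided attributes."""
    return type(name or base.__name__, (base,), attrs)

class BaseIndex:
    prefix = 'idx'
    def key_for(self, value):
        return '%s:%s' % (self.prefix, value)

# a composed variant that indexes the reversed value
ReverseIndex = compose(BaseIndex, name='ReverseIndex',
                       key_for=lambda self, value: 'rev:%s' % value[::-1])
```

The dict passed as the third argument to `type()` becomes the class namespace, exactly as if those attributes had been written in a `class` statement.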
@cached_property
def _indexes(self):
    return [index_class(field=self.field) for index_class in self.index_classes]

Instantiate the indexes only when asked.

Returns
-------
list
    A list of all indexes, tied to the field.
f15570:c0:m1
def can_handle_suffix(self, suffix):
    for index in self._indexes:
        if index.can_handle_suffix(suffix):
            return True
    return False

Tell if one of the managed indexes can be used for the given filter suffix.
For the parameters, see BaseIndex.can_handle_suffix.
f15570:c0:m2
def _reset_cache(self):
    for index in self._indexes:
        index._reset_cache()

Reset the attributes used to potentially rollback the indexes.
For the parameters, see BaseIndex._reset_cache.
f15570:c0:m3
def _rollback(self):
    for index in self._indexes:
        index._rollback()

Restore the indexes to their previous state.
For the parameters, see BaseIndex._rollback.
f15570:c0:m4
def get_unique_index(self):
    return [index for index in self._indexes if index.handle_uniqueness][0]

Return the first index handling uniqueness.

Returns
-------
BaseIndex
    The first index capable of handling uniqueness.

Raises
------
IndexError
    If no index is capable of handling uniqueness.
f15570:c0:m5
@property
def handle_uniqueness(self):
    try:
        self.get_unique_index()
    except IndexError:
        return False
    else:
        return True

Tell if at least one of the indexes can handle uniqueness.

Returns
-------
bool
    ``True`` if this multi-index can handle uniqueness.
f15570:c0:m6
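The property above simply probes `get_unique_index` and converts its `IndexError` into a boolean. A self-contained sketch of that pair of methods, with a trivial stand-in sub-index:

```python
class MultiIndex:
    """Sketch of get_unique_index / handle_uniqueness from the record above."""
    def __init__(self, indexes):
        self._indexes = indexes

    def get_unique_index(self):
        # raises IndexError when no sub-index handles uniqueness
        return [ix for ix in self._indexes if ix.handle_uniqueness][0]

    @property
    def handle_uniqueness(self):
        try:
            self.get_unique_index()
        except IndexError:
            return False
        return True

class StubIndex:
    """Hypothetical sub-index exposing only the uniqueness flag."""
    def __init__(self, unique):
        self.handle_uniqueness = unique
```

Letting the list lookup raise `IndexError` keeps one source of truth: the property never re-implements the "which index is unique" logic.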
def prepare_args(self, args, transform=True):
    updated_args = list(args)
    if transform:
        updated_args[-1] = self.transform_value(updated_args[-1])
    if self.key:
        updated_args.insert(-1, self.key)
    return updated_args

Prepare args to be used by a sub-index.

Parameters
----------
args: list
    The whole list of arguments passed to add, check_uniqueness,
    get_filtered_keys...
transform: bool
    If ``True``, the last entry in `args`, i.e. the value, will be
    transformed. Else it will be kept as is.
f15570:c0:m7
def check_uniqueness(self, *args):
    self.get_unique_index().check_uniqueness(*self.prepare_args(args, transform=False))

For a unique index, check that the given args are not used twice.
For the parameters, see BaseIndex.check_uniqueness.
f15570:c0:m8
def add(self, *args, **kwargs):
    check_uniqueness = kwargs.pop('<STR_LIT>', False)
    args = self.prepare_args(args)
    for index in self._indexes:
        index.add(*args, check_uniqueness=check_uniqueness and index.handle_uniqueness, **kwargs)
        if check_uniqueness and index.handle_uniqueness:
            check_uniqueness = False

Add the instance tied to the field to all the indexes.
For the parameters, see BaseIndex.add.
f15570:c0:m9
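A subtlety in the loop above: the uniqueness check is delegated to at most one sub-index, the first one able to handle it, so the (potentially expensive) check is not repeated. A minimal sketch of just that flag-passing logic:

```python
def add_to_indexes(indexes, value, check_uniqueness=True):
    """Sketch: only the first uniqueness-capable index receives
    check_uniqueness=True; all later indexes skip the check."""
    for index in indexes:
        index.add(value, check_uniqueness=check_uniqueness and index.handle_uniqueness)
        if check_uniqueness and index.handle_uniqueness:
            check_uniqueness = False
```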
def remove(self, *args):
    args = self.prepare_args(args)
    for index in self._indexes:
        index.remove(*args)

Remove the instance tied to the field from all the indexes.
For the parameters, see BaseIndex.remove.
f15570:c0:m10
def get_filtered_keys(self, suffix, *args, **kwargs):
    args = self.prepare_args(args, transform=False)
    for index in self._indexes:
        if index.can_handle_suffix(suffix):
            return index.get_filtered_keys(suffix, *args, **kwargs)

Return the index keys to be used by the collection for the given args.
For the parameters, see BaseIndex.get_filtered_keys.
f15570:c0:m11
def get_all_storage_keys(self):
    keys = set()
    for index in self._indexes:
        keys.update(index.get_all_storage_keys())
    return keys

Return the keys to be removed by `clear` in aggressive mode.
For the parameters, see BaseIndex.get_all_storage_keys.
f15570:c0:m12
def _list_to_set(self, list_key, set_key):
    if self.cls.database.support_scripting():
        self.cls.database.call_script(
            script_dict=self.__class__.scripts['<STR_LIT>'],
            keys=[list_key, set_key]
        )
    else:
        conn = self.cls.get_connection()
        conn.sadd(set_key, *conn.lrange(list_key, 0, -1))

Store all the content of the given ListField in a redis set. Use scripting
if available, to avoid retrieving all values locally from the list before
sending them back to the set.
f15571:c0:m1
@property
def _collection(self):
    old_sort_limits_and_len_mode = (None if self._sort_limits is None else self._sort_limits.copy(),
                                    self._len_mode)
    old_sorts = (None if self._sort is None else self._sort.copy(),
                 None if self._sort_by_sortedset is None else self._sort_by_sortedset.copy())
    try:
        if self.stored_key and not self._stored_len:
            if self._len_mode:
                self._len = 0
                self._len_mode = False
            self._sort_limits = {}
            return []
        if self._sort_by_sortedset and self._sort and self._sort.get('<STR_LIT>'):
            self._sort = None
            self._sort_by_sortedset['<STR_LIT>'] = not self._sort_by_sortedset.get('<STR_LIT>', False)
        return super(ExtendedCollectionManager, self)._collection
    finally:
        self._sort_limits, self._len_mode = old_sort_limits_and_len_mode
        self._sort, self._sort_by_sortedset = old_sorts

Effectively retrieve data according to lazy_collection. If we have a stored
collection without any result, return an empty list.
f15571:c0:m2
def _prepare_sets(self, sets):
if self.stored_key and not self.stored_key_exists():<EOL><INDENT>raise DoesNotExist('<STR_LIT>'<EOL>'<STR_LIT>')<EOL><DEDENT>conn = self.cls.get_connection()<EOL>all_sets = set()<EOL>tmp_keys = set()<EOL>lists = []<EOL>def add_key(key, key_type=None, is_tmp=False):<EOL><INDENT>if not key_type:<EOL><INDENT>key_type = conn.type(key)<EOL><DEDENT>if key_type == '<STR_LIT>':<EOL><INDENT>all_sets.add(key)<EOL><DEDENT>elif key_type == '<STR_LIT>':<EOL><INDENT>all_sets.add(key)<EOL>self._has_sortedsets = True<EOL><DEDENT>elif key_type == '<STR_LIT:list>':<EOL><INDENT>lists.append(key)<EOL><DEDENT>elif key_type == '<STR_LIT:none>':<EOL><INDENT>all_sets.add(key)<EOL><DEDENT>else:<EOL><INDENT>raise ValueError('<STR_LIT>' % (<EOL>key, key_type<EOL>))<EOL><DEDENT>if is_tmp:<EOL><INDENT>tmp_keys.add(key)<EOL><DEDENT><DEDENT>for set_ in sets:<EOL><INDENT>if isinstance(set_, str):<EOL><INDENT>add_key(set_)<EOL><DEDENT>elif isinstance(set_, ParsedFilter):<EOL><INDENT>value = set_.value<EOL>if isinstance(value, RedisModel):<EOL><INDENT>value = value.pk.get()<EOL><DEDENT>elif isinstance(value, SingleValueField):<EOL><INDENT>value = value.proxy_get()<EOL><DEDENT>elif isinstance(value, RedisField):<EOL><INDENT>raise ValueError(u'<STR_LIT>' % (set_.index.field.name, value))<EOL><DEDENT>for index_key, key_type, is_tmp in set_.index.get_filtered_keys(<EOL>set_.suffix,<EOL>accepted_key_types=self._accepted_key_types,<EOL>*(set_.extra_field_parts + [value])<EOL>):<EOL><INDENT>if key_type not in self._accepted_key_types:<EOL><INDENT>raise ValueError('<STR_LIT>' % (<EOL>set_.index.__class__.__name__<EOL>))<EOL><DEDENT>add_key(index_key, key_type, is_tmp)<EOL><DEDENT><DEDENT>elif isinstance(set_, SetField):<EOL><INDENT>add_key(set_.key, '<STR_LIT>')<EOL><DEDENT>elif isinstance(set_, SortedSetField):<EOL><INDENT>add_key(set_.key, '<STR_LIT>')<EOL><DEDENT>elif isinstance(set_, (ListField, _StoredCollection)):<EOL><INDENT>add_key(set_.key, '<STR_LIT:list>')<EOL><DEDENT>elif isinstance(set_, tuple) and len(set_):<EOL><INDENT>tmp_key = self._unique_key()<EOL>conn.sadd(tmp_key, *set_)<EOL>add_key(tmp_key, '<STR_LIT>', True)<EOL><DEDENT>else:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT><DEDENT>if lists:<EOL><INDENT>if not len(all_sets) and len(lists) == <NUM_LIT:1>:<EOL><INDENT>all_sets = {lists[<NUM_LIT:0>]}<EOL><DEDENT>else:<EOL><INDENT>for list_key in lists:<EOL><INDENT>tmp_key = self._unique_key()<EOL>self._list_to_set(list_key, tmp_key)<EOL>add_key(tmp_key, '<STR_LIT>', True)<EOL><DEDENT><DEDENT><DEDENT>return all_sets, tmp_keys<EOL>
The original "_prepare_sets" method simply returns the list of sets in _lazy_collection, known to be all keys of redis sets. As the new "intersect" method can accept different types of "set", we have to handle them because we must return only keys of redis sets.
f15571:c0:m3
def filter(self, **filters):
return self._add_filters(**filters)<EOL>
Add more filters to the collection
f15571:c0:m4
def intersect(self, *sets):
sets_ = set()<EOL>for set_ in sets:<EOL><INDENT>if isinstance(set_, (list, set)):<EOL><INDENT>set_ = tuple(set_)<EOL><DEDENT>elif isinstance(set_, MultiValuesField) and not getattr(set_, '<STR_LIT>', None):<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>% set_.__class__.__name__)<EOL><DEDENT>elif not isinstance(set_, (tuple, str, MultiValuesField, _StoredCollection)):<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>'<STR_LIT>'<EOL>'<STR_LIT>'<EOL>'<STR_LIT>'<EOL>'<STR_LIT>'<EOL>'<STR_LIT>' % set_)<EOL><DEDENT>if isinstance(set_, SortedSetField):<EOL><INDENT>self._has_sortedsets = True<EOL><DEDENT>sets_.add(set_)<EOL><DEDENT>self._lazy_collection['<STR_LIT>'].update(sets_)<EOL>return self<EOL>
Add a list of sets to the existing list of sets to check. Returns self for chaining. Each "set" represents a list of pks; the final goal is to return only pks matching the intersection of all sets. A "set" can be: - a string: considered as the name of a redis set, sorted set or list (if a list, values will be stored in a temporary set) - a list, set or tuple: values will be stored in a temporary set - a SetField: we will directly use its content on redis - a ListField or SortedSetField: values will be stored in a temporary set (except if we want a sort on values and it's the only "set" to use)
f15571:c0:m5
def _combine_sets(self, sets, final_set):
if self._has_sortedsets:<EOL><INDENT>self.cls.get_connection().zinterstore(final_set, list(sets))<EOL><DEDENT>else:<EOL><INDENT>final_set = super(ExtendedCollectionManager, self)._combine_sets(sets, final_set)<EOL><DEDENT>return final_set<EOL>
Given a list of sets, combine them to create the final set that will be used to make the final redis call. If we have at least one sorted set, use zinterstore instead of sinterstore
f15571:c0:m6
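The switch between the two commands matters because their semantics differ; a minimal pure-Python sketch of what the Redis side does (these are stand-in functions, not redis-py calls): SINTERSTORE keeps only members common to all sets, while ZINTERSTORE keeps common members and, by default, sums their scores.

```python
def sinter(*sets):
    # Plain set intersection, like SINTERSTORE (without the store step).
    result = set(sets[0])
    for s in sets[1:]:
        result &= s
    return result

def zinter(*zsets):
    # Each "zset" is a dict of member -> score. ZINTERSTORE keeps members
    # present in all inputs and aggregates scores (default aggregate: SUM).
    common = set(zsets[0])
    for z in zsets[1:]:
        common &= set(z)
    return {member: sum(z[member] for z in zsets) for member in common}

print(sinter({"a", "b"}, {"a", "c"}))       # {'a'}
print(zinter({"a": 1, "b": 2}, {"a": 10}))  # {'a': 11}
```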
def _final_redis_call(self, final_set, sort_options):
conn = self.cls.get_connection()<EOL>if self._has_sortedsets and sort_options is None:<EOL><INDENT>return conn.zrange(final_set, <NUM_LIT:0>, -<NUM_LIT:1>)<EOL><DEDENT>if self.stored_key and not self._lazy_collection['<STR_LIT>'] and len(self._lazy_collection['<STR_LIT>']) == <NUM_LIT:1> and (sort_options is None or sort_options == {'<STR_LIT>': '<STR_LIT>'}):<EOL><INDENT>return conn.lrange(final_set, <NUM_LIT:0>, -<NUM_LIT:1>)<EOL><DEDENT>return super(ExtendedCollectionManager, self)._final_redis_call(<EOL>final_set, sort_options)<EOL>
The final redis call to obtain the values to return from the "final_set" with some sort options. If we have at least one sorted set and we don't have any sort options, call zrange on the final set, which is the result of a call to zinterstore.
f15571:c0:m7
def _collection_length(self, final_set):
conn = self.cls.get_connection()<EOL>if self._has_sortedsets:<EOL><INDENT>return conn.zcard(final_set)<EOL><DEDENT>elif self.stored_key and not self._lazy_collection['<STR_LIT>'] and len(self._lazy_collection['<STR_LIT>']) == <NUM_LIT:1>:<EOL><INDENT>return conn.llen(final_set)<EOL><DEDENT>return super(ExtendedCollectionManager, self)._collection_length(final_set)<EOL>
Return the length of the final collection, directly asking redis for the count without calling sort
f15571:c0:m8
def sort(self, **parameters):
self._sort_by_sortedset = None<EOL>is_sortedset = False<EOL>if parameters.get('<STR_LIT>'):<EOL><INDENT>if parameters.get('<STR_LIT>'):<EOL><INDENT>raise ValueError("<STR_LIT>"<EOL>"<STR_LIT>")<EOL><DEDENT>by = parameters.get('<STR_LIT>', None)<EOL>if isinstance(by, SortedSetField) and getattr(by, '<STR_LIT>', None):<EOL><INDENT>by = by.key<EOL><DEDENT>elif not isinstance(by, str):<EOL><INDENT>by = None<EOL><DEDENT>if by is None:<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>'<STR_LIT>'<EOL>'<STR_LIT>'<EOL>'<STR_LIT>')<EOL><DEDENT>is_sortedset = True<EOL>parameters['<STR_LIT>'] = by<EOL>del parameters['<STR_LIT>']<EOL><DEDENT>else:<EOL><INDENT>by = parameters.get('<STR_LIT>')<EOL>if by and isinstance(by, RedisField):<EOL><INDENT>parameters['<STR_LIT>'] = by.name<EOL><DEDENT><DEDENT>super(ExtendedCollectionManager, self).sort(**parameters)<EOL>if is_sortedset:<EOL><INDENT>self._sort_by_sortedset = self._sort<EOL>self._sort = None<EOL><DEDENT>return self<EOL>
Enhance the default sort method to accept a new parameter "by_score", to use instead of "by" if you want to sort by the score of a sorted set. You must pass to "by_score" the key of a redis sorted set (or a SortedSetField attached to an instance)
f15571:c0:m9
def _zset_to_keys(self, key, values=None, alpha=False):
conn = self.cls.get_connection()<EOL>default = '<STR_LIT>' if alpha else '<STR_LIT>'<EOL>if values is None:<EOL><INDENT>result = conn.zrange(key, start=<NUM_LIT:0>, end=-<NUM_LIT:1>, withscores=True)<EOL>values = list(islice(chain.from_iterable(result), <NUM_LIT:0>, None, <NUM_LIT:2>))<EOL><DEDENT>else:<EOL><INDENT>if isinstance(self.cls.database, PipelineDatabase):<EOL><INDENT>with self.cls.database.pipeline(transaction=False) as pipe:<EOL><INDENT>for value in values:<EOL><INDENT>pipe.zscore(key, value)<EOL><DEDENT>scores = pipe.execute()<EOL><DEDENT><DEDENT>else:<EOL><INDENT>scores = []<EOL>for value in values:<EOL><INDENT>scores.append(conn.zscore(key, value))<EOL><DEDENT><DEDENT>result = []<EOL>for index, value in enumerate(values):<EOL><INDENT>score = scores[index]<EOL>if score is None:<EOL><INDENT>score = default<EOL><DEDENT>result.append((value, score))<EOL><DEDENT><DEDENT>base_tmp_key = self._unique_key()<EOL>conn.set(base_tmp_key, '<STR_LIT>') <EOL>tmp_keys = []<EOL>mapping = {}<EOL>for value, score in result:<EOL><INDENT>tmp_key = '<STR_LIT>' % (base_tmp_key, value)<EOL>tmp_keys.append(tmp_key)<EOL>mapping[tmp_key] = score<EOL><DEDENT>conn.mset(mapping)<EOL>return base_tmp_key, tmp_keys<EOL>
Convert a redis sorted set to a list of keys, to be used by sort. Each key is in the following format, for each value in the sorted set: random_string:value-in-the-sorted-set => score-of-the-value The random string is the same for all keys. If values is not None, only these values from the sorted set are saved as keys. If a value in values is not in the sorted set, it's still saved as a key but with a default value ('' if alpha is True, else '-inf')
f15571:c0:m10
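In the `values is None` branch, the method relies on redis-py returning `(member, score)` pairs from `zrange(..., withscores=True)`, then flattens the pairs and keeps every second item to recover just the members. A self-contained sketch of that `islice`/`chain` trick (the reply list below is a hypothetical redis reply):

```python
from itertools import chain, islice

# Hypothetical reply from conn.zrange(key, 0, -1, withscores=True):
result = [("id1", 1.0), ("id2", 2.0), ("id3", 3.5)]

# Flatten the pairs, then take every other item starting at index 0,
# which drops the scores and keeps the members in order:
values = list(islice(chain.from_iterable(result), 0, None, 2))
# values == ["id1", "id2", "id3"]
```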
def _prepare_sort_by_score(self, values, sort_options):
<EOL>base_tmp_key, tmp_keys = self._zset_to_keys(<EOL>key=self._sort_by_sortedset['<STR_LIT>'],<EOL>values=values,<EOL>)<EOL>sort_options['<STR_LIT>'] = '<STR_LIT>' % base_tmp_key<EOL>for key in ('<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT:store>'):<EOL><INDENT>if key in self._sort_by_sortedset:<EOL><INDENT>sort_options[key] = self._sort_by_sortedset[key]<EOL><DEDENT><DEDENT>if sort_options.get('<STR_LIT>'):<EOL><INDENT>try:<EOL><INDENT>pos = sort_options['<STR_LIT>'].index(SORTED_SCORE)<EOL><DEDENT>except:<EOL><INDENT>pass<EOL><DEDENT>else:<EOL><INDENT>sort_options['<STR_LIT>'][pos] = '<STR_LIT>' % base_tmp_key<EOL><DEDENT><DEDENT>return base_tmp_key, tmp_keys<EOL>
Create the keys to sort on from the sorted set referenced in self._sort_by_sortedset and adapt the sort options
f15571:c0:m11
def _prepare_results(self, results):
<EOL>if self._sort_by_sortedset_after and (len(results) > <NUM_LIT:1> or self._values):<EOL><INDENT>conn = self.cls.get_connection()<EOL>sort_params = {}<EOL>base_tmp_key, tmp_keys = self._prepare_sort_by_score(results, sort_params)<EOL>final_set = '<STR_LIT>' % base_tmp_key<EOL>conn.sadd(final_set, *results)<EOL>results = conn.sort(final_set, **sort_params)<EOL>conn.delete(*(tmp_keys + [final_set, base_tmp_key]))<EOL><DEDENT>if self._store:<EOL><INDENT>return<EOL><DEDENT>if self._values and self._values['<STR_LIT>'] != '<STR_LIT>':<EOL><INDENT>results = self._to_values(results)<EOL><DEDENT>return super(ExtendedCollectionManager, self)._prepare_results(results)<EOL>
Sort results by score if not done before (faster if we have no values to retrieve, or a slice)
f15571:c0:m12
def _to_values(self, collection):
result = zip(*([iter(collection)] * len(self._values['<STR_LIT>']['<STR_LIT>'])))<EOL>if self._values['<STR_LIT>'] == '<STR_LIT>':<EOL><INDENT>result = (dict(zip(self._values['<STR_LIT>']['<STR_LIT>'], a_result)) for a_result in result)<EOL><DEDENT>return result<EOL>
Regroup values in tuples or dicts for each "instance". Example: Given this result from redis: ['id1', 'name1', 'id2', 'name2'] tuples: [('id1', 'name1'), ('id2', 'name2')] dicts: [{'id': 'id1', 'name': 'name1'}, {'id': 'id2', 'name': 'name2'}]
f15571:c0:m13
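The regrouping step uses the classic `zip(*[iter(...)] * n)` chunking idiom: zipping n references to the same iterator consumes the flat list in chunks of n. A minimal sketch with the docstring's example data:

```python
collection = ["id1", "name1", "id2", "name2"]
fields = ["id", "name"]

# n references to ONE iterator: each zip step pulls n consecutive items.
tuples = list(zip(*([iter(collection)] * len(fields))))
# [("id1", "name1"), ("id2", "name2")]

# The dict mode pairs each chunk back with the field names:
dicts = [dict(zip(fields, t)) for t in tuples]
# [{"id": "id1", "name": "name1"}, {"id": "id2", "name": "name2"}]
```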
@property<EOL><INDENT>def _sort_by_sortedset_before(self):<DEDENT>
return self._sort_by_sortedset and self._sort_limits and (not self._lazy_collection['<STR_LIT>']<EOL>or self._want_score_value)<EOL>
Return True if we have to sort by set and do the stuff *before* asking redis for the sort
f15571:c0:m14
@property<EOL><INDENT>def _sort_by_sortedset_after(self):<DEDENT>
return self._sort_by_sortedset and not self._sort_limits and (not self._lazy_collection['<STR_LIT>']<EOL>or self._want_score_value)<EOL>
Return True if we have to sort by set and do the stuff *after* asking redis for the sort
f15571:c0:m15
@property<EOL><INDENT>def _want_score_value(self):<DEDENT>
return self._values and SORTED_SCORE in self._values['<STR_LIT>']['<STR_LIT>']<EOL>
Return True if we want the score of the sorted set used to sort in the results from values/values_list
f15571:c0:m16
def _prepare_sort_options(self, has_pk):
sort_options = super(ExtendedCollectionManager, self)._prepare_sort_options(has_pk)<EOL>if self._values:<EOL><INDENT>if not sort_options:<EOL><INDENT>sort_options = {}<EOL><DEDENT>sort_options['<STR_LIT>'] = self._values['<STR_LIT>']['<STR_LIT>']<EOL><DEDENT>if self._sort_by_sortedset_after:<EOL><INDENT>for key in ('<STR_LIT>', '<STR_LIT:store>'):<EOL><INDENT>if key in self._sort_by_sortedset:<EOL><INDENT>del self._sort_by_sortedset[key]<EOL><DEDENT><DEDENT>if sort_options and (not has_pk or self._want_score_value):<EOL><INDENT>for key in ('<STR_LIT>', '<STR_LIT:store>'):<EOL><INDENT>if key in sort_options:<EOL><INDENT>self._sort_by_sortedset[key] = sort_options.pop(key)<EOL><DEDENT><DEDENT><DEDENT>if not sort_options:<EOL><INDENT>sort_options = None<EOL><DEDENT><DEDENT>return sort_options<EOL>
Prepare sort options for _values attributes. If we manage the sort by score after getting the result, we do not want to get values from the first sort call, but only from the last one, after converting the results in the zset into keys
f15571:c0:m17
def _get_final_set(self, sets, pk, sort_options):
if self._lazy_collection['<STR_LIT>']:<EOL><INDENT>sets = sets[::]<EOL>sets.extend(self._lazy_collection['<STR_LIT>'])<EOL>if not self._lazy_collection['<STR_LIT>'] and not self.stored_key:<EOL><INDENT>sets.append(self.cls.get_field('<STR_LIT>').collection_key)<EOL><DEDENT><DEDENT>final_set, keys_to_delete_later = super(ExtendedCollectionManager,<EOL>self)._get_final_set(sets, pk, sort_options)<EOL>if final_set and self._sort_by_sortedset_before:<EOL><INDENT>base_tmp_key, tmp_keys = self._prepare_sort_by_score(None, sort_options)<EOL>if not keys_to_delete_later:<EOL><INDENT>keys_to_delete_later = []<EOL><DEDENT>keys_to_delete_later.append(base_tmp_key)<EOL>keys_to_delete_later += tmp_keys<EOL><DEDENT>return final_set, keys_to_delete_later<EOL>
Add intersects of sets and call parent's _get_final_set. If we have to sort by sorted set score, and we have a slice, we have to convert the whole sorted set to keys now.
f15571:c0:m18
def _add_filters(self, **filters):
string_filters = filters.copy()<EOL>for key, value in filters.items():<EOL><INDENT>is_extended = False<EOL>if isinstance(value, RedisField):<EOL><INDENT>if (not isinstance(value, SingleValueField)<EOL>or getattr(value, '<STR_LIT>', None) is None):<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>'<STR_LIT>'<EOL>'<STR_LIT>')<EOL><DEDENT>is_extended = True<EOL><DEDENT>elif isinstance(value, RedisModel):<EOL><INDENT>is_extended = True<EOL><DEDENT>if is_extended:<EOL><INDENT>if self._field_is_pk(key):<EOL><INDENT>raw_filter = RawFilter(key, value)<EOL>self._lazy_collection['<STR_LIT>'].add(raw_filter)<EOL><DEDENT>else:<EOL><INDENT>index, suffix, extra_field_parts = self._parse_filter_key(key)<EOL>parsed_filter = ParsedFilter(index, suffix, extra_field_parts, value)<EOL>self._lazy_collection['<STR_LIT>'].append(parsed_filter)<EOL><DEDENT>string_filters.pop(key)<EOL><DEDENT><DEDENT>super(ExtendedCollectionManager, self)._add_filters(**string_filters)<EOL>return self<EOL>
In addition to the normal _add_filters, this one accepts RedisField objects on the right part of a filter. The value will be fetched from redis when calling the collection. The filter value can also be a model instance, in which case its PK will be fetched when calling the collection, too.
f15571:c0:m19
def _get_pk(self):
pk = super(ExtendedCollectionManager, self)._get_pk()<EOL>if pk is not None and isinstance(pk, RawFilter):<EOL><INDENT>if isinstance(pk.value, RedisModel):<EOL><INDENT>pk = pk.value.pk.get()<EOL><DEDENT>elif isinstance(pk.value, SingleValueField):<EOL><INDENT>pk = pk.value.proxy_get()<EOL><DEDENT>else:<EOL><INDENT>raise ValueError(u'<STR_LIT>' % pk.value)<EOL><DEDENT><DEDENT>return pk<EOL>
Override the default _get_pk method to retrieve the real pk value if we have a SingleValueField or a RedisModel instead of a real PK value
f15571:c0:m20
def _coerce_fields_parameters(self, fields):
try:<EOL><INDENT>sorted_score_pos = fields.index(SORTED_SCORE)<EOL><DEDENT>except:<EOL><INDENT>sorted_score_pos = None<EOL><DEDENT>else:<EOL><INDENT>fields = list(fields)<EOL>fields.pop(sorted_score_pos)<EOL><DEDENT>final_fields = {'<STR_LIT>': [], '<STR_LIT>': []}<EOL>for field_name in fields:<EOL><INDENT>if self._field_is_pk(field_name):<EOL><INDENT>final_fields['<STR_LIT>'].append(field_name)<EOL>final_fields['<STR_LIT>'].append('<STR_LIT:#>')<EOL><DEDENT>else:<EOL><INDENT>if not self.cls.has_field(field_name):<EOL><INDENT>raise ValueError("<STR_LIT>"<EOL>"<STR_LIT>" % (field_name, self.cls.__name__))<EOL><DEDENT>field = self.cls.get_field(field_name)<EOL>if isinstance(field, MultiValuesField):<EOL><INDENT>raise ValueError("<STR_LIT>"<EOL>"<STR_LIT>" % field_name)<EOL><DEDENT>final_fields['<STR_LIT>'].append(field_name)<EOL>final_fields['<STR_LIT>'].append(field.sort_wildcard)<EOL><DEDENT><DEDENT>if sorted_score_pos is not None:<EOL><INDENT>final_fields['<STR_LIT>'].insert(sorted_score_pos, SORTED_SCORE)<EOL>final_fields['<STR_LIT>'].insert(sorted_score_pos, SORTED_SCORE)<EOL><DEDENT>return final_fields<EOL>
Used by values and values_list to get the list of fields to use in the redis sort command to retrieve fields. The result is a dict with two lists: - 'names', with wanted field names - 'keys', with keys to use in the sort command When sorting by score, we allow retrieving the score in values/values_list. For this, just pass SORTED_SCORE (importable from contrib.collection) as a name to retrieve. If in the end the result is not sorted by score, the value for this part will be None
f15571:c0:m21
def store(self, key=None, ttl=DEFAULT_STORE_TTL):
old_sort_limits_and_len_mode = None if self._sort_limits is None else self._sort_limits.copy(), self._len_mode<EOL>try:<EOL><INDENT>self._store = True<EOL>sort_options = None<EOL>if self._sort is not None:<EOL><INDENT>sort_options = self._sort.copy()<EOL><DEDENT>values = None<EOL>if self._values is not None:<EOL><INDENT>values = self._values<EOL>self._values = None<EOL><DEDENT>store_key = key or self._unique_key()<EOL>if self._sort is None:<EOL><INDENT>self._sort = {}<EOL><DEDENT>self._sort['<STR_LIT:store>'] = store_key<EOL>if self._lazy_collection['<STR_LIT>'] and not self._values:<EOL><INDENT>self.values('<STR_LIT>')<EOL><DEDENT>self._len_mode = False<EOL>self._collection<EOL>self._store = False<EOL>self._sort = sort_options<EOL>self._values = values<EOL>stored_collection = self.__class__(self.cls)<EOL>stored_collection.from_stored(store_key)<EOL>if ttl is not None:<EOL><INDENT>self.cls.get_connection().expire(store_key, ttl)<EOL><DEDENT>for attr in ('<STR_LIT>', '<STR_LIT>', '<STR_LIT>'):<EOL><INDENT>setattr(stored_collection, attr, deepcopy(getattr(self, attr)))<EOL><DEDENT>return stored_collection<EOL><DEDENT>finally:<EOL><INDENT>self._sort_limits, self._len_mode = old_sort_limits_and_len_mode<EOL><DEDENT>
Will call the collection and store the result in Redis, and return a new collection based on this stored result. Note that only primary keys are stored, ie calls to values/values_list are ignored when storing the result. But choices about instances/values_list are transmitted to the new collection. If no key is given, a new one will be generated. The ttl is the time redis will keep the new key. By default it's DEFAULT_STORE_TTL, which is 60 seconds. You can pass None if you don't want expiration.
f15571:c0:m22
def from_stored(self, key):
<EOL>if self.stored_key:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>self.stored_key = key<EOL>self.intersect(_StoredCollection(self.cls.get_connection(), key))<EOL>self.sort(by='<STR_LIT>') <EOL>self._stored_len = self.cls.get_connection().llen(key)<EOL>return self<EOL>
Set the current collection as based on a stored one. The key argument is the key of the stored collection.
f15571:c0:m23
def stored_key_exists(self):
return self.cls.get_connection().exists(self.stored_key)<EOL>
Check the existence of the stored key (useful if the collection is based on a stored one, to check if the redis key still exists)
f15571:c0:m24
def reset_result_type(self):
self._values = None<EOL>return super(ExtendedCollectionManager, self).reset_result_type()<EOL>
Reset the type of values expected for the collection (ie cancel a previous "instances" or "values" call)
f15571:c0:m25
def values(self, *fields):
if not fields:<EOL><INDENT>fields = self._get_simple_fields()<EOL><DEDENT>fields = self._coerce_fields_parameters(fields)<EOL>self._instances = False<EOL>self._values = {'<STR_LIT>': fields, '<STR_LIT>': '<STR_LIT>'}<EOL>return self<EOL>
Ask the collection to return a list of dict of given fields for each instance found in the collection. If no fields are given, all "simple value" fields are used.
f15571:c0:m26
def values_list(self, *fields, **kwargs):
flat = kwargs.pop('<STR_LIT>', False)<EOL>if kwargs:<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>% list(kwargs))<EOL><DEDENT>if not fields:<EOL><INDENT>fields = self._get_simple_fields()<EOL><DEDENT>if flat and len(fields) > <NUM_LIT:1>:<EOL><INDENT>raise ValueError("<STR_LIT>")<EOL><DEDENT>fields = self._coerce_fields_parameters(fields)<EOL>self._instances = False<EOL>self._values = {'<STR_LIT>': fields, '<STR_LIT>': '<STR_LIT>' if flat else '<STR_LIT>'}<EOL>return self<EOL>
Ask the collection to return a list of tuples of given fields (in the given order) for each instance found in the collection. If 'flat=True' is passed, the resulting list will be flat, ie without tuples. It's a valid kwarg only if only one field is given. If no fields are given, all "simple value" fields are used.
f15571:c0:m27
def lmembers(self):
return self.connection.lrange(self.key, <NUM_LIT:0>, -<NUM_LIT:1>)<EOL>
Return the list of all members of the list, used by _list_to_set if no scripting
f15571:c1:m1
def pipeline(self, transaction=True, share_in_threads=False):
return _Pipeline(self, transaction=transaction, share_in_threads=share_in_threads)<EOL>
A replacement for the default redis pipeline method which manages saving and restoring of the connection. ALL calls to redis for the current database will pass via the pipeline. So if you don't use watches, all getters will not return any value, but results will be available via the pipe.execute() call. Please refer to the redis-py documentation. To use with "with": ### # Simple multi/exec with database.pipeline() as pipe: # do some stuff... result = pipe.execute() ### # Simple pipeline (no transaction, no atomicity) with database.pipeline(transaction=False) as pipe: # do some stuff... result = pipe.execute() ### # Advanced use with watch with database.pipeline() as pipe: while 1: try: if watches: pipe.watch(watches) # get watched stuff pipe.multi() # do some stuff... return pipe.execute() except WatchError: continue
f15572:c0:m1
def transaction(self, func, *watches, **kwargs):
with self.pipeline(True, share_in_threads=kwargs.get('<STR_LIT>', False)) as pipe:<EOL><INDENT>while <NUM_LIT:1>:<EOL><INDENT>try:<EOL><INDENT>if watches:<EOL><INDENT>pipe.watch(*watches)<EOL><DEDENT>func(pipe)<EOL>return pipe.execute()<EOL><DEDENT>except WatchError:<EOL><INDENT>continue<EOL><DEDENT><DEDENT><DEDENT>
Convenience method for executing the callable `func` as a transaction while watching all keys specified in `watches`. The 'func' callable should expect a single argument which is a Pipeline object.
f15572:c0:m2
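The watch/run/execute retry loop in `transaction` can be exercised without a server; below is a sketch using stub classes (all names here are stand-ins, not redis-py objects) where the first `execute()` raises to simulate a watched key changing under us:

```python
class WatchError(Exception):
    """Stand-in for redis.WatchError."""

class StubPipeline:
    """Fake pipeline: the first execute() simulates a concurrent write."""
    def __init__(self):
        self.attempts = 0
    def watch(self, *names):
        pass
    def multi(self):
        pass
    def execute(self):
        self.attempts += 1
        if self.attempts == 1:
            raise WatchError("watched key changed")
        return ["OK"]

def transaction(pipe, func, *watches):
    # Same loop shape as the method above: watch the keys, run the
    # callable, execute; on WatchError start over from the watch.
    while True:
        try:
            if watches:
                pipe.watch(*watches)
            func(pipe)
            return pipe.execute()
        except WatchError:
            continue

pipe = StubPipeline()
result = transaction(pipe, lambda p: p.multi(), "some:key")
# first attempt raises WatchError, the retry succeeds
```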
@property<EOL><INDENT>def _connection(self):<DEDENT>
if self._pipelined_connection is not None:<EOL><INDENT>if self._pipelined_connection.share_in_threads or threading.current_thread().ident == self._pipelined_connection.current_thread_id:<EOL><INDENT>return self._pipelined_connection<EOL><DEDENT><DEDENT>return self._direct_connection<EOL>
If we have a pipeline that is shared between threads, or that belongs to the current thread, return it; else use the direct connection
f15572:c0:m3
@_connection.setter<EOL><INDENT>def _connection(self, value):<DEDENT>
if isinstance(value, _Pipeline):<EOL><INDENT>self._pipelined_connection = value<EOL><DEDENT>else:<EOL><INDENT>self._direct_connection = value<EOL>self._pipelined_connection = None<EOL><DEDENT>
If the value is a pipeline, save it as the connection to use for pipelines, if it is to be shared between threads or used from the right thread. Do not remove the direct connection. If it is not a pipeline, clear it, and set the direct connection again.
f15572:c0:m4
def watch(self, *names):
watches = []<EOL>for watch in names:<EOL><INDENT>if isinstance(watch, RedisField):<EOL><INDENT>watch = watch.key<EOL><DEDENT>watches.append(watch)<EOL><DEDENT>return super(_Pipeline, self).watch(*watches)<EOL>
Override the default watch method to allow the user to pass RedisField objects as names, which will be translated to their real keys and passed to the default watch method
f15572:c1:m1
def __init__(self, *args, **kwargs):
self.lockable = self.__class__.lockable<EOL>self._connected = False<EOL>for attr_name in self._fields:<EOL><INDENT>attr = getattr(self, "<STR_LIT>" % attr_name)<EOL>newattr = copy(attr)<EOL>newattr._attach_to_instance(self)<EOL>setattr(self, attr_name, newattr)<EOL><DEDENT>pk_field_name = getattr(self, "<STR_LIT>").name<EOL>if pk_field_name != '<STR_LIT>':<EOL><INDENT>self.pk = getattr(self, pk_field_name)<EOL><DEDENT>self._pk = None<EOL>self.get_field = self.get_instance_field<EOL>if len(args) > <NUM_LIT:0> and len(kwargs) > <NUM_LIT:0>:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>self._init_fields = set()<EOL>if len(kwargs) > <NUM_LIT:0>:<EOL><INDENT>kwargs_pk_field_name = None<EOL>for field_name, value in iteritems(kwargs):<EOL><INDENT>if self._field_is_pk(field_name):<EOL><INDENT>if kwargs_pk_field_name:<EOL><INDENT>raise ValueError(u'<STR_LIT>'<EOL>'<STR_LIT>' % pk_field_name)<EOL><DEDENT>kwargs_pk_field_name = field_name<EOL>field_name = pk_field_name<EOL><DEDENT>if not self.has_field(field_name):<EOL><INDENT>raise ValueError(u"<STR_LIT>"<EOL>"<STR_LIT>" % (field_name, self.__class__.__name__))<EOL><DEDENT>field = self.get_field(field_name)<EOL>if field.unique:<EOL><INDENT>field.check_uniqueness(value)<EOL><DEDENT>self._init_fields.add(field_name)<EOL><DEDENT>if kwargs_pk_field_name:<EOL><INDENT>self.pk.set(kwargs[kwargs_pk_field_name])<EOL><DEDENT>for field_name in self._fields:<EOL><INDENT>if field_name not in kwargs or self._field_is_pk(field_name):<EOL><INDENT>continue<EOL><DEDENT>field = self.get_field(field_name)<EOL>field.proxy_set(kwargs[field_name])<EOL><DEDENT><DEDENT>if len(args) == <NUM_LIT:1>:<EOL><INDENT>self._pk = self.pk.normalize(args[<NUM_LIT:0>])<EOL>self.connect()<EOL><DEDENT>
Init or retrieve an object stored in Redis. Here is what init manages: - no args, no kwargs: just instantiate in a python way, no connection to redis - some kwargs == instantiate, connect, and set the properties received - one arg == get from pk
f15573:c1:m0
def connect(self):
if self.connected:<EOL><INDENT>return<EOL><DEDENT>pk = self._pk<EOL>if self.exists(pk=pk):<EOL><INDENT>self._connected = True<EOL><DEDENT>else:<EOL><INDENT>self._pk = None<EOL>self._connected = False<EOL>raise DoesNotExist("<STR_LIT>" % (self.__class__.__name__, pk))<EOL><DEDENT>
Connect the instance to redis by checking the existence of its primary key. Do nothing if already connected.
f15573:c1:m2
@classmethod<EOL><INDENT>def lazy_connect(cls, pk):<DEDENT>
instance = cls()<EOL>instance._pk = instance.pk.normalize(pk)<EOL>instance._connected = False<EOL>return instance<EOL>
Create an object, setting its primary key without testing it. So the instance is not connected
f15573:c1:m3
@property<EOL><INDENT>def connected(self):<DEDENT>
return self._connected<EOL>
A property to check if the model is connected to redis (ie if it has a primary key checked for existence)
f15573:c1:m4
@classmethod<EOL><INDENT>def use_database(cls, database):<DEDENT>
return database._use_for_model(cls)<EOL>
Transfer the current model to the new database. Move subclasses to the new database too, if they actually share the same one (so it's easy to call use_database on an abstract model to use the new database for all subclasses)
f15573:c1:m5
@classmethod<EOL><INDENT>def has_field(cls, field_name):<DEDENT>
return field_name == '<STR_LIT>' or field_name in cls._fields<EOL>
Return True if the given field name is an allowed field for this model
f15573:c1:m6