signature stringlengths 8 3.44k | body stringlengths 0 1.41M | docstring stringlengths 1 122k | id stringlengths 5 17 |
|---|---|---|---|
def __init__(self): | super().__init__('''<STR_LIT>''', dependencies=(_find_inverse_gamma(), igam_fac(), igam()))<EOL> | Copied from Scipy (https://github.com/scipy/scipy/blob/master/scipy/special/cephes/igami.c), 05-05-2018. | f15550:c10:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''', dependencies=(_find_inverse_gamma(), igam_fac(), igam()))<EOL> | Copied from Scipy (https://github.com/scipy/scipy/blob/master/scipy/special/cephes/igami.c), 05-05-2018. | f15550:c11:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''', dependencies=(_igami_impl(), _igamci_impl(), _find_inverse_gamma(), igam_fac(), igamc()))<EOL> | Copied from Scipy (https://github.com/scipy/scipy/blob/master/scipy/special/cephes/igami.c), 05-05-2018. | f15550:c12:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''', dependencies=(igam_series(), igamc(), igam_igamc_asymptotic_series()))<EOL> | Incomplete Gamma integral
Also known as the regularized lower incomplete gamma function.
Copied from Scipy (https://github.com/scipy/scipy/blob/master/scipy/special/cephes/igam.c), 05-05-2018::
/* igam.c
*
* Incomplete Gamma integral
*
*
*
* SYNOPSIS:
*
* double a, x, y, igam();
*
* y = igam( a, x );
*
* DESCRIPTION:
*
* The function is defined by
*
* x
* -
* 1 | | -t a-1
* igam(a,x) = ----- | e t dt.
* - | |
* | (a) -
* 0
*
*
* In this implementation both arguments must be positive.
* The integral is evaluated by either a power series or
* continued fraction expansion, depending on the relative
* values of a and x.
*
* ACCURACY:
*
* Relative error:
* arithmetic domain # trials peak rms
* IEEE 0,30 200000 3.6e-14 2.9e-15
* IEEE 0,100 300000 9.9e-14 1.5e-14
*/ | f15550:c13:m0 |
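The power-series branch used by this routine (the small-`x` case) can be sketched in plain Python. This is an illustrative reimplementation of the series P(a, x) = x^a e^{-x} Σ_{k≥0} x^k / Γ(a+k+1), not the OpenCL code itself; the name `igam_series` and the iteration limits are chosen here for illustration:

```python
import math

def igam_series(a, x, max_iter=200, tol=1e-16):
    """Regularized lower incomplete gamma P(a, x) via the power series
    P(a, x) = x^a * e^{-x} * sum_{k>=0} x^k / Gamma(a + k + 1)."""
    if x <= 0:
        return 0.0
    term = 1.0 / math.gamma(a + 1.0)   # k = 0 term
    total = term
    for k in range(1, max_iter):
        term *= x / (a + k)            # now x^k / Gamma(a + k + 1)
        total += term
        if term < tol * total:
            break
    return total * math.exp(a * math.log(x) - x)
```

For a = 1 this reduces to 1 - exp(-x), and for a = 1/2 it equals erf(sqrt(x)), which makes convenient sanity checks.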
def __init__(self): | super().__init__('''<STR_LIT>''', dependencies=(igam_igamc_asymptotic_series(), igamc_series(),<EOL>igam_series(), igamc_continued_fraction()))<EOL> | Complemented incomplete Gamma integral
Also known as the regularized upper incomplete gamma function.
Copied from Scipy (https://github.com/scipy/scipy/blob/master/scipy/special/cephes/igam.c), 05-05-2018::
/* igamc()
*
* Complemented incomplete Gamma integral
*
*
*
* SYNOPSIS:
*
* double a, x, y, igamc();
*
* y = igamc( a, x );
*
* DESCRIPTION:
*
* The function is defined by
*
*
* igamc(a,x) = 1 - igam(a,x)
*
* inf.
* -
* 1 | | -t a-1
* = ----- | e t dt.
* - | |
* | (a) -
* x
*
*
* In this implementation both arguments must be positive.
* The integral is evaluated by either a power series or
* continued fraction expansion, depending on the relative
* values of a and x.
*
* ACCURACY:
*
* Tested at random a, x.
* a x Relative error:
* arithmetic domain domain # trials peak rms
* IEEE 0.5,100 0,100 200000 1.9e-14 1.7e-15
* IEEE 0.01,0.5 0,100 200000 1.4e-13 1.6e-15
*/ | f15550:c14:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''', dependencies=(log1pmx(), lanczos_sum_expg_scaled()))<EOL> | Compute x^a * exp(-x) / gamma(a)
Copied from Scipy (https://github.com/scipy/scipy/blob/master/scipy/special/cephes/igam.c), 05-05-2018. | f15550:c15:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''', dependencies=(igam_fac(),))<EOL> | Compute igamc using DLMF 8.9.2.
Copied from Scipy (https://github.com/scipy/scipy/blob/master/scipy/special/cephes/igam.c), 05-05-2018. | f15550:c16:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''', dependencies=(igam_fac(),))<EOL> | Compute igam using DLMF 8.11.4
Copied from Scipy (https://github.com/scipy/scipy/blob/master/scipy/special/cephes/igam.c), 05-05-2018. | f15550:c17:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''', dependencies=(lgam1p(),))<EOL> | Compute igamc using DLMF 8.7.3.
This is related to the series in igam_series but extra care is taken to avoid cancellation.
Copied from Scipy (https://github.com/scipy/scipy/blob/master/scipy/special/cephes/igam.c), 05-05-2018. | f15550:c18:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''', dependencies=(log1pmx(),))<EOL> | Compute igam/igamc using DLMF 8.12.3/8.12.4.
Copied from Scipy (https://github.com/scipy/scipy/blob/master/scipy/special/cephes/igam.c), 05-05-2018.
The argument ``func`` should be 1 when computing for IGAM and 0 when computing for IGAMC. | f15550:c19:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''')<EOL> | Compute the first term of the legendre polynomial for the given value x and the polynomial degree n.
The Legendre polynomials, Pn(x), are orthogonal on the interval [-1,1] with weight function w(x) = 1
for -1 <= x <= 1 and 0 elsewhere. They are normalized so that Pn(1) = 1. The inner products are:
.. code-block:: c
<Pn,Pm> = 0 if n != m,
<Pn,Pn> = 2/(2n+1) if n >= 0.
This routine calculates Pn(x) using the following recursion:
.. code-block:: c
(k+1) P[k+1](x) = (2k+1)x P[k](x) - k P[k-1](x), k = 1,...,n-1
P[0](x) = 1, P[1](x) = x.
The function arguments are:
* x: The argument of the Legendre polynomial Pn.
* n: The degree of the Legendre polynomial Pn.
The return value is Pn(x) if n is a non-negative integer. If n is negative, 0 is returned. | f15551:c0:m0 |
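The three-term recursion above is straightforward to mirror in Python. The sketch below follows the same conventions as the CL routine (Pn(1) = 1, and 0 returned for negative n); the function name is chosen here for illustration:

```python
def first_legendre_term(x, n):
    """Pn(x) via the recursion (k+1) P[k+1] = (2k+1) x P[k] - k P[k-1];
    returns 0 for negative n, matching the CL routine."""
    if n < 0:
        return 0.0
    p_prev, p = 1.0, x          # P[0] and P[1]
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p
```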
def __init__(self): | super().__init__('''<STR_LIT>''')<EOL> | Compute a range of Legendre terms for the given value x and the polynomial degree n.
The Legendre polynomials, Pn(x), are orthogonal on the interval [-1,1] with weight function w(x) = 1
for -1 <= x <= 1 and 0 elsewhere. They are normalized so that Pn(1) = 1. The inner products are:
.. code-block:: c
<Pn,Pm> = 0 if n != m,
<Pn,Pn> = 2/(2n+1) if n >= 0.
This routine calculates Pn(x) for all n in [0, 1, 2, ..., n] using the recursion:
.. code-block:: c
(k+1) P[k+1](x) = (2k+1)x P[k](x) - k P[k-1](x), k = 1,...,n-1
P[0](x) = 1, P[1](x) = x.
That is, this function will fill the given array legendre_terms with the values:
[0] = firstLegendreTerm(x, 0)
[1] = firstLegendreTerm(x, 1)
[2] = firstLegendreTerm(x, 2)
[3] = firstLegendreTerm(x, 3)
...
The function arguments are:
x: The argument of the Legendre polynomial Pn.
n: The number of terms to fill.
legendre_terms: an array of length n for storing the Legendre terms | f15551:c1:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''')<EOL> | Compute a range of even legendre terms for the given value x and the polynomial degree n.
The Legendre polynomials, Pn(x), are orthogonal on the interval [-1,1] with weight function w(x) = 1
for -1 <= x <= 1 and 0 elsewhere. They are normalized so that Pn(1) = 1. The inner products are:
.. code-block:: c
<Pn,Pm> = 0 if n != m,
<Pn,Pn> = 2/(2n+1) if n >= 0.
This routine calculates Pn(x) for all n in [0, 2, 4, ..., n] using the recursion:
.. code-block:: c
(k+1) P[k+1](x) = (2k+1)x P[k](x) - k P[k-1](x), k = 1,...,n-1
P[0](x) = 1, P[1](x) = x.
That is, this function will fill the given array legendre_terms with the values:
[0] = firstLegendreTerm(x, 0)
[1] = firstLegendreTerm(x, 2)
[2] = firstLegendreTerm(x, 4)
[3] = firstLegendreTerm(x, 6)
...
The function arguments are:
x: The argument of the Legendre polynomial Pn.
n: The number of terms to fill.
legendre_terms: an array of length n/2 for storing the even Legendre terms | f15551:c2:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''')<EOL> | Compute a range of odd legendre terms for the given value x and the polynomial degree n.
The Legendre polynomials, Pn(x), are orthogonal on the interval [-1,1] with weight function w(x) = 1
for -1 <= x <= 1 and 0 elsewhere. They are normalized so that Pn(1) = 1. The inner products are:
.. code-block:: c
<Pn,Pm> = 0 if n != m,
<Pn,Pn> = 2/(2n+1) if n >= 0.
This routine calculates Pn(x) for all n in [1, 3, 5, ..., n] using the recursion:
.. code-block:: c
(k+1) P[k+1](x) = (2k+1)x P[k](x) - k P[k-1](x), k = 1,...,n-1
P[0](x) = 1, P[1](x) = x.
That is, this function will fill the given array legendre_terms with the values:
[0] = firstLegendreTerm(x, 1)
[1] = firstLegendreTerm(x, 3)
[2] = firstLegendreTerm(x, 5)
[3] = firstLegendreTerm(x, 7)
...
The function arguments are:
x: The argument of the Legendre polynomial Pn.
n: The number of terms to fill.
legendre_terms: an array of length n/2 for storing the odd Legendre terms | f15551:c3:m0 |
def __init__(self, return_type, cl_function_name, parameter_list, cl_code_file,<EOL>var_replace_dict=None, **kwargs): | self._var_replace_dict = var_replace_dict<EOL>with open(os.path.abspath(cl_code_file), '<STR_LIT:r>') as f:<EOL><INDENT>code = f.read()<EOL><DEDENT>if var_replace_dict is not None:<EOL><INDENT>code = code % var_replace_dict<EOL><DEDENT>super().__init__(return_type, cl_function_name, parameter_list, code, **kwargs)<EOL>self._code = code<EOL> | Create a CL function for a library function.
These functions are not meant to be optimized, but can be used as helper functions in models.
Args:
cl_function_name (str): The name of the CL function
cl_code_file (str): The location of the code file
var_replace_dict (dict): In the cl_code file these replacements will be made
(using the % format function of Python) | f15552:c2:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''', dependencies=(ratevl(),))<EOL> | Copied from Scipy (https://github.com/scipy/scipy/blob/master/scipy/special/cephes/lanczos.c), 2018-05-07. | f15553:c0:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''')<EOL> | log(1 + x) - x | f15554:c0:m0 |
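Computing log(1 + x) - x naively loses precision for small x, since the two terms nearly cancel. A hedged Python sketch of the usual remedy, a truncated Taylor series near zero (the cutoff 1e-2 and term count are illustrative choices, not taken from the CL code):

```python
import math

def log1pmx(x):
    """log(1 + x) - x, using the Taylor series -x^2/2 + x^3/3 - ...
    for small |x| to avoid catastrophic cancellation."""
    if abs(x) < 1e-2:
        total, power = 0.0, x
        for k in range(2, 12):
            power *= -x          # (-1)^k x^k, sign alternating
            total += power / k
        return total
    return math.log1p(x) - x
```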
def __init__(self): | super().__init__('<STR_LIT>')<EOL> | Compute lgam(x + 1).
This is a simplification of the corresponding function in scipy
https://github.com/scipy/scipy/blob/master/scipy/special/cephes/unity.c 2018-05-14 | f15554:c1:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''')<EOL> | Returns (only) the real roots of a cubic polynomial.
This computes :math:`p(x) = \\sum_i c[i] * x^i = 0`, i.e. tries to find x such that :math:`p(x) = 0`
using the algebraic method.
The coefficients and the roots may point to the same address space to save memory.
This code is an OpenCL translation from the Python code to be found at:
https://github.com/shril/CubicEquationSolver/blob/master/CubicEquationSolver.py
Args:
coefficients: array of length 4, with the coefficients (a, b, c, d)
roots: array of length 3, for the return values. Please note that only the first *n* values will be set,
with n the number of returned real roots.
Returns:
the number of real roots | f15555:c0:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''')<EOL> | Routines for computing polynomials.
Copied from Scipy (https://github.com/scipy/scipy/blob/master/scipy/special/cephes/polevl.h), 05-05-2018.
Evaluates polynomial of degree N::
2 N
y = C + C x + C x +...+ C x
0 1 2 N
Coefficients are stored in reverse order::
coef[0] = C , ..., coef[N] = C .
N 0 | f15555:c1:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''')<EOL> | Routines for computing polynomials when coefficient of x^N is 1.0.
Copied from Scipy (https://github.com/scipy/scipy/blob/master/scipy/special/cephes/polevl.h), 05-05-2018.
Evaluates polynomial of degree N::
2 N
y = C + C x + C x +...+ C x
0 1 2 N
Coefficients are stored in reverse order::
coef[0] = C , ..., coef[N] = C .
N 0
In contrast to ``polevl``, this function assumes that coef[N] = 1.0 and is omitted from the array.
Its calling arguments are otherwise the same as polevl(). | f15555:c2:m0 |
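Both routines are Horner's scheme over coefficients stored highest power first; a Python sketch of the pair (names mirror the Cephes routines for clarity):

```python
def polevl(x, coef):
    """Horner evaluation; coef[0] is the coefficient of the highest power."""
    y = coef[0]
    for c in coef[1:]:
        y = y * x + c
    return y

def p1evl(x, coef):
    """Like polevl, but the leading coefficient is an implicit 1.0
    and is omitted from coef."""
    y = x + coef[0]
    for c in coef[1:]:
        y = y * x + c
    return y
```

For example, `polevl(x, [1.0, 0.0, -3.0])` evaluates x^2 - 3, and `p1evl(x, [0.0, -3.0])` evaluates the same polynomial with the leading 1 implied.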
def __init__(self): | super().__init__('''<STR_LIT>''')<EOL> | Evaluates a rational function.
Copied from Scipy (https://github.com/scipy/scipy/blob/master/scipy/special/cephes/polevl.h), 2018-05-07. | f15555:c3:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''')<EOL> | Return the zeroth-order modified Bessel function of the first kind
Original author of C code: M.G.R. Vogelaar | f15556:c0:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''', dependencies=(Besseli0(),))<EOL> | Return the log of the zeroth-order modified Bessel function of the first kind. | f15556:c1:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''')<EOL> | Computes :math:`log(cosh(x))`
For large x this will try to estimate it without overflow. For small x we use the OpenCL functions log and cosh.
The estimation for large numbers has been taken from:
https://github.com/JaneliaSciComp/tmt/blob/master/basics/logcosh.m | f15556:c2:m0 |
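The overflow-safe estimate for large x follows from the exact identity cosh(x) = e^{|x|}(1 + e^{-2|x|})/2; a Python sketch of that rewriting:

```python
import math

def log_cosh(x):
    """log(cosh(x)) without overflow: |x| - log(2) + log1p(exp(-2|x|)).
    For large |x| the log1p term underflows harmlessly to 0."""
    ax = abs(x)
    return ax - math.log(2.0) + math.log1p(math.exp(-2.0 * ax))
```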
def __init__(self, memspace='<STR_LIT>', memtype='<STR_LIT>'): | super().__init__(<EOL>memtype,<EOL>self.__class__.__name__ + '<STR_LIT:_>' + memspace + '<STR_LIT:_>' + memtype,<EOL>[],<EOL>resource_filename('<STR_LIT>', '<STR_LIT>'),<EOL>var_replace_dict={'<STR_LIT>': memspace, '<STR_LIT>': memtype})<EOL> | A CL function for calculating the Euclidean distance between n values.
Args:
memspace (str): The memory space of the memtyped array (private, constant, global).
memtype (str): the memory type to use, double, float, mot_float_type, ... | f15556:c4:m0 |
def __init__(self, function_name): | super().__init__('''<STR_LIT>'''.format(f=function_name))<EOL> | Create a CL function for integrating a function using Simpson's rule.
This creates a CL function specifically meant for integrating the function of the given name.
The name of the generated CL function will be 'simpsons_rule_<function_name>'.
Args:
function_name (str): the name of the function to integrate, accepting the arguments:
- a: the lower bound of the integral
- b: the upper bound of the integral
- n: the number of steps, i.e. the number of approximations to make
- data: a pointer to some data, this is passed on to the function we are integrating. | f15556:c5:m0 |
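The numerical scheme being generated here is the composite Simpson's rule (the CL version is specialized per integrand at code-generation time); a generic Python sketch:

```python
def simpsons_rule(f, a, b, n):
    """Composite Simpson's rule with n subintervals (n is rounded up to even).
    Interior points alternate weights 4 and 2; endpoints get weight 1."""
    if n % 2:
        n += 1
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return total * h / 3.0
```

Simpson's rule is exact for polynomials up to degree three, which makes for easy checks.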
def __init__(self): | super().__init__('''<STR_LIT>''')<EOL> | Cubic interpolation for a one-dimensional grid.
This uses the theory of Cubic Hermite splines for interpolating a one-dimensional grid of values.
At the borders, it will clip the values to the nearest border.
For more information on this method, see https://en.wikipedia.org/wiki/Cubic_Hermite_spline.
Example usage:
constant float data[] = {1.0, 2.0, 5.0, 6.0};
linear_cubic_interpolation(1.5, 4, data); | f15556:c6:m0 |
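A Python sketch of one common form of 1-D cubic Hermite interpolation with border clipping, using Catmull-Rom tangents; the exact tangent convention of the CL routine may differ, so treat this as an assumption-laden illustration:

```python
import math

def linear_cubic_interpolation(x, data):
    """Cubic Hermite (Catmull-Rom) interpolation on a 1-D grid,
    clipping neighbour lookups at the borders."""
    n = len(data)
    i = max(0, min(n - 1, int(math.floor(x))))
    t = x - i
    p0 = data[max(i - 1, 0)]        # clipped left neighbour
    p1 = data[i]
    p2 = data[min(i + 1, n - 1)]    # clipped right neighbours
    p3 = data[min(i + 2, n - 1)]
    return p1 + 0.5 * t * (p2 - p0
                           + t * (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3
                                  + t * (3.0 * (p1 - p2) + p3 - p0)))
```

At integer positions this reproduces the grid values exactly.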
def __init__(self): | super().__init__('''<STR_LIT>''')<EOL> | Calculate the eigenvalues of a symmetric 3x3 matrix.
This simple algorithm only works for real symmetric matrices.
The input to this function is an array with the upper triangular elements of the matrix.
The outputs are the eigenvalues such that eig1 >= eig2 >= eig3, i.e. from large to small.
Args:
A: the upper triangle of an 3x3 matrix
v: the output eigenvalues as a vector of three elements.
References:
[1]: https://en.wikipedia.org/wiki/Eigenvalue_algorithm#3.C3.973_matrices
[2]: Smith, Oliver K. (April 1961), "Eigenvalues of a symmetric 3 × 3 matrix.", Communications of the ACM,
4 (4): 168, doi:10.1145/355578.366316 | f15556:c7:m0 |
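The Smith (1961) method from reference [1] can be sketched in Python on the same upper-triangle layout. This is an illustrative reimplementation (function name chosen here), returning the eigenvalues from large to small:

```python
import math

def eigenvalues_3x3_symmetric(A):
    """Eigenvalues (descending) of a real symmetric 3x3 matrix given as its
    upper triangle [a00, a01, a02, a11, a12, a22] (Smith's method)."""
    a00, a01, a02, a11, a12, a22 = A
    p1 = a01 * a01 + a02 * a02 + a12 * a12
    if p1 == 0.0:                      # already diagonal
        return sorted((a00, a11, a22), reverse=True)
    q = (a00 + a11 + a22) / 3.0        # mean of the trace
    p2 = (a00 - q) ** 2 + (a11 - q) ** 2 + (a22 - q) ** 2 + 2.0 * p1
    p = math.sqrt(p2 / 6.0)
    # B = (A - q I) / p
    b00, b11, b22 = (a00 - q) / p, (a11 - q) / p, (a22 - q) / p
    b01, b02, b12 = a01 / p, a02 / p, a12 / p
    detb = (b00 * (b11 * b22 - b12 * b12)
            - b01 * (b01 * b22 - b12 * b02)
            + b02 * (b01 * b12 - b11 * b02))
    r = max(-1.0, min(1.0, detb / 2.0))  # clamp against rounding
    phi = math.acos(r) / 3.0
    eig1 = q + 2.0 * p * math.cos(phi)
    eig3 = q + 2.0 * p * math.cos(phi + 2.0 * math.pi / 3.0)
    return [eig1, 3.0 * q - eig1 - eig3, eig3]   # trace gives the middle one
```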
def __init__(self): | super().__init__('''<STR_LIT>''')<EOL> | Matrix multiplication of two square matrices.
Having this as a special function is slightly faster than a more generic matrix multiplication algorithm.
All matrices are expected to be in c/row-major order.
Parameters:
n: the rectangular size of the matrix (the number of rows / the number of columns).
A[n*n]: the left matrix
B[n*n]: the right matrix
C[n*n]: the output matrix | f15556:c8:m0 |
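In Python, the equivalent row-major computation is the classic triple loop over flat arrays:

```python
def square_matrix_mult(n, A, B):
    """C = A * B for flat row-major n*n lists:
    C[i*n + j] = sum_k A[i*n + k] * B[k*n + j]."""
    C = [0.0] * (n * n)
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i * n + k] * B[k * n + j]
            C[i * n + j] = s
    return C
```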
def __init__(self): | super().__init__('''<STR_LIT>''', dependencies=[eispack_tred2(), eispack_tql2()])<EOL> | Computes eigenvalues and eigenvectors of real symmetric matrix.
This uses the RS routine from the EISPACK code, to be found at:
https://people.sc.fsu.edu/~jburkardt/c_src/eispack/eispack.html
It first tridiagonalizes the matrix using Householder transformations and then computes the eigenvalues
of the tridiagonal matrix using QL transformations.
This routine only works with real symmetric matrices as input.
Parameters:
n: input, the order of the matrix.
A[n*n]: input, the real symmetric matrix to decompose
W[n]: output, the eigenvalues in ascending order.
Z[n*n]: output, the eigenvectors, can coincide with A[n*n].
scratch[n]: input, scratch data
Output:
error code: the error code from the TQL2 routine (see EISPACK).
The no-error, normal completion code is zero. | f15556:c9:m0 |
def __init__(self): | super().__init__('''<STR_LIT>''', dependencies=[eigen_decompose_real_symmetric_matrix()])<EOL> | Compute the pseudo-inverse of a real symmetric matrix stored as an upper triangular vector.
Results are placed in the upper triangular input vector.
Parameters:
n: the size of the symmetric matrix
A[n*(n+1)/2]: on input, the matrix to inverse. On output, the inverse of the matrix.
scratch[2*n + 2*n*n]: scratch data | f15556:c10:m0 |
def __init__(self, eval_func, nmr_parameters, patience=<NUM_LIT:2>, patience_line_search=None,<EOL>reset_method='<STR_LIT>', **kwargs): | dependencies = list(kwargs.get('<STR_LIT>', []))<EOL>dependencies.append(eval_func)<EOL>kwargs['<STR_LIT>'] = dependencies<EOL>params = {<EOL>'<STR_LIT>': eval_func.get_cl_function_name(),<EOL>'<STR_LIT>': nmr_parameters,<EOL>'<STR_LIT>': reset_method.upper(),<EOL>'<STR_LIT>': patience,<EOL>'<STR_LIT>': patience if patience_line_search is None else patience_line_search<EOL>}<EOL>super().__init__(<EOL>'<STR_LIT:int>', '<STR_LIT>', [<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT>'<EOL>],<EOL>resource_filename('<STR_LIT>', '<STR_LIT>'),<EOL>var_replace_dict=params, **kwargs)<EOL> | The Powell CL implementation.
Args:
eval_func (mot.lib.cl_function.CLFunction): the function we want to optimize, Should be of signature:
``double evaluate(local mot_float_type* x, void* data_void);``
nmr_parameters (int): the number of parameters in the model, this will be hardcoded in the method
patience (int): the patience of the Powell algorithm
patience_line_search (int): the patience of the line search algorithm. If None, we set it equal to the
patience.
reset_method (str): one of ``RESET_TO_IDENTITY`` or ``EXTRAPOLATED_POINT``. The method used to
reset the search directions every iteration. | f15556:c11:m0 |
def get_kernel_data(self): | return {<EOL>'<STR_LIT>': LocalMemory(<EOL>'<STR_LIT>', <NUM_LIT:3> * self._var_replace_dict['<STR_LIT>'] + self._var_replace_dict['<STR_LIT>']**<NUM_LIT:2>)<EOL>}<EOL> | Get the kernel data needed for this optimization routine to work. | f15556:c11:m1 |
def __init__(self, function_name): | params = {<EOL>'<STR_LIT>': function_name<EOL>}<EOL>super().__init__(<EOL>'<STR_LIT:int>', '<STR_LIT>', [],<EOL>resource_filename('<STR_LIT>', '<STR_LIT>'),<EOL>var_replace_dict=params)<EOL> | The NMSimplex algorithm as a reusable library component.
Args:
function_name (str): the name of the evaluation function to call.
This should point to a function with signature:
``double evaluate(local mot_float_type* x, void* data_void);`` | f15556:c12:m0 |
def get_kernel_data(self): | return {<EOL>'<STR_LIT>': LocalMemory(<EOL>'<STR_LIT>', self._nmr_parameters * <NUM_LIT:2> + (self._nmr_parameters + <NUM_LIT:1>) ** <NUM_LIT:2> + <NUM_LIT:1>),<EOL>'<STR_LIT>': LocalMemory('<STR_LIT>', self._nmr_parameters)<EOL>}<EOL> | Get the kernel data needed for this optimization routine to work. | f15556:c13:m1 |
def __init__(self, eval_func, nmr_parameters, patience=<NUM_LIT:10>,<EOL>patience_nmsimplex=<NUM_LIT:100>, alpha=<NUM_LIT:1.0>, beta=<NUM_LIT:0.5>, gamma=<NUM_LIT>, delta=<NUM_LIT:0.5>, scale=<NUM_LIT:1.0>, psi=<NUM_LIT>, omega=<NUM_LIT>,<EOL>adaptive_scales=True, min_subspace_length='<STR_LIT>', max_subspace_length='<STR_LIT>', **kwargs): | dependencies = list(kwargs.get('<STR_LIT>', []))<EOL>dependencies.append(eval_func)<EOL>dependencies.append(LibNMSimplex('<STR_LIT>'))<EOL>kwargs['<STR_LIT>'] = dependencies<EOL>params = {<EOL>'<STR_LIT>': eval_func.get_cl_function_name(),<EOL>'<STR_LIT>': patience,<EOL>'<STR_LIT>': patience_nmsimplex,<EOL>'<STR_LIT>': alpha,<EOL>'<STR_LIT>': beta,<EOL>'<STR_LIT>': gamma,<EOL>'<STR_LIT>': delta,<EOL>'<STR_LIT>': psi,<EOL>'<STR_LIT>': omega,<EOL>'<STR_LIT>': nmr_parameters,<EOL>'<STR_LIT>': int(bool(adaptive_scales)),<EOL>'<STR_LIT>': (min(<NUM_LIT:2>, nmr_parameters) if min_subspace_length == '<STR_LIT>' else min_subspace_length),<EOL>'<STR_LIT>': (min(<NUM_LIT:5>, nmr_parameters) if max_subspace_length == '<STR_LIT>' else max_subspace_length)<EOL>}<EOL>s = '<STR_LIT>'<EOL>for ind in range(nmr_parameters):<EOL><INDENT>s += '<STR_LIT>'.format(ind, scale)<EOL><DEDENT>params['<STR_LIT>'] = s<EOL>super().__init__(<EOL>'<STR_LIT:int>', '<STR_LIT>', [<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT>'<EOL>],<EOL>resource_filename('<STR_LIT>', '<STR_LIT>'), var_replace_dict=params, **kwargs)<EOL> | The Subplex optimization routines. | f15556:c14:m0 |
def get_kernel_data(self): | return {<EOL>'<STR_LIT>': LocalMemory(<EOL>'<STR_LIT>', <NUM_LIT:4> + self._var_replace_dict['<STR_LIT>'] * <NUM_LIT:2><EOL>+ self._var_replace_dict['<STR_LIT>'] * <NUM_LIT:2><EOL>+ (self._var_replace_dict['<STR_LIT>'] * <NUM_LIT:2><EOL>+ self._var_replace_dict['<STR_LIT>']+<NUM_LIT:1>)**<NUM_LIT:2> + <NUM_LIT:1>),<EOL>'<STR_LIT>': LocalMemory(<EOL>'<STR_LIT:int>', <NUM_LIT:2> + self._var_replace_dict['<STR_LIT>']<EOL>+ (self._var_replace_dict['<STR_LIT>'] // self._var_replace_dict['<STR_LIT>'])),<EOL>'<STR_LIT>': LocalMemory('<STR_LIT>', self._var_replace_dict['<STR_LIT>'])<EOL>}<EOL> | Get the kernel data needed for this optimization routine to work. | f15556:c14:m1 |
def __init__(self, eval_func, nmr_parameters, nmr_observations, jacobian_func, patience=<NUM_LIT>,<EOL>step_bound=<NUM_LIT>, scale_diag=<NUM_LIT:1>, usertol_mult=<NUM_LIT:30>, **kwargs): | dependencies = list(kwargs.get('<STR_LIT>', []))<EOL>dependencies.append(eval_func)<EOL>dependencies.append(jacobian_func)<EOL>kwargs['<STR_LIT>'] = dependencies<EOL>var_replace_dict = {<EOL>'<STR_LIT>': eval_func.get_cl_function_name(),<EOL>'<STR_LIT>': jacobian_func.get_cl_function_name(),<EOL>'<STR_LIT>': nmr_parameters,<EOL>'<STR_LIT>': patience,<EOL>'<STR_LIT>': nmr_observations,<EOL>'<STR_LIT>': int(bool(scale_diag)),<EOL>'<STR_LIT>': step_bound,<EOL>'<STR_LIT>': usertol_mult<EOL>}<EOL>super().__init__(<EOL>'<STR_LIT:int>', '<STR_LIT>', ['<STR_LIT>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT>'],<EOL>resource_filename('<STR_LIT>', '<STR_LIT>'),<EOL>var_replace_dict=var_replace_dict, **kwargs) | The Levenberg-Marquardt CL implementation.
Args:
eval_func (mot.lib.cl_function.CLFunction): the function we want to optimize, Should be of signature:
``void evaluate(local mot_float_type* x, void* data_void, local mot_float_type* result);``
nmr_parameters (int): the number of parameters in the model, this will be hardcoded in the method
nmr_observations (int): the number of observations in the model
jacobian_func (mot.lib.cl_function.CLFunction): the function used to compute the Jacobian.
patience (int): the patience of the Levenberg-Marquardt algorithm
step_bound (float): the relative bound on the initial step size
scale_diag (boolean): if set, rescale the variables using the diagonal of the approximated Hessian
usertol_mult (float): multiplier for the default user tolerances | f15556:c15:m0 |
def get_kernel_data(self): | return {<EOL>'<STR_LIT>': LocalMemory(<EOL>'<STR_LIT>', <NUM_LIT:8> +<EOL><NUM_LIT:2> * self._var_replace_dict['<STR_LIT>'] +<EOL><NUM_LIT:5> * self._var_replace_dict['<STR_LIT>'] +<EOL>self._var_replace_dict['<STR_LIT>'] * self._var_replace_dict['<STR_LIT>']),<EOL>'<STR_LIT>': LocalMemory('<STR_LIT:int>', self._var_replace_dict['<STR_LIT>'])<EOL>}<EOL> | Get the kernel data needed for this optimization routine to work. | f15556:c15:m1 |
def __init__(self, platform, device): | self._platform = platform<EOL>self._device = device<EOL>if (self._platform, self._device) not in _context_cache:<EOL><INDENT>context = cl.Context([device])<EOL>_context_cache[(self._platform, self._device)] = context<EOL><DEDENT>self._context = _context_cache[(self._platform, self._device)]<EOL>self._queue = cl.CommandQueue(self._context, device=device)<EOL> | Storage unit for an OpenCL environment.
Args:
platform (pyopencl platform): A PyOpenCL platform.
device (pyopencl device): A PyOpenCL device | f15558:c0:m0 |
@property<EOL><INDENT>def context(self):<DEDENT> | return self._context<EOL> | Get a CL context containing this device.
Returns:
cl.Context: a PyOpenCL device context | f15558:c0:m1 |
@property<EOL><INDENT>def queue(self):<DEDENT> | return self._queue<EOL> | Get a CL queue for this device and context.
Returns:
cl.Queue: a PyOpenCL queue | f15558:c0:m2 |
@property<EOL><INDENT>def supports_double(self):<DEDENT> | return device_supports_double(self.device)<EOL> | Check if the device listed by this environment supports double
Returns:
boolean: True if the device supports double precision, False otherwise. | f15558:c0:m3 |
@property<EOL><INDENT>def platform(self):<DEDENT> | return self._platform<EOL> | Get the platform associated with this environment.
Returns:
pyopencl platform: The platform associated with this environment. | f15558:c0:m4 |
@property<EOL><INDENT>def device(self):<DEDENT> | return self._device<EOL> | Get the device associated with this environment.
Returns:
pyopencl device: The device associated with this environment. | f15558:c0:m5 |
@property<EOL><INDENT>def is_gpu(self):<DEDENT> | return self._device.get_info(cl.device_info.TYPE) == cl.device_type.GPU<EOL> | Check if the device associated with this environment is a GPU.
Returns:
boolean: True if the device is a GPU, False otherwise. | f15558:c0:m6 |
@property<EOL><INDENT>def is_cpu(self):<DEDENT> | return self._device.get_info(cl.device_info.TYPE) == cl.device_type.CPU<EOL> | Check if the device associated with this environment is a CPU.
Returns:
boolean: True if the device is a CPU, False otherwise. | f15558:c0:m7 |
@property<EOL><INDENT>def device_type(self):<DEDENT> | return self._device.get_info(cl.device_info.TYPE)<EOL> | Get the device type of the device in this environment.
Returns:
the device type of this device. | f15558:c0:m8 |
def __eq__(self, other): | if isinstance(other, CLEnvironment):<EOL><INDENT>return other.platform == self.platform and other.device == self.device<EOL><DEDENT>return False<EOL> | A device is equal to another if the platform and the device are equal. | f15558:c0:m12 |
@staticmethod<EOL><INDENT>def single_device(cl_device_type='<STR_LIT>', platform=None, fallback_to_any_device_type=False):<DEDENT> | if isinstance(cl_device_type, str):<EOL><INDENT>cl_device_type = device_type_from_string(cl_device_type)<EOL><DEDENT>device = None<EOL>if platform is None:<EOL><INDENT>platforms = cl.get_platforms()<EOL><DEDENT>else:<EOL><INDENT>platforms = [platform]<EOL><DEDENT>for platform in platforms:<EOL><INDENT>devices = platform.get_devices(device_type=cl_device_type)<EOL>for dev in devices:<EOL><INDENT>if device_supports_double(dev):<EOL><INDENT>try:<EOL><INDENT>env = CLEnvironment(platform, dev)<EOL>return [env]<EOL><DEDENT>except cl.RuntimeError:<EOL><INDENT>pass<EOL><DEDENT><DEDENT><DEDENT><DEDENT>if not device:<EOL><INDENT>if fallback_to_any_device_type:<EOL><INDENT>return cl.get_platforms()[<NUM_LIT:0>].get_devices()<EOL><DEDENT>else:<EOL><INDENT>raise ValueError('<STR_LIT>'.format(<EOL>cl.device_type.to_string(cl_device_type)))<EOL><DEDENT><DEDENT>raise ValueError('<STR_LIT>')<EOL> | Get a list containing a single device environment, for a device of the given type on the given platform.
This will only fetch devices that support double precision (possibly requiring a pragma to enable it,
but double precision must be supported).
Args:
cl_device_type (cl.device_type.* or string): The type of the device we want,
can be an OpenCL device type or a string matching 'GPU', 'CPU' or 'ALL'.
platform (opencl platform): The OpenCL platform to select the devices from
fallback_to_any_device_type (boolean): If True, try to fall back to any possible device in the system.
Returns:
list of CLEnvironment: List with one element, the CL runtime environment requested. | f15558:c1:m0 |
@staticmethod<EOL><INDENT>def all_devices(cl_device_type=None, platform=None):<DEDENT> | if isinstance(cl_device_type, str):<EOL><INDENT>cl_device_type = device_type_from_string(cl_device_type)<EOL><DEDENT>runtime_list = []<EOL>if platform is None:<EOL><INDENT>platforms = cl.get_platforms()<EOL><DEDENT>else:<EOL><INDENT>platforms = [platform]<EOL><DEDENT>for platform in platforms:<EOL><INDENT>if cl_device_type:<EOL><INDENT>devices = platform.get_devices(device_type=cl_device_type)<EOL><DEDENT>else:<EOL><INDENT>devices = platform.get_devices()<EOL><DEDENT>for device in devices:<EOL><INDENT>if device_supports_double(device):<EOL><INDENT>env = CLEnvironment(platform, device)<EOL>runtime_list.append(env)<EOL><DEDENT><DEDENT><DEDENT>return runtime_list<EOL> | Get multiple device environments, optionally only of the indicated type.
This will only fetch devices that support double precision.
Args:
cl_device_type (cl.device_type.* or string): The type of the device we want,
can be an OpenCL device type or a string matching 'GPU' or 'CPU'.
platform (opencl platform): The OpenCL platform to select the devices from
Returns:
list of CLEnvironment: List with the CL device environments. | f15558:c1:m1 |
@staticmethod<EOL><INDENT>def smart_device_selection(preferred_device_type=None):<DEDENT> | cl_environments = CLEnvironmentFactory.all_devices(cl_device_type=preferred_device_type)<EOL>platform_names = [env.platform.name for env in cl_environments]<EOL>has_amd_pro_platform = any('<STR_LIT>' in name for name in platform_names)<EOL>if has_amd_pro_platform:<EOL><INDENT>return list(filter(lambda env: '<STR_LIT>' not in env.platform.name, cl_environments))<EOL><DEDENT>if preferred_device_type is not None and not len(cl_environments):<EOL><INDENT>return CLEnvironmentFactory.all_devices()<EOL><DEDENT>return cl_environments<EOL> | Get a list of device environments that is suitable for use in MOT.
Basically this gets the total list of devices using all_devices() and applies a filter on it.
This filter does the following:
1) if the 'AMD Accelerated Parallel Processing' platform is available, remove all environments using the 'Clover' platform.
More things may be implemented in the future.
Args:
preferred_device_type (str): the preferred device type, one of 'CPU', 'GPU' or 'APU'.
If no devices of this type can be found, we will use any other device available.
Returns:
list of CLEnvironment: List with the CL device environments. | f15558:c1:m2 |
def set_mot_float_dtype(self, mot_float_dtype): | raise NotImplementedError()<EOL> | Set the numpy data type corresponding to the ``mot_float_type`` ctype.
This is set just prior to using this kernel data in the kernel.
Args:
mot_float_dtype (dtype): the numpy data type that is to correspond with the ``mot_float_type`` used in the
kernels. | f15559:c0:m0 |
def get_data(self): | raise NotImplementedError()<EOL> | Get the underlying data of this kernel data object.
Returns:
dict, ndarray, scalar: the underlying data object, can return None if this input data has no actual data. | f15559:c0:m1 |
def get_scalar_arg_dtypes(self): | raise NotImplementedError()<EOL> | Get the numpy data types we should report in the kernel call for scalar elements.
If we are inserting scalars in the kernel we need to provide the CL runtime with the correct data type
of the function. If the kernel parameter is not a scalar, we should return None. If the kernel data does not
require a kernel input parameter, return an empty list.
This list should match the list of parameters of :meth:`get_kernel_parameters`.
Returns:
List[Union[dtype, None]]: the numpy data type for this element, or None if this is not a scalar. | f15559:c0:m2 |
def enqueue_readouts(self, queue, buffers, range_start, range_end): | raise NotImplementedError()<EOL> | Enqueue readouts for this kernel input data object.
This should add non-blocking readouts to the given queue.
Args:
queue (opencl queue): the queue on which to add the unmap buffer command
buffers (List[pyopencl._cl.Buffer.Buffer]): the list of buffers corresponding to this kernel data.
These buffers are obtained earlier from the method :meth:`get_kernel_inputs`.
range_start (int): the start of the range to read out (in the first dimension)
range_end (int): the end of the range to read out (in the first dimension) | f15559:c0:m3 |
def get_type_definitions(self): | raise NotImplementedError()<EOL> | Get possible type definitions needed to load this data into the kernel.
These types are defined at the head of the CL script, before any functions.
Returns:
str: a CL compatible type declaration. This can for example be used for defining struct types.
If no extra types are needed, this function should return the empty string. | f15559:c0:m4 |
def initialize_variable(self, variable_name, kernel_param_name, problem_id_substitute, address_space): | raise NotImplementedError()<EOL> | Initialize the variable inside the kernel function.
This should initialize the variable as such that we can use it when calling the function acting on this data.
Args:
variable_name (str): the name for this variable
kernel_param_name (str): the kernel parameter name (given in :meth:`get_kernel_parameters`).
problem_id_substitute (str): the substitute for the ``{problem_id}`` in the kernel data info elements.
address_space (str): the desired address space for this variable, defined by the parameter of the called
function.
Returns:
str: the necessary CL code to initialize this variable | f15559:c0:m5 |
def get_function_call_input(self, variable_name, kernel_param_name, problem_id_substitute, address_space): | raise NotImplementedError()<EOL> | How this kernel data is used as input to the function that operates on the data.
Args:
variable_name (str): the name for this variable
kernel_param_name (str): the kernel parameter name (given in :meth:`get_kernel_parameters`).
problem_id_substitute (str): the substitute for the ``{problem_id}`` in the kernel data info elements.
address_space (str): the desired address space for this variable, defined by the parameter of the called
function.
Returns:
str: a single string representing how this kernel data is used as input to the function we are applying | f15559:c0:m6 |
def post_function_callback(self, variable_name, kernel_param_name, problem_id_substitute, address_space): | raise NotImplementedError()<EOL> | A callback to update or change data after the function has been applied
Args:
variable_name (str): the name for this variable
kernel_param_name (str): the kernel parameter name (given in :meth:`get_kernel_parameters`).
problem_id_substitute (str): the substitute for the ``{problem_id}`` in the kernel data info elements.
address_space (str): the desired address space for this variable, defined by the parameter of the called
function.
Returns:
str: CL code that needs to be run after the function has been applied. | f15559:c0:m7 |
def get_struct_declaration(self, name): | raise NotImplementedError()<EOL> | Get the variable declaration of this data object for use in a Struct.
Args:
name (str): the name for this data
Returns:
str: the variable declaration of this kernel data object | f15559:c0:m8 |
def get_struct_initialization(self, variable_name, kernel_param_name, problem_id_substitute): | raise NotImplementedError()<EOL> | Initialize the variable inside a struct.
This should initialize the variable for use in a struct (should correspond to :meth:`get_struct_declaration`).
Args:
variable_name (str): the name for this variable
kernel_param_name (str): the kernel parameter name (given in :meth:`get_kernel_parameters`).
problem_id_substitute (str): the substitute for the ``{problem_id}`` in the kernel data info elements.
address_space (str): the desired address space for this variable, defined by the parameter of the called
function.
Returns:
str: the necessary CL code to initialize this variable | f15559:c0:m9 |
def get_kernel_parameters(self, kernel_param_name): | raise NotImplementedError()<EOL> | Get the kernel argument declarations for this kernel data.
Args:
kernel_param_name (str): the parameter name for the parameter in the kernel function call
Returns:
List[str]: a list of kernel parameter declarations, or an empty list | f15559:c0:m10 |
def get_kernel_inputs(self, cl_context, workgroup_size): | raise NotImplementedError()<EOL> | Get the kernel input data matching the list of parameters of :meth:`get_kernel_parameters`.
Since the kernels follow the map/unmap paradigm, make sure to use the ``USE_HOST_PTR`` when making
writable data objects.
Args:
cl_context (pyopencl.Context): the CL context in which we are working.
workgroup_size (int): the workgroup size the kernel will use.
Returns:
List: a list of buffers, local memory objects, scalars, etc., anything that can be loaded into the kernel.
If no data should be entered, return an empty list. | f15559:c0:m11 |
def get_nmr_kernel_inputs(self): | raise NotImplementedError()<EOL> | Get the number of kernel inputs this input data object has.
Returns:
int: the number of kernel inputs | f15559:c0:m12 |
def __init__(self, elements, ctype, anonymous=False): | self._elements = OrderedDict(sorted(elements.items()))<EOL>for key in list(self._elements):<EOL><INDENT>value = self._elements[key]<EOL>if isinstance(value, Mapping):<EOL><INDENT>self._elements[key] = Struct(value, key, anonymous=True)<EOL><DEDENT><DEDENT>self._ctype = ctype<EOL>self._anonymous = anonymous<EOL> | A kernel data element for structs.
Please be aware that structs will always be passed as a pointer to the calling function.
Args:
elements (Dict[str, Union[Dict, KernelData]]): the kernel data elements to load into the kernel
Can be a nested dictionary, in which case we load the nested elements as anonymous structs.
Alternatively, you can nest Structs in Structs, yielding named structs.
ctype (str): the name of this structure
anonymous (boolean): if this struct is to be loaded anonymously, this is only meant for nested Structs. | f15559:c1:m0 |
def __init__(self, value, ctype=None): | if isinstance(value, str) and value == '<STR_LIT>':<EOL><INDENT>self._value = np.inf<EOL><DEDENT>elif isinstance(value, str) and value == '<STR_LIT>':<EOL><INDENT>self._value = -np.inf<EOL><DEDENT>else:<EOL><INDENT>self._value = np.array(value)<EOL><DEDENT>self._ctype = ctype or dtype_to_ctype(self._value.dtype)<EOL>self._mot_float_dtype = None<EOL> | A kernel input scalar.
This will insert the given value directly into the kernel's source code, and will not load it as a buffer.
Args:
value (number): the number to insert into the kernel as a scalar.
ctype (str): the desired c-type for in use in the kernel, like ``int``, ``float`` or ``mot_float_type``.
If None it is implied from the value. | f15559:c2:m0 |
def __init__(self, ctype, nmr_items=None): | self._ctype = ctype<EOL>self._mot_float_dtype = None<EOL>if nmr_items is None:<EOL><INDENT>self._size_func = lambda workgroup_size: workgroup_size<EOL><DEDENT>elif isinstance(nmr_items, numbers.Number):<EOL><INDENT>self._size_func = lambda _: nmr_items<EOL><DEDENT>else:<EOL><INDENT>self._size_func = nmr_items<EOL><DEDENT> | Indicates that a local memory array of the indicated size must be loaded as kernel input data.
By default, this will create a local memory object the size of the local work group.
Args:
ctype (str): the desired c-type for this local memory object, like ``int``, ``float`` or ``mot_float_type``.
nmr_items (int or Callable[[int], int]): either the size directly or a function that can calculate the
required local memory size given the work group size. This will independently be multiplied with the
item size of the ctype for the final size in bytes. | f15559:c3:m0 |
def __init__(self, nmr_items, ctype): | self._ctype = ctype<EOL>self._mot_float_dtype = None<EOL>self._nmr_items = nmr_items<EOL> | Adds a private memory array of the indicated size to the kernel data elements.
This is useful if you want to have private memory arrays in kernel data structs.
Args:
nmr_items (int): the size of the private memory array
ctype (str): the desired c-type for this local memory object, like ``int``, ``float`` or ``mot_float_type``. | f15559:c4:m0 |
def __init__(self, data, ctype=None, mode='<STR_LIT:r>', offset_str=None, ensure_zero_copy=False, as_scalar=False): | self._is_readable = '<STR_LIT:r>' in mode<EOL>self._is_writable = '<STR_LIT:w>' in mode<EOL>self._requirements = ['<STR_LIT:C>', '<STR_LIT:A>', '<STR_LIT:O>']<EOL>if self._is_writable:<EOL><INDENT>self._requirements.append('<STR_LIT>')<EOL><DEDENT>self._data = np.require(data, requirements=self._requirements)<EOL>if ctype and not ctype.startswith('<STR_LIT>'):<EOL><INDENT>self._data = convert_data_to_dtype(self._data, ctype)<EOL><DEDENT>self._offset_str = offset_str<EOL>self._ctype = ctype or dtype_to_ctype(self._data.dtype)<EOL>self._mot_float_dtype = None<EOL>self._backup_data_reference = None<EOL>self._ensure_zero_copy = ensure_zero_copy<EOL>self._as_scalar = as_scalar<EOL>self._data_length = <NUM_LIT:1><EOL>if len(self._data.shape):<EOL><INDENT>self._data_length = self._data.strides[<NUM_LIT:0>] // self._data.itemsize<EOL><DEDENT>if self._offset_str == '<STR_LIT:0>' or self._offset_str == <NUM_LIT:0>:<EOL><INDENT>self._data_length = self._data.size<EOL><DEDENT>if self._as_scalar and len(np.squeeze(self._data).shape) > <NUM_LIT:1>:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>if self._is_writable and self._ensure_zero_copy and self._data is not data:<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>'<STR_LIT>')<EOL><DEDENT> | Loads the given array as a buffer into the kernel.
By default, this will try to offset the data in the kernel by the stride of the first dimension multiplied
with the problem id. For example, if an (n, m) matrix is provided, this will offset the data
by ``{problem_id} * m``.
This class will adapt the data to match the ctype (if necessary) and it might copy the data as a consecutive
array for direct memory access by the CL environment. Depending on those transformations, a copy of the original
array may be made. As such, if ``is_writable`` would have been set, the return values might be written to
a different array. To retrieve the output data after kernel execution, use the method :meth:`get_data`.
Alternatively, set ``ensure_zero_copy`` to True, this ensures that the return values are written to the
same reference by raising a ValueError if the data has to be copied to be used in the kernel.
Args:
data (ndarray): the data to load in the kernel
ctype (str): the desired c-type for in use in the kernel, like ``int``, ``float`` or ``mot_float_type``.
If None it is implied from the provided data.
mode (str): one of 'r', 'w' or 'rw', for respectively read, write or read and write. This sets the
mode of how the data is loaded into the compute device's memory.
offset_str (str): the offset definition, can use ``{problem_id}`` for multiplication purposes. Set to 0
for no offset.
ensure_zero_copy (boolean): only used if ``is_writable`` is set to True. If set, we guarantee that the
return values are written to the same input array. This allows the user of this class to use their
reference to the underlying data, relieving the user of having to use :meth:`get_data`.
as_scalar (boolean): if given and if the data is only a 1d, we will load the value as a scalar in the
data struct. As such, one does not need to evaluate as a pointer. | f15559:c5:m0 |
def __init__(self, shape, ctype, offset_str=None, mode='<STR_LIT:w>'): | super().__init__(np.zeros(shape, dtype=ctype_to_dtype(ctype)), ctype, offset_str=offset_str,<EOL>mode=mode, as_scalar=False)<EOL> | Allocate an output buffer of the given shape.
This is meant to quickly allocate a buffer large enough to hold the data requested. After running an OpenCL
kernel you can get the written data using the method :meth:`get_data`.
Args:
shape (int or tuple): the shape of the output array
offset_str (str): the offset definition, can use ``{problem_id}`` for multiplication purposes. Set to 0
for no offset.
mode (str): one of 'r', 'w' or 'rw', for respectively read, write or read and write. This sets the
mode of how the data is loaded into the compute device's memory. | f15559:c6:m0 |
def __init__(self, elements, ctype, address_space='<STR_LIT>'): | self._elements = elements<EOL>self._ctype = ctype<EOL>self._address_space = address_space<EOL>if self._address_space == '<STR_LIT>':<EOL><INDENT>self._composite_array = PrivateMemory(len(self._elements), self._ctype)<EOL><DEDENT>elif self._address_space == '<STR_LIT>':<EOL><INDENT>self._composite_array = LocalMemory(self._ctype, len(self._elements))<EOL><DEDENT>elif self._address_space == '<STR_LIT>':<EOL><INDENT>self._composite_array = Zeros(len(self._elements), self._ctype, offset_str='<STR_LIT:0>', mode='<STR_LIT>')<EOL><DEDENT> | An array filled with the given kernel data elements.
Each of the given elements should be a :class:`Scalar` or an :class:`Array` with the property `as_scalar`
set to True. We will load each value of the given elements into a private array.
Args:
elements (List[KernelData]): the kernel data elements to load into the private array
ctype (str): the data type of this structure
address_space (str): the address space for the allocation of the main array | f15559:c7:m0 |
def add_include_guards(cl_str, guard_name=None): | if not guard_name:<EOL><INDENT>guard_name = '<STR_LIT>' + hashlib.md5(cl_str.encode('<STR_LIT:utf-8>')).hexdigest()<EOL><DEDENT>return '''<STR_LIT>'''.format(func_str=cl_str, guard_name=guard_name)<EOL> | Add include guards to the given string.
If you are including the same body of CL code multiple times in a Kernel, it is important to add include
guards (https://en.wikipedia.org/wiki/Include_guard) around them to prevent the kernel from registering the function
twice.
Args:
cl_str (str): the piece of CL code as a string to which we add the include guards
guard_name (str): the name of the C pre-processor guard. If not given we use the MD5 hash of the
given cl string.
Returns:
str: the same string but with include guards around it. | f15561:m0 |
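The masked string literals (`<STR_LIT>`) hide the library's exact guard template; a minimal sketch of the same idea, assuming a standard C-preprocessor guard layout and an assumed `GUARD_` prefix:

```python
import hashlib

def add_include_guards(cl_str, guard_name=None):
    # Default guard name: a prefix plus the MD5 hash of the code itself,
    # so identical snippets share one guard. The prefix and the template
    # below are assumptions; the originals are masked in the source.
    if not guard_name:
        guard_name = 'GUARD_' + hashlib.md5(cl_str.encode('utf-8')).hexdigest()
    return ('#ifndef {guard_name}\n'
            '#define {guard_name}\n'
            '{func_str}\n'
            '#endif // {guard_name}\n').format(func_str=cl_str, guard_name=guard_name)

guarded = add_include_guards('float foo(float x);', guard_name='FOO_H')
print(guarded)
```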
def dtype_to_ctype(dtype): | from pyopencl.tools import dtype_to_ctype<EOL>return dtype_to_ctype(dtype)<EOL> | Get the CL type of the given numpy data type.
Args:
dtype (np.dtype): the numpy data type
Returns:
str: the CL type string for the corresponding type | f15561:m1 |
def ctype_to_dtype(cl_type, mot_float_type='<STR_LIT:float>'): | if is_vector_ctype(cl_type):<EOL><INDENT>raw_type, vector_length = split_vector_ctype(cl_type)<EOL>if raw_type == '<STR_LIT>':<EOL><INDENT>if is_vector_ctype(mot_float_type):<EOL><INDENT>raw_type, _ = split_vector_ctype(mot_float_type)<EOL><DEDENT>else:<EOL><INDENT>raw_type = mot_float_type<EOL><DEDENT><DEDENT>vector_type = raw_type + str(vector_length)<EOL>return getattr(cl_array.vec, vector_type)<EOL><DEDENT>else:<EOL><INDENT>if cl_type == '<STR_LIT>':<EOL><INDENT>cl_type = mot_float_type<EOL><DEDENT>data_types = [<EOL>('<STR_LIT>', np.int8),<EOL>('<STR_LIT>', np.uint8),<EOL>('<STR_LIT>', np.int16),<EOL>('<STR_LIT>', np.uint16),<EOL>('<STR_LIT:int>', np.int32),<EOL>('<STR_LIT>', np.uint32),<EOL>('<STR_LIT>', np.int64),<EOL>('<STR_LIT>', np.uint64),<EOL>('<STR_LIT:float>', np.float32),<EOL>('<STR_LIT>', np.float64),<EOL>]<EOL>for ctype, dtype in data_types:<EOL><INDENT>if ctype == cl_type:<EOL><INDENT>return dtype<EOL><DEDENT><DEDENT><DEDENT> | Get the numpy dtype of the given cl_type string.
Args:
cl_type (str): the CL data type to match, for example 'float' or 'float4'.
mot_float_type (str): the C name of the ``mot_float_type``. The dtype will be looked up recursively.
Returns:
dtype: the numpy datatype | f15561:m2 |
def convert_data_to_dtype(data, data_type, mot_float_type='<STR_LIT:float>'): | scalar_dtype = ctype_to_dtype(data_type, mot_float_type)<EOL>if isinstance(data, numbers.Number):<EOL><INDENT>data = scalar_dtype(data)<EOL><DEDENT>if is_vector_ctype(data_type):<EOL><INDENT>shape = data.shape<EOL>dtype = ctype_to_dtype(data_type, mot_float_type)<EOL>ve = np.zeros(shape[:-<NUM_LIT:1>], dtype=dtype)<EOL>if len(shape) == <NUM_LIT:1>:<EOL><INDENT>for vector_ind in range(shape[<NUM_LIT:0>]):<EOL><INDENT>ve[<NUM_LIT:0>][vector_ind] = data[vector_ind]<EOL><DEDENT><DEDENT>elif len(shape) == <NUM_LIT:2>:<EOL><INDENT>for i in range(data.shape[<NUM_LIT:0>]):<EOL><INDENT>for vector_ind in range(data.shape[<NUM_LIT:1>]):<EOL><INDENT>ve[i][vector_ind] = data[i, vector_ind]<EOL><DEDENT><DEDENT><DEDENT>elif len(shape) == <NUM_LIT:3>:<EOL><INDENT>for i in range(data.shape[<NUM_LIT:0>]):<EOL><INDENT>for j in range(data.shape[<NUM_LIT:1>]):<EOL><INDENT>for vector_ind in range(data.shape[<NUM_LIT:2>]):<EOL><INDENT>ve[i, j][vector_ind] = data[i, j, vector_ind]<EOL><DEDENT><DEDENT><DEDENT><DEDENT>return np.require(ve, requirements=['<STR_LIT:C>', '<STR_LIT:A>', '<STR_LIT:O>'])<EOL><DEDENT>return np.require(data, scalar_dtype, ['<STR_LIT:C>', '<STR_LIT:A>', '<STR_LIT:O>'])<EOL> | Convert the given input data to the correct numpy type.
Args:
data (ndarray): The value to convert to the correct numpy type
data_type (str): the data type we need to convert the data to
mot_float_type (str): the data type of the current ``mot_float_type``
Returns:
ndarray: the input data but then converted to the desired numpy data type | f15561:m3 |
def split_vector_ctype(ctype): | if not is_vector_ctype(ctype):<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>for vector_length in [<NUM_LIT:2>, <NUM_LIT:3>, <NUM_LIT:4>, <NUM_LIT:8>, <NUM_LIT:16>]:<EOL><INDENT>if ctype.endswith(str(vector_length)):<EOL><INDENT>vector_str_len = len(str(vector_length))<EOL>return ctype[:-vector_str_len], int(ctype[-vector_str_len:])<EOL><DEDENT><DEDENT> | Split a vector ctype into a raw ctype and the vector length.
If the given ctype is not a vector type, we raise an error.
Args:
ctype (str): the ctype to possibly split into a raw ctype and the vector length
Returns:
tuple: the raw ctype and the vector length | f15561:m4 |
def is_vector_ctype(ctype): | return any(ctype.endswith(str(i)) for i in [<NUM_LIT:2>, <NUM_LIT:3>, <NUM_LIT:4>, <NUM_LIT:8>, <NUM_LIT:16>])<EOL> | Test if the given ctype is a vector type. That is, if it ends with 2, 3, 4, 8 or 16.
Args:
ctype (str): the ctype to test if it is an OpenCL vector type
Returns:
bool: if it is a vector type or not | f15561:m5 |
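Taken together, the two helpers above can be sketched in plain Python. The valid OpenCL vector lengths (2, 3, 4, 8, 16) are spelled out here because the numeric literals are masked in the source:

```python
def is_vector_ctype(ctype):
    # An OpenCL vector ctype ends in one of the valid vector lengths.
    return any(ctype.endswith(str(i)) for i in (2, 3, 4, 8, 16))

def split_vector_ctype(ctype):
    # Split e.g. 'float4' into ('float', 4); raise for non-vector types.
    # Longest suffixes are checked first so 'float16' matches 16, not 6.
    if not is_vector_ctype(ctype):
        raise ValueError('Given ctype is not a vector type.')
    for vector_length in (16, 8, 4, 3, 2):
        suffix = str(vector_length)
        if ctype.endswith(suffix):
            return ctype[:-len(suffix)], vector_length

print(split_vector_ctype('float4'))    # ('float', 4)
print(split_vector_ctype('double16'))  # ('double', 16)
```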
def device_type_from_string(cl_device_type_str): | cl_device_type_str = cl_device_type_str.upper()<EOL>if hasattr(cl.device_type, cl_device_type_str):<EOL><INDENT>return getattr(cl.device_type, cl_device_type_str)<EOL><DEDENT>return None<EOL> | Converts values like ``gpu`` to a pyopencl device type string.
Supported values are: ``accelerator``, ``cpu``, ``custom``, ``gpu``. If ``all`` is given, None is returned.
Args:
cl_device_type_str (str): The string we want to convert to a device type.
Returns:
cl.device_type: the pyopencl device type. | f15561:m6 |
def device_supports_double(cl_device): | dev_extensions = cl_device.extensions.strip().split('<STR_LIT:U+0020>')<EOL>return '<STR_LIT>' in dev_extensions<EOL> | Check if the given CL device supports double
Args:
cl_device (pyopencl cl device): The device to check if it supports double.
Returns:
boolean: True if the given cl_device supports double, false otherwise. | f15561:m7 |
def get_float_type_def(double_precision, include_complex=True): | if include_complex:<EOL><INDENT>with open(os.path.abspath(resource_filename('<STR_LIT>', '<STR_LIT>')), '<STR_LIT:r>') as f:<EOL><INDENT>complex_number_support = f.read()<EOL><DEDENT><DEDENT>else:<EOL><INDENT>complex_number_support = '<STR_LIT>'<EOL><DEDENT>scipy_constants = '''<STR_LIT>'''<EOL>if double_precision:<EOL><INDENT>return '''<STR_LIT>''' + scipy_constants + complex_number_support<EOL><DEDENT>else:<EOL><INDENT>return '''<STR_LIT>''' + scipy_constants + complex_number_support<EOL><DEDENT> | Get the model floating point type definition.
Args:
double_precision (boolean): if True we will use the double type for the mot_float_type type.
Else, we will use the single precision float type for the mot_float_type type.
include_complex (boolean): if we include support for complex numbers
Returns:
str: defines the mot_float_type types, the epsilon and the MIN and MAX values. | f15561:m8 |
def topological_sort(data): | def check_self_dependencies(input_data):<EOL><INDENT>"""<STR_LIT>"""<EOL>for k, v in input_data.items():<EOL><INDENT>if k in v:<EOL><INDENT>raise ValueError('<STR_LIT>'.format(k))<EOL><DEDENT><DEDENT><DEDENT>def prepare_input_data(input_data):<EOL><INDENT>"""<STR_LIT>"""<EOL>return {k: set(v) for k, v in input_data.items()}<EOL><DEDENT>def find_items_without_dependencies(input_data):<EOL><INDENT>"""<STR_LIT>"""<EOL>return list(reduce(set.union, input_data.values()) - set(input_data.keys()))<EOL><DEDENT>def add_empty_dependencies(data):<EOL><INDENT>items_without_dependencies = find_items_without_dependencies(data)<EOL>data.update({item: set() for item in items_without_dependencies})<EOL><DEDENT>def get_sorted(input_data):<EOL><INDENT>data = input_data<EOL>while True:<EOL><INDENT>ordered = set(item for item, dep in data.items() if len(dep) == <NUM_LIT:0>)<EOL>if not ordered:<EOL><INDENT>break<EOL><DEDENT>yield ordered<EOL>data = {item: (dep - ordered) for item, dep in data.items() if item not in ordered}<EOL><DEDENT>if len(data) != <NUM_LIT:0>:<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>'<STR_LIT>'.format('<STR_LIT:U+002CU+0020>'.join(repr(x) for x in data.items())))<EOL><DEDENT><DEDENT>check_self_dependencies(data)<EOL>if not len(data):<EOL><INDENT>return []<EOL><DEDENT>data_copy = prepare_input_data(data)<EOL>add_empty_dependencies(data_copy)<EOL>result = []<EOL>for d in get_sorted(data_copy):<EOL><INDENT>try:<EOL><INDENT>d = sorted(d)<EOL><DEDENT>except TypeError:<EOL><INDENT>d = list(d)<EOL><DEDENT>result.extend(d)<EOL><DEDENT>return result<EOL> | Topological sort the given dictionary structure.
Args:
data (dict); dictionary structure where the value is a list of dependencies for that given key.
For example: ``{'a': (), 'b': ('a',)}``, where ``a`` depends on nothing and ``b`` depends on ``a``.
Returns:
tuple: the dependencies in constructor order | f15561:m9 |
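The behaviour described above can be sketched as a compact Kahn-style sort; the error messages here are illustrative, not the library's own masked strings:

```python
from functools import reduce

def topological_sort(data):
    # data maps each item to the collection of items it depends on.
    data = {k: set(v) for k, v in data.items()}
    for k, deps in data.items():
        if k in deps:
            raise ValueError('Item {!r} depends on itself.'.format(k))
    # Items that only appear as dependencies get an empty dependency set.
    only_deps = reduce(set.union, data.values(), set()) - set(data.keys())
    data.update({item: set() for item in only_deps})
    result = []
    while data:
        # Take everything with no remaining dependencies, sorted for
        # deterministic output, then remove it from the remaining sets.
        ready = sorted(k for k, deps in data.items() if not deps)
        if not ready:
            raise ValueError('A cyclic dependency exists amongst the remaining items.')
        result.extend(ready)
        data = {k: deps - set(ready) for k, deps in data.items() if k not in ready}
    return result

print(topological_sort({'b': ('a',), 'c': ('a', 'b')}))  # ['a', 'b', 'c']
```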
def is_scalar(value): | return np.isscalar(value) or (isinstance(value, np.ndarray) and (len(np.squeeze(value).shape) == <NUM_LIT:0>))<EOL> | Test if the given value is a scalar.
This function also works with memory mapped array values, in contrast to the numpy is_scalar method.
Args:
value: the value to test for being a scalar value
Returns:
boolean: if the given value is a scalar or not | f15561:m10 |
def all_elements_equal(value): | if is_scalar(value):<EOL><INDENT>return True<EOL><DEDENT>return np.array(value == value.flatten()[<NUM_LIT:0>]).all()<EOL> | Checks if all elements in the given value are equal to each other.
If the input is a single value the result is trivial. If not, we compare all the values to see
if they are exactly the same.
Args:
value (ndarray or number): a numpy array or a single number.
Returns:
bool: true if all elements are equal to each other, false otherwise | f15561:m11 |
def get_single_value(value): | if not all_elements_equal(value):<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>if is_scalar(value):<EOL><INDENT>return value<EOL><DEDENT>return value.item(<NUM_LIT:0>)<EOL> | Get a single value out of the given value.
This is meant to be used after a call to :func:`all_elements_equal` that returned True. With this
function we return a single number from the input value.
Args:
value (ndarray or number): a numpy array or a single number.
Returns:
number: a single number from the input
Raises:
ValueError: if not all elements are equal | f15561:m12 |
@contextmanager<EOL>def all_logging_disabled(highest_level=logging.CRITICAL): | previous_level = logging.root.manager.disable<EOL>logging.disable(highest_level)<EOL>try:<EOL><INDENT>yield<EOL><DEDENT>finally:<EOL><INDENT>logging.disable(previous_level)<EOL><DEDENT> | Disable all logging temporarily.
A context manager that will prevent any logging messages triggered during the body from being processed.
Args:
highest_level: the maximum logging level that is being blocked | f15561:m13 |
def cartesian(arrays, out=None): | arrays = [np.asarray(x) for x in arrays]<EOL>dtype = arrays[<NUM_LIT:0>].dtype<EOL>nmr_elements = np.prod([x.size for x in arrays])<EOL>if out is None:<EOL><INDENT>out = np.zeros([nmr_elements, len(arrays)], dtype=dtype)<EOL><DEDENT>m = nmr_elements // arrays[<NUM_LIT:0>].size<EOL>out[:, <NUM_LIT:0>] = np.repeat(arrays[<NUM_LIT:0>], m)<EOL>if arrays[<NUM_LIT:1>:]:<EOL><INDENT>cartesian(arrays[<NUM_LIT:1>:], out=out[<NUM_LIT:0>:m,<NUM_LIT:1>:])<EOL>for j in range(<NUM_LIT:1>, arrays[<NUM_LIT:0>].size):<EOL><INDENT>out[j*m:(j+<NUM_LIT:1>)*m, <NUM_LIT:1>:] = out[<NUM_LIT:0>:m, <NUM_LIT:1>:]<EOL><DEDENT><DEDENT>return out<EOL> | Generate a cartesian product of input arrays.
Args:
arrays (list of array-like): 1-D arrays to form the cartesian product of.
out (ndarray): Array to place the cartesian product in.
Returns:
ndarray: 2-D array of shape (M, len(arrays)) containing cartesian products formed of input arrays.
Examples:
>>> cartesian(([1, 2, 3], [4, 5], [6, 7]))
array([[1, 4, 6],
[1, 4, 7],
[1, 5, 6],
[1, 5, 7],
[2, 4, 6],
[2, 4, 7],
[2, 5, 6],
[2, 5, 7],
[3, 4, 6],
[3, 4, 7],
[3, 5, 6],
[3, 5, 7]]) | f15561:m14 |
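The row ordering in the docstring example (first array varying slowest) can be reproduced with a pure-Python sketch via `itertools.product`; the library version instead fills a NumPy array recursively, so this equivalent is for illustration only:

```python
import itertools

def cartesian(arrays):
    # itertools.product varies the last array fastest, which matches
    # the ordering shown in the docstring example above.
    return [list(combo) for combo in itertools.product(*arrays)]

print(cartesian([[1, 2, 3], [4, 5], [6, 7]])[:3])  # [[1, 4, 6], [1, 4, 7], [1, 5, 6]]
```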
def split_in_batches(nmr_elements, max_batch_size): | offset = <NUM_LIT:0><EOL>elements_left = nmr_elements<EOL>while elements_left > <NUM_LIT:0>:<EOL><INDENT>next_batch = (offset, offset + min(elements_left, max_batch_size))<EOL>yield next_batch<EOL>batch_size = min(elements_left, max_batch_size)<EOL>elements_left -= batch_size<EOL>offset += batch_size<EOL><DEDENT> | Split the total number of elements into batches of the specified maximum size.
Examples::
split_in_batches(30, 8) -> [(0, 8), (8, 16), (16, 24), (24, 30)]
for batch_start, batch_end in split_in_batches(2000, 100):
array[batch_start:batch_end]
Yields:
tuple: the start and end point of the next batch | f15561:m15 |
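A minimal generator with the same contract — half-open (start, end) index pairs, as the body computes them:

```python
def split_in_batches(nmr_elements, max_batch_size):
    # Yield half-open (start, end) index pairs covering [0, nmr_elements).
    offset = 0
    while offset < nmr_elements:
        end = offset + min(nmr_elements - offset, max_batch_size)
        yield offset, end
        offset = end

print(list(split_in_batches(30, 8)))  # [(0, 8), (8, 16), (16, 24), (24, 30)]
```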
def covariance_to_correlations(covariance): | diagonal_ind = np.arange(covariance.shape[<NUM_LIT:1>])<EOL>diagonal_els = covariance[:, diagonal_ind, diagonal_ind]<EOL>result = covariance / np.sqrt(diagonal_els[:, :, None] * diagonal_els[:, None, :])<EOL>result[np.isinf(result)] = <NUM_LIT:0><EOL>return np.clip(np.nan_to_num(result), -<NUM_LIT:1>, <NUM_LIT:1>)<EOL> | Transform a covariance matrix into a correlations matrix.
This can be seen as dividing the covariance matrix by the square root of the outer product of its diagonal.
As post processing we replace the infinities and the NaNs with zeros and clip the result to [-1, 1].
Args:
covariance (ndarray): a matrix of shape (n, p, p) with for n problems the covariance matrix of shape (p, p).
Returns:
ndarray: the correlations matrix | f15561:m16 |
def multiprocess_mapping(func, iterable): | if os.name == '<STR_LIT>': <EOL><INDENT>return list(map(func, iterable))<EOL><DEDENT>try:<EOL><INDENT>p = multiprocessing.Pool()<EOL>return_data = list(p.imap(func, iterable))<EOL>p.close()<EOL>p.join()<EOL>return return_data<EOL><DEDENT>except OSError:<EOL><INDENT>return list(map(func, iterable))<EOL><DEDENT> | Multiprocess mapping the given function on the given iterable.
This only works on Linux and macOS systems since Windows has no forking capability. On Windows we fall back on
single processing. Also, if we reach memory limits we fall back on single cpu processing.
Args:
func (func): the function to apply
iterable (iterable): the iterable with the elements we want to apply the function on | f15561:m17 |
def parse_cl_function(cl_code, dependencies=()): | from mot.lib.cl_function import SimpleCLFunction<EOL>def separate_cl_functions(input_str):<EOL><INDENT>"""<STR_LIT>"""<EOL>class Semantics:<EOL><INDENT>def __init__(self):<EOL><INDENT>self._functions = []<EOL><DEDENT>def result(self, ast):<EOL><INDENT>return self._functions<EOL><DEDENT>def arglist(self, ast):<EOL><INDENT>return '<STR_LIT>'.format('<STR_LIT:U+002CU+0020>'.join(ast))<EOL><DEDENT>def function(self, ast):<EOL><INDENT>def join(items):<EOL><INDENT>result = '<STR_LIT>'<EOL>for item in items:<EOL><INDENT>if isinstance(item, str):<EOL><INDENT>result += item<EOL><DEDENT>else:<EOL><INDENT>result += join(item)<EOL><DEDENT><DEDENT>return result<EOL><DEDENT>self._functions.append(join(ast).strip())<EOL>return ast<EOL><DEDENT><DEDENT>return _extract_cl_functions_parser.parse(input_str, semantics=Semantics())<EOL><DEDENT>functions = separate_cl_functions(cl_code)<EOL>return SimpleCLFunction.from_string(functions[-<NUM_LIT:1>], dependencies=list(dependencies or []) + [<EOL>SimpleCLFunction.from_string(s) for s in functions[:-<NUM_LIT:1>]])<EOL> | Parse the given OpenCL string to a single SimpleCLFunction.
If the string contains more than one function, we will return only the last, with all the others added as
dependencies.
Args:
cl_code (str): the input string containing one or more functions.
dependencies (Iterable[CLCodeObject]): The list of CL libraries this function depends on
Returns:
mot.lib.cl_function.SimpleCLFunction: the CL function for the last function in the given strings. | f15561:m18 |
def split_cl_function(cl_str): | class Semantics:<EOL><INDENT>def __init__(self):<EOL><INDENT>self._return_type = '<STR_LIT>'<EOL>self._function_name = '<STR_LIT>'<EOL>self._parameter_list = []<EOL>self._cl_body = '<STR_LIT>'<EOL><DEDENT>def result(self, ast):<EOL><INDENT>return self._return_type, self._function_name, self._parameter_list, self._cl_body<EOL><DEDENT>def address_space(self, ast):<EOL><INDENT>self._return_type = ast.strip() + '<STR_LIT:U+0020>'<EOL>return ast<EOL><DEDENT>def data_type(self, ast):<EOL><INDENT>self._return_type += '<STR_LIT>'.join(ast).strip()<EOL>return ast<EOL><DEDENT>def function_name(self, ast):<EOL><INDENT>self._function_name = ast.strip()<EOL>return ast<EOL><DEDENT>def arglist(self, ast):<EOL><INDENT>if ast != '<STR_LIT>':<EOL><INDENT>self._parameter_list = ast<EOL><DEDENT>return ast<EOL><DEDENT>def body(self, ast):<EOL><INDENT>def join(items):<EOL><INDENT>result = '<STR_LIT>'<EOL>for item in items:<EOL><INDENT>if isinstance(item, str):<EOL><INDENT>result += item<EOL><DEDENT>else:<EOL><INDENT>result += join(item)<EOL><DEDENT><DEDENT>return result<EOL><DEDENT>self._cl_body = join(ast).strip()[<NUM_LIT:1>:-<NUM_LIT:1>]<EOL>return ast<EOL><DEDENT><DEDENT>return _split_cl_function_parser.parse(cl_str, semantics=Semantics())<EOL> | Split an CL function into a return type, function name, parameters list and the body.
Args:
cl_str (str): the CL code to parse and split into components
Returns:
tuple: string elements for the return type, function name, parameter list and the body | f15561:m19 |
def apply_cl_function(cl_function, kernel_data, nmr_instances, use_local_reduction=False, cl_runtime_info=None):
    """Run the given function/procedure on the given set of data.

    This function will wrap the given CL function in a kernel call and execute it for every data instance,
    using the provided kernel data. It respects the read/write settings of the kernel data elements, such
    that output can be written back to the corresponding kernel data elements.

    Args:
        cl_function (mot.lib.cl_function.CLFunction): the function to run on the datasets.
            Either a name/function tuple or an actual CLFunction object.
        kernel_data (dict[str: mot.lib.kernel_data.KernelData]): the data to use as input to the function.
        nmr_instances (int): the number of parallel threads to run (used as ``global_size``)
        use_local_reduction (boolean): set this to True if you want to use local memory reduction in
            your CL procedure. If set to True, we multiply the global size (given by nmr_instances)
            by the work group sizes.
        cl_runtime_info (mot.configuration.CLRuntimeInfo): the runtime information
    """
    cl_runtime_info = cl_runtime_info or CLRuntimeInfo()
    cl_environments = cl_runtime_info.cl_environments

    for param in cl_function.get_parameters():
        if param.name not in kernel_data:
            names = [param.name for param in cl_function.get_parameters()]
            missing_names = [name for name in names if name not in kernel_data]
            raise ValueError('<STR_LIT>'
                             '<STR_LIT>'.format(names, missing_names))

    if cl_function.get_return_type() != '<STR_LIT>':
        kernel_data['<STR_LIT>'] = Zeros((nmr_instances,), cl_function.get_return_type())

    workers = []
    for ind, cl_environment in enumerate(cl_environments):
        worker = _ProcedureWorker(cl_environment, cl_runtime_info.compile_flags,
                                  cl_function, kernel_data, cl_runtime_info.double_precision,
                                  use_local_reduction)
        workers.append(worker)

    def enqueue_batch(batch_size, offset):
        items_per_worker = [batch_size // len(cl_environments) for _ in range(len(cl_environments) - <NUM_LIT:1>)]
        items_per_worker.append(batch_size - sum(items_per_worker))

        for ind, worker in enumerate(workers):
            worker.calculate(offset, offset + items_per_worker[ind])
            offset += items_per_worker[ind]
            worker.cl_queue.flush()

        for worker in workers:
            worker.cl_queue.finish()
        return offset

    total_offset = <NUM_LIT:0>
    for batch_start, batch_end in split_in_batches(nmr_instances, <NUM_LIT> * len(workers)):
        total_offset = enqueue_batch(batch_end - batch_start, total_offset)

    if cl_function.get_return_type() != '<STR_LIT>':
        return kernel_data['<STR_LIT>'].get_data()
| f15562:m0
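The batch-distribution step inside ``enqueue_batch`` splits each batch evenly over the available workers and lets the last worker absorb the remainder. A minimal pure-Python sketch of that distribution (function names are illustrative, not part of the mot API):

```python
def split_over_workers(batch_size, nmr_workers):
    # Equal floor share per worker; the last worker absorbs the remainder,
    # mirroring the items_per_worker computation in enqueue_batch.
    items_per_worker = [batch_size // nmr_workers for _ in range(nmr_workers - 1)]
    items_per_worker.append(batch_size - sum(items_per_worker))
    return items_per_worker


def worker_ranges(batch_size, nmr_workers, offset=0):
    # Turn the per-worker counts into (start, end) index ranges, analogous to
    # the offsets handed to worker.calculate(offset, offset + count).
    ranges = []
    for count in split_over_workers(batch_size, nmr_workers):
        ranges.append((offset, offset + count))
        offset += count
    return ranges
```

For example, ``worker_ranges(10, 3)`` yields ``[(0, 3), (3, 6), (6, 10)]``: every instance is covered exactly once, regardless of divisibility.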
def get_cl_code(self):
    """Get the CL code for this code object and all its dependencies, with include guards.

    Returns:
        str: The CL code for inclusion in a kernel.
    """
    raise NotImplementedError()
| f15562:c0:m0
def get_return_type(self):
    """Get the type (in CL naming) of the returned value from this function.

    Returns:
        str: The return type of this CL function. (Examples: double, int, double4, ...)
    """
    raise NotImplementedError()
| f15562:c1:m0
def get_cl_function_name(self):
    """Return the calling name of the implemented CL function.

    Returns:
        str: The name of this CL function.
    """
    raise NotImplementedError()
| f15562:c1:m1
def get_parameters(self):
    """Return the list of parameters from this CL function.

    Returns:
        list of :class:`mot.lib.cl_function.CLFunctionParameter`: list of the parameters in this
            model, in the same order as in the CL function.
    """
    raise NotImplementedError()
| f15562:c1:m2
def get_signature(self):
    """Get the CL signature of this function.

    Returns:
        str: the CL code for the signature of this CL function.
    """
    raise NotImplementedError()
| f15562:c1:m3
def get_cl_code(self):
    """Get the function code for this function and all its dependencies, with include guards.

    Returns:
        str: The CL code for inclusion in a kernel.
    """
    raise NotImplementedError()
| f15562:c1:m4
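The five methods above define the abstract CLFunction interface: a concrete CL function must report its return type, name, parameters, signature, and guarded source code. A hypothetical minimal implementation might look like the following sketch (the class name, the ``square`` kernel, and the string-based parameter list are illustrative only; in mot the parameters would be CLFunctionParameter instances):

```python
class SimpleSquareFunction:
    """Illustrative concrete implementation of the abstract CLFunction interface."""

    def get_return_type(self):
        return 'double'

    def get_cl_function_name(self):
        return 'square'

    def get_parameters(self):
        # Plain strings keep this sketch self-contained; mot uses parameter objects.
        return ['double value']

    def get_signature(self):
        return '{} {}({});'.format(self.get_return_type(),
                                   self.get_cl_function_name(),
                                   ', '.join(self.get_parameters()))

    def get_cl_code(self):
        # Include guards prevent double definition when several functions
        # pull in the same dependency.
        return ('#ifndef SQUARE_CL\n'
                '#define SQUARE_CL\n'
                'double square(double value){ return value * value; }\n'
                '#endif\n')
```

The include-guard pattern in ``get_cl_code`` is what lets ``apply_cl_function`` concatenate the code of a function and all its dependencies into one kernel source without duplicate definitions.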