Dataset schema (each record below lists its fields in this order; observed value ranges in parentheses):
  body: string (length 26 to 98.2k)
  body_hash: int64 (-9,222,864,604,528,158,000 to 9,221,803,474B, i.e. roughly the full int64 range)
  docstring: string (length 1 to 16.8k)
  path: string (length 5 to 230)
  name: string (length 1 to 96)
  repository_name: string (length 7 to 89)
  lang: string (1 value class)
  body_without_docstring: string (length 20 to 98.2k)
@tf_export('math.imag', v1=['math.imag', 'imag']) @deprecation.deprecated_endpoints('imag') @dispatch.add_dispatch_support def imag(input, name=None): 'Returns the imaginary part of a complex (or real) tensor.\n\n Given a tensor `input`, this operation returns a tensor of type `float` that\n is the imaginary part...
551,267,326,716,524,900
Returns the imaginary part of a complex (or real) tensor. Given a tensor `input`, this operation returns a tensor of type `float` that is the imaginary part of each element in `input` considered as a complex number. If `input` is real, a tensor of all zeros is returned. For example: ```python x = tf.constant([-2.25 ...
tensorflow/python/ops/math_ops.py
imag
minminsun/tensorflow
python
@tf_export('math.imag', v1=['math.imag', 'imag']) @deprecation.deprecated_endpoints('imag') @dispatch.add_dispatch_support def imag(input, name=None): 'Returns the imaginary part of a complex (or real) tensor.\n\n Given a tensor `input`, this operation returns a tensor of type `float` that\n is the imaginary part...
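The record above documents `tf.math.imag`. Its per-element contract can be sketched with plain-Python complex numbers (no TensorFlow required; `imag_part` is a hypothetical scalar stand-in for the element-wise op):

```python
# Hypothetical scalar analogue of tf.math.imag: for a + bj it returns b;
# a real input is treated as a complex number with imaginary part 0.
def imag_part(z):
    return complex(z).imag

print(imag_part(-2.25 + 4.75j))  # 4.75
print(imag_part(3.0))            # 0.0
```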
@tf_export('math.angle', v1=['math.angle', 'angle']) @deprecation.deprecated_endpoints('angle') @dispatch.add_dispatch_support def angle(input, name=None): "Returns the element-wise argument of a complex (or real) tensor.\n\n Given a tensor `input`, this operation returns a tensor of type `float` that\n is the ar...
694,283,158,599,546,800
Returns the element-wise argument of a complex (or real) tensor. Given a tensor `input`, this operation returns a tensor of type `float` that is the argument of each element in `input` considered as a complex number. The elements in `input` are considered to be complex numbers of the form \\(a + bj\\), where *a* is t...
tensorflow/python/ops/math_ops.py
angle
minminsun/tensorflow
python
@tf_export('math.angle', v1=['math.angle', 'angle']) @deprecation.deprecated_endpoints('angle') @dispatch.add_dispatch_support def angle(input, name=None): "Returns the element-wise argument of a complex (or real) tensor.\n\n Given a tensor `input`, this operation returns a tensor of type `float` that\n is the ar...
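A matching plain-Python sketch of the `tf.math.angle` contract, using the standard-library `cmath.phase` (again a hypothetical scalar stand-in, not the TF kernel):

```python
import cmath

# Hypothetical scalar analogue of tf.math.angle: the argument of a + bj,
# i.e. atan2(b, a); a positive real number has argument 0.
def angle_of(z):
    return cmath.phase(complex(z))

print(angle_of(1j))   # pi/2
print(angle_of(2.0))  # 0.0
```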
@tf_export('math.round', 'round') @dispatch.add_dispatch_support def round(x, name=None): 'Rounds the values of a tensor to the nearest integer, element-wise.\n\n Rounds half to even. Also known as bankers rounding. If you want to round\n according to the current system rounding mode use tf::cint.\n For example...
4,454,927,360,459,215,400
Rounds the values of a tensor to the nearest integer, element-wise. Rounds half to even. Also known as bankers rounding. If you want to round according to the current system rounding mode use tf::cint. For example: ```python x = tf.constant([0.9, 2.5, 2.3, 1.5, -4.5]) tf.round(x) # [ 1.0, 2.0, 2.0, 2.0, -4.0 ] ``` ...
tensorflow/python/ops/math_ops.py
round
minminsun/tensorflow
python
@tf_export('math.round', 'round') @dispatch.add_dispatch_support def round(x, name=None): 'Rounds the values of a tensor to the nearest integer, element-wise.\n\n Rounds half to even. Also known as bankers rounding. If you want to round\n according to the current system rounding mode use tf::cint.\n For example...
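The half-to-even ("banker's rounding") behaviour described in the `tf.round` docstring can be reproduced with Python's built-in `round`, which uses the same tie-breaking rule:

```python
# Python's built-in round() also rounds ties to the nearest even
# integer, matching the values in the tf.round docstring example.
values = [0.9, 2.5, 2.3, 1.5, -4.5]
rounded = [float(round(v)) for v in values]
print(rounded)  # [1.0, 2.0, 2.0, 2.0, -4.0]
```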
@tf_export('dtypes.cast', 'cast') @dispatch.add_dispatch_support def cast(x, dtype, name=None): 'Casts a tensor to a new type.\n\n The operation casts `x` (in case of `Tensor`) or `x.values`\n (in case of `SparseTensor` or `IndexedSlices`) to `dtype`.\n\n For example:\n\n ```python\n x = tf.constant([1.8, 2.2]...
4,099,450,131,418,936,000
Casts a tensor to a new type. The operation casts `x` (in case of `Tensor`) or `x.values` (in case of `SparseTensor` or `IndexedSlices`) to `dtype`. For example: ```python x = tf.constant([1.8, 2.2], dtype=tf.float32) tf.cast(x, tf.int32) # [1, 2], dtype=tf.int32 ``` The operation supports data types (for `x` and ...
tensorflow/python/ops/math_ops.py
cast
minminsun/tensorflow
python
@tf_export('dtypes.cast', 'cast') @dispatch.add_dispatch_support def cast(x, dtype, name=None): 'Casts a tensor to a new type.\n\n The operation casts `x` (in case of `Tensor`) or `x.values`\n (in case of `SparseTensor` or `IndexedSlices`) to `dtype`.\n\n For example:\n\n ```python\n x = tf.constant([1.8, 2.2]...
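The float-to-int behaviour in the `tf.cast` docstring example (`[1.8, 2.2]` becomes `[1, 2]`) is truncation toward zero, which Python's `int()` also performs:

```python
# Float-to-int cast truncates toward zero, like Python's int():
# the fractional part is dropped, for negative values as well.
xs = [1.8, 2.2, -1.8]
casted = [int(v) for v in xs]
print(casted)  # [1, 2, -1]
```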
@tf_export('dtypes.saturate_cast', 'saturate_cast') @dispatch.add_dispatch_support def saturate_cast(value, dtype, name=None): 'Performs a safe saturating cast of `value` to `dtype`.\n\n This function casts the input to `dtype` without applying any scaling. If\n there is a danger that values would over or underf...
2,521,787,870,536,432,000
Performs a safe saturating cast of `value` to `dtype`. This function casts the input to `dtype` without applying any scaling. If there is a danger that values would over or underflow in the cast, this op applies the appropriate clamping before the cast. Args: value: A `Tensor`. dtype: The desired output `DType`....
tensorflow/python/ops/math_ops.py
saturate_cast
minminsun/tensorflow
python
@tf_export('dtypes.saturate_cast', 'saturate_cast') @dispatch.add_dispatch_support def saturate_cast(value, dtype, name=None): 'Performs a safe saturating cast of `value` to `dtype`.\n\n This function casts the input to `dtype` without applying any scaling. If\n there is a danger that values would over or underf...
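The clamp-before-cast idea in the `saturate_cast` docstring can be sketched for a single scalar and a single target type; `saturate_cast_int8` below is a hypothetical helper, not the TF implementation:

```python
# Hypothetical scalar sketch of a saturating cast to int8: clamp the
# value into the target dtype's range first, so out-of-range inputs
# saturate at the bounds instead of overflowing or wrapping.
INT8_MIN, INT8_MAX = -128, 127

def saturate_cast_int8(v):
    return int(max(INT8_MIN, min(INT8_MAX, v)))

print(saturate_cast_int8(300.0))  # 127
print(saturate_cast_int8(-1000))  # -128
print(saturate_cast_int8(42.9))   # 42
```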
@deprecation.deprecated(date=None, instructions='Use tf.cast instead.') @tf_export(v1=['to_float']) def to_float(x, name='ToFloat'): 'Casts a tensor to type `float32`.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or...
-813,214,755,723,743,600
Casts a tensor to type `float32`. Args: x: A `Tensor` or `SparseTensor` or `IndexedSlices`. name: A name for the operation (optional). Returns: A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with type `float32`. Raises: TypeError: If `x` cannot be cast to the `float32`.
tensorflow/python/ops/math_ops.py
to_float
minminsun/tensorflow
python
@deprecation.deprecated(date=None, instructions='Use tf.cast instead.') @tf_export(v1=['to_float']) def to_float(x, name='ToFloat'): 'Casts a tensor to type `float32`.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or...
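This record is the first of a block of deprecated `to_*` casts (`to_float`, `to_double`, `to_int32`, and so on) that all follow the same shape: a thin alias, marked deprecated in favor of `tf.cast`, that delegates to the general cast. A minimal pure-Python sketch of that pattern (the `cast` and `to_float` defined here are illustrative stand-ins, not TensorFlow code):

```python
import warnings

# Illustrative deprecation pattern: a general cast plus a deprecated
# alias that warns and then delegates, as the tf.to_* family does.
def cast(x, dtype):
    return dtype(x)

def to_float(x):
    warnings.warn("to_float is deprecated; use cast(x, float) instead.",
                  DeprecationWarning, stacklevel=2)
    return cast(x, float)

print(to_float(3))  # 3.0
```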
@deprecation.deprecated(date=None, instructions='Use tf.cast instead.') @tf_export(v1=['to_double']) def to_double(x, name='ToDouble'): 'Casts a tensor to type `float64`.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`...
3,930,520,254,991,824,400
Casts a tensor to type `float64`. Args: x: A `Tensor` or `SparseTensor` or `IndexedSlices`. name: A name for the operation (optional). Returns: A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with type `float64`. Raises: TypeError: If `x` cannot be cast to the `float64`.
tensorflow/python/ops/math_ops.py
to_double
minminsun/tensorflow
python
@deprecation.deprecated(date=None, instructions='Use tf.cast instead.') @tf_export(v1=['to_double']) def to_double(x, name='ToDouble'): 'Casts a tensor to type `float64`.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor`...
@deprecation.deprecated(date=None, instructions='Use tf.cast instead.') @tf_export(v1=['to_int32']) def to_int32(x, name='ToInt32'): 'Casts a tensor to type `int32`.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `...
-4,256,511,591,987,859,000
Casts a tensor to type `int32`. Args: x: A `Tensor` or `SparseTensor` or `IndexedSlices`. name: A name for the operation (optional). Returns: A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with type `int32`. Raises: TypeError: If `x` cannot be cast to the `int32`.
tensorflow/python/ops/math_ops.py
to_int32
minminsun/tensorflow
python
@deprecation.deprecated(date=None, instructions='Use tf.cast instead.') @tf_export(v1=['to_int32']) def to_int32(x, name='ToInt32'): 'Casts a tensor to type `int32`.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `...
@deprecation.deprecated(date=None, instructions='Use tf.cast instead.') @tf_export(v1=['to_int64']) def to_int64(x, name='ToInt64'): 'Casts a tensor to type `int64`.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `...
-619,806,351,360,548,900
Casts a tensor to type `int64`. Args: x: A `Tensor` or `SparseTensor` or `IndexedSlices`. name: A name for the operation (optional). Returns: A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with type `int64`. Raises: TypeError: If `x` cannot be cast to the `int64`.
tensorflow/python/ops/math_ops.py
to_int64
minminsun/tensorflow
python
@deprecation.deprecated(date=None, instructions='Use tf.cast instead.') @tf_export(v1=['to_int64']) def to_int64(x, name='ToInt64'): 'Casts a tensor to type `int64`.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `Tensor` or `...
@deprecation.deprecated(date=None, instructions='Use tf.cast instead.') @tf_export(v1=['to_bfloat16']) def to_bfloat16(x, name='ToBFloat16'): 'Casts a tensor to type `bfloat16`.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `...
4,192,086,204,877,583,400
Casts a tensor to type `bfloat16`. Args: x: A `Tensor` or `SparseTensor` or `IndexedSlices`. name: A name for the operation (optional). Returns: A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with type `bfloat16`. Raises: TypeError: If `x` cannot be cast to the `bfloat16`.
tensorflow/python/ops/math_ops.py
to_bfloat16
minminsun/tensorflow
python
@deprecation.deprecated(date=None, instructions='Use tf.cast instead.') @tf_export(v1=['to_bfloat16']) def to_bfloat16(x, name='ToBFloat16'): 'Casts a tensor to type `bfloat16`.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n A `...
@deprecation.deprecated(date=None, instructions='Use tf.cast instead.') @tf_export(v1=['to_complex64']) def to_complex64(x, name='ToComplex64'): 'Casts a tensor to type `complex64`.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n ...
2,519,490,909,626,641,000
Casts a tensor to type `complex64`. Args: x: A `Tensor` or `SparseTensor` or `IndexedSlices`. name: A name for the operation (optional). Returns: A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with type `complex64`. Raises: TypeError: If `x` cannot be cast to the `complex64`.
tensorflow/python/ops/math_ops.py
to_complex64
minminsun/tensorflow
python
@deprecation.deprecated(date=None, instructions='Use tf.cast instead.') @tf_export(v1=['to_complex64']) def to_complex64(x, name='ToComplex64'): 'Casts a tensor to type `complex64`.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\n ...
@deprecation.deprecated(date=None, instructions='Use tf.cast instead.') @tf_export(v1=['to_complex128']) def to_complex128(x, name='ToComplex128'): 'Casts a tensor to type `complex128`.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\...
-7,577,377,853,692,675,000
Casts a tensor to type `complex128`. Args: x: A `Tensor` or `SparseTensor` or `IndexedSlices`. name: A name for the operation (optional). Returns: A `Tensor` or `SparseTensor` or `IndexedSlices` with same shape as `x` with type `complex128`. Raises: TypeError: If `x` cannot be cast to the `complex128`.
tensorflow/python/ops/math_ops.py
to_complex128
minminsun/tensorflow
python
@deprecation.deprecated(date=None, instructions='Use tf.cast instead.') @tf_export(v1=['to_complex128']) def to_complex128(x, name='ToComplex128'): 'Casts a tensor to type `complex128`.\n\n Args:\n x: A `Tensor` or `SparseTensor` or `IndexedSlices`.\n name: A name for the operation (optional).\n\n Returns:\...
def _OverrideBinaryOperatorHelper(func, op_name, clazz_object=ops.Tensor): 'Register operators with different tensor and scalar versions.\n\n If `clazz_object` is `SparseTensor`, assumes `func` takes `(sp_indices,\n sp_values, sp_shape, dense)` and outputs `(new_sp_values)`.\n\n Args:\n func: the operator\n ...
3,399,817,921,854,198,000
Register operators with different tensor and scalar versions. If `clazz_object` is `SparseTensor`, assumes `func` takes `(sp_indices, sp_values, sp_shape, dense)` and outputs `(new_sp_values)`. Args: func: the operator op_name: name of the operator being overridden clazz_object: class to override for. Either `...
tensorflow/python/ops/math_ops.py
_OverrideBinaryOperatorHelper
minminsun/tensorflow
python
def _OverrideBinaryOperatorHelper(func, op_name, clazz_object=ops.Tensor): 'Register operators with different tensor and scalar versions.\n\n If `clazz_object` is `SparseTensor`, assumes `func` takes `(sp_indices,\n sp_values, sp_shape, dense)` and outputs `(new_sp_values)`.\n\n Args:\n func: the operator\n ...
def _sparse_dense_truediv(sp_indices, sp_values, sp_shape, y, name=None): "Internal helper function for 'sp_t / dense_t'." with ops.name_scope(name, 'truediv', [sp_indices, sp_values, sp_shape, y]) as name: sp_values = ops.convert_to_tensor(sp_values, name='sp_values') y = ops.convert_to_tensor(...
-6,240,260,722,322,193,000
Internal helper function for 'sp_t / dense_t'.
tensorflow/python/ops/math_ops.py
_sparse_dense_truediv
minminsun/tensorflow
python
def _sparse_dense_truediv(sp_indices, sp_values, sp_shape, y, name=None): with ops.name_scope(name, 'truediv', [sp_indices, sp_values, sp_shape, y]) as name: sp_values = ops.convert_to_tensor(sp_values, name='sp_values') y = ops.convert_to_tensor(y, name='y') x_dtype = sp_values.dtype.b...
def _div_python2(x, y, name=None): 'Divide two values using Python 2 semantics. Used for Tensor.__div__.\n\n Args:\n x: `Tensor` numerator of real numeric type.\n y: `Tensor` denominator of real numeric type.\n name: A name for the operation (optional).\n Returns:\n `x / y` returns the quotient of x a...
3,866,777,411,707,680,000
Divide two values using Python 2 semantics. Used for Tensor.__div__. Args: x: `Tensor` numerator of real numeric type. y: `Tensor` denominator of real numeric type. name: A name for the operation (optional). Returns: `x / y` returns the quotient of x and y.
tensorflow/python/ops/math_ops.py
_div_python2
minminsun/tensorflow
python
def _div_python2(x, y, name=None): 'Divide two values using Python 2 semantics. Used for Tensor.__div__.\n\n Args:\n x: `Tensor` numerator of real numeric type.\n y: `Tensor` denominator of real numeric type.\n name: A name for the operation (optional).\n Returns:\n `x / y` returns the quotient of x a...
@tf_export('math.truediv', 'truediv') @dispatch.add_dispatch_support def truediv(x, y, name=None): 'Divides x / y elementwise (using Python 3 division operator semantics).\n\n NOTE: Prefer using the Tensor operator or tf.divide which obey Python\n division operator semantics.\n\n This function forces Python 3 di...
4,563,443,741,959,751,000
Divides x / y elementwise (using Python 3 division operator semantics). NOTE: Prefer using the Tensor operator or tf.divide which obey Python division operator semantics. This function forces Python 3 division operator semantics where all integer arguments are cast to floating types first. This op is generated by n...
tensorflow/python/ops/math_ops.py
truediv
minminsun/tensorflow
python
@tf_export('math.truediv', 'truediv') @dispatch.add_dispatch_support def truediv(x, y, name=None): 'Divides x / y elementwise (using Python 3 division operator semantics).\n\n NOTE: Prefer using the Tensor operator or tf.divide which obey Python\n division operator semantics.\n\n This function forces Python 3 di...
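The "Python 3 division operator semantics" the `truediv` docstring refers to are those of the plain `/` operator: integer operands are promoted to floating point first, so the quotient is exact rather than floored:

```python
# Python 3 `/` promotes integer operands to float before dividing,
# which is the behaviour tf.truediv forces.
q = 7 / 2
print(q)       # 3.5
print(7 // 2)  # 3, the floored quotient, for comparison
```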
@deprecation.deprecated(date=None, instructions='Deprecated in favor of operator or tf.math.divide.') @tf_export(v1=['div']) def div(x, y, name=None): 'Divides x / y elementwise (using Python 2 division operator semantics).\n\n NOTE: Prefer using the Tensor division operator or tf.divide which obey Python\n divis...
-2,647,516,656,319,070,000
Divides x / y elementwise (using Python 2 division operator semantics). NOTE: Prefer using the Tensor division operator or tf.divide which obey Python division operator semantics. This function divides `x` and `y`, forcing Python 2.7 semantics. That is, if one of `x` or `y` is a float, then the result will be a float...
tensorflow/python/ops/math_ops.py
div
minminsun/tensorflow
python
@deprecation.deprecated(date=None, instructions='Deprecated in favor of operator or tf.math.divide.') @tf_export(v1=['div']) def div(x, y, name=None): 'Divides x / y elementwise (using Python 2 division operator semantics).\n\n NOTE: Prefer using the Tensor division operator or tf.divide which obey Python\n divis...
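The "Python 2 division operator semantics" that `tf.div` (and the internal `_div_python2` helper above) preserve can be sketched directly: floor division when both operands are integers, true division when either is a float. `div_py2` is a hypothetical illustration:

```python
# Hypothetical sketch of Python 2 division semantics: integer / integer
# floors the result; if either operand is a float, the result is exact.
def div_py2(x, y):
    if isinstance(x, int) and isinstance(y, int):
        return x // y
    return x / y

print(div_py2(7, 2))    # 3
print(div_py2(-7, 2))   # -4
print(div_py2(7.0, 2))  # 3.5
```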
@tf_export('div_no_nan') @dispatch.add_dispatch_support def div_no_nan(x, y, name=None): 'Computes an unsafe divide which returns 0 if the y is zero.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n y: A `Tensor` whose dtype is compatible with `x`.\n name: A name for ...
1,129,862,942,993,974,500
Computes an unsafe divide which returns 0 if the y is zero. Args: x: A `Tensor`. Must be one of the following types: `float32`, `float64`. y: A `Tensor` whose dtype is compatible with `x`. name: A name for the operation (optional). Returns: The element-wise value of the x divided by y.
tensorflow/python/ops/math_ops.py
div_no_nan
minminsun/tensorflow
python
@tf_export('div_no_nan') @dispatch.add_dispatch_support def div_no_nan(x, y, name=None): 'Computes an unsafe divide which returns 0 if the y is zero.\n\n Args:\n x: A `Tensor`. Must be one of the following types: `float32`, `float64`.\n y: A `Tensor` whose dtype is compatible with `x`.\n name: A name for ...
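The `div_no_nan` contract in scalar form: the ordinary quotient, except that a zero denominator yields 0 instead of an infinity or NaN (a sketch, not the TF kernel):

```python
# Scalar sketch of div_no_nan: return 0 when the denominator is zero
# rather than raising or producing inf/nan.
def div_no_nan(x, y):
    return 0.0 if y == 0 else x / y

print(div_no_nan(3.0, 2.0))  # 1.5
print(div_no_nan(3.0, 0.0))  # 0.0
```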
@tf_export('math.floordiv', v1=['math.floordiv', 'floordiv']) @dispatch.add_dispatch_support @deprecation.deprecated_endpoints('floordiv') def floordiv(x, y, name=None): 'Divides `x / y` elementwise, rounding toward the most negative integer.\n\n The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,...
7,450,040,279,089,519,000
Divides `x / y` elementwise, rounding toward the most negative integer. The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,y))` for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by `x // y` floor divis...
tensorflow/python/ops/math_ops.py
floordiv
minminsun/tensorflow
python
@tf_export('math.floordiv', v1=['math.floordiv', 'floordiv']) @dispatch.add_dispatch_support @deprecation.deprecated_endpoints('floordiv') def floordiv(x, y, name=None): 'Divides `x / y` elementwise, rounding toward the most negative integer.\n\n The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,...
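"Rounding toward the most negative integer" is exactly Python's `//` operator (equivalently, the floor of the true quotient), which makes the sign behaviour easy to check:

```python
import math

# Floor division rounds toward -inf, not toward zero: -7 / 2 = -3.5
# floors to -4, matching the floordiv contract described above.
print(7 // 2)              # 3
print(-7 // 2)             # -4
print(math.floor(-7 / 2))  # -4, the same value
```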
def _mul_dispatch(x, y, name=None): 'Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse".' is_tensor_y = isinstance(y, ops.Tensor) if is_tensor_y: return gen_math_ops.mul(x, y, name=name) else: assert isinstance(y, sparse_tensor.SparseTensor) new_vals = gen_sparse_ops.spars...
3,161,422,901,298,774,000
Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse".
tensorflow/python/ops/math_ops.py
_mul_dispatch
minminsun/tensorflow
python
def _mul_dispatch(x, y, name=None): is_tensor_y = isinstance(y, ops.Tensor) if is_tensor_y: return gen_math_ops.mul(x, y, name=name) else: assert isinstance(y, sparse_tensor.SparseTensor) new_vals = gen_sparse_ops.sparse_dense_cwise_mul(y.indices, y.values, y.dense_shape, x, nam...
@tf_export('math.logical_xor', v1=['math.logical_xor', 'logical_xor']) @dispatch.add_dispatch_support @deprecation.deprecated_endpoints('logical_xor') def logical_xor(x, y, name='LogicalXor'): 'x ^ y = (x | y) & ~(x & y).' return gen_math_ops.logical_and(gen_math_ops.logical_or(x, y), gen_math_ops.logical_not(g...
7,408,680,893,332,393,000
x ^ y = (x | y) & ~(x & y).
tensorflow/python/ops/math_ops.py
logical_xor
minminsun/tensorflow
python
@tf_export('math.logical_xor', v1=['math.logical_xor', 'logical_xor']) @dispatch.add_dispatch_support @deprecation.deprecated_endpoints('logical_xor') def logical_xor(x, y, name='LogicalXor'): return gen_math_ops.logical_and(gen_math_ops.logical_or(x, y), gen_math_ops.logical_not(gen_math_ops.logical_and(x, y)...
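The identity `logical_xor` is built from, `x ^ y = (x | y) & ~(x & y)`, can be checked exhaustively over boolean inputs:

```python
# Exhaustive check of the xor identity over all four boolean pairs:
# x != y (boolean xor) equals (x or y) and not (x and y).
for x in (False, True):
    for y in (False, True):
        assert (x != y) == ((x or y) and not (x and y))
print("xor identity holds for all boolean pairs")
```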
@tf_export('range') def range(start, limit=None, delta=1, dtype=None, name='range'): 'Creates a sequence of numbers.\n\n Creates a sequence of numbers that begins at `start` and extends by\n increments of `delta` up to but not including `limit`.\n\n The dtype of the resulting tensor is inferred from the inputs u...
2,806,946,857,738,272,000
Creates a sequence of numbers. Creates a sequence of numbers that begins at `start` and extends by increments of `delta` up to but not including `limit`. The dtype of the resulting tensor is inferred from the inputs unless it is provided explicitly. Like the Python builtin `range`, `start` defaults to 0, so that `ra...
tensorflow/python/ops/math_ops.py
range
minminsun/tensorflow
python
@tf_export('range') def range(start, limit=None, delta=1, dtype=None, name='range'): 'Creates a sequence of numbers.\n\n Creates a sequence of numbers that begins at `start` and extends by\n increments of `delta` up to but not including `limit`.\n\n The dtype of the resulting tensor is inferred from the inputs u...
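As the docstring says, `tf.range` mirrors the Python builtin `range`: a single argument is treated as the exclusive `limit` with `start` defaulting to 0, and `delta` is the step:

```python
# The builtin range has the same start/limit/delta semantics the
# tf.range docstring describes: limit is excluded, start defaults to 0.
print(list(range(5)))         # [0, 1, 2, 3, 4]
print(list(range(3, 18, 3)))  # [3, 6, 9, 12, 15]
```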
def _ReductionDims(x, axis, reduction_indices=None): 'Returns range(0, rank(x)) if reduction_indices is None.' if (reduction_indices is not None): if (axis is not None): raise ValueError("Can't specify both axis' and 'reduction_indices'.") axis = reduction_indices if (axis is not...
-1,727,942,052,001,307,100
Returns range(0, rank(x)) if reduction_indices is None.
tensorflow/python/ops/math_ops.py
_ReductionDims
minminsun/tensorflow
python
def _ReductionDims(x, axis, reduction_indices=None): if (reduction_indices is not None): if (axis is not None): raise ValueError("Can't specify both axis' and 'reduction_indices'.") axis = reduction_indices if (axis is not None): return axis else: rank = comm...
def _may_reduce_to_scalar(keepdims, axis, output): "Set a reduction's output shape to be a scalar if we are certain." if ((not common_shapes.has_fully_defined_shape(output)) and (not keepdims) and (axis is None)): output.set_shape(()) return output
-7,311,594,891,318,815,000
Set a reduction's output shape to be a scalar if we are certain.
tensorflow/python/ops/math_ops.py
_may_reduce_to_scalar
minminsun/tensorflow
python
def _may_reduce_to_scalar(keepdims, axis, output): if ((not common_shapes.has_fully_defined_shape(output)) and (not keepdims) and (axis is None)): output.set_shape(()) return output
@tf_export(v1=['math.reduce_sum', 'reduce_sum']) @deprecation.deprecated_args(None, 'keep_dims is deprecated, use keepdims instead', 'keep_dims') def reduce_sum_v1(input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None): 'Computes the sum of elements across dimensions of a tensor....
-643,192,823,238,918,900
Computes the sum of elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dimensio...
tensorflow/python/ops/math_ops.py
reduce_sum_v1
minminsun/tensorflow
python
@tf_export(v1=['math.reduce_sum', 'reduce_sum']) @deprecation.deprecated_args(None, 'keep_dims is deprecated, use keepdims instead', 'keep_dims') def reduce_sum_v1(input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None): 'Computes the sum of elements across dimensions of a tensor....
@tf_export('math.reduce_sum', 'reduce_sum', v1=[]) @dispatch.add_dispatch_support def reduce_sum(input_tensor, axis=None, keepdims=False, name=None): 'Computes the sum of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank ...
-6,350,055,839,824,817,000
Computes the sum of elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dimensio...
tensorflow/python/ops/math_ops.py
reduce_sum
minminsun/tensorflow
python
@tf_export('math.reduce_sum', 'reduce_sum', v1=[]) @dispatch.add_dispatch_support def reduce_sum(input_tensor, axis=None, keepdims=False, name=None): 'Computes the sum of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank ...
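The axis/keepdims rules shared by all the reduction records can be sketched in pure Python on a 2-D "tensor" represented as a list of rows (illustrative expressions only, not the TF kernels):

```python
# Pure-Python sketch of reduce_sum semantics on a 2-D list of rows:
# axis=None reduces everything to a scalar; axis=0 reduces over rows;
# axis=1 reduces over columns; keepdims=True retains the reduced rank.
x = [[1, 1, 1],
     [1, 1, 1]]

total = sum(sum(row) for row in x)      # axis=None
axis0 = [sum(col) for col in zip(*x)]   # axis=0
axis1 = [sum(row) for row in x]         # axis=1
axis1_keep = [[sum(row)] for row in x]  # axis=1, keepdims=True

print(total, axis0, axis1, axis1_keep)  # 6 [2, 2, 2] [3, 3] [[3], [3]]
```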
@tf_export(v1=['math.count_nonzero', 'count_nonzero']) @deprecation.deprecated_args(None, 'keep_dims is deprecated, use keepdims instead', 'keep_dims') @deprecation.deprecated_args(None, 'reduction_indices is deprecated, use axis instead', 'axis') def count_nonzero(input_tensor, axis=None, keepdims=None, dtype=dtypes.i...
7,200,931,276,229,210,000
Computes number of nonzero elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` has no entries...
tensorflow/python/ops/math_ops.py
count_nonzero
minminsun/tensorflow
python
@tf_export(v1=['math.count_nonzero', 'count_nonzero']) @deprecation.deprecated_args(None, 'keep_dims is deprecated, use keepdims instead', 'keep_dims') @deprecation.deprecated_args(None, 'reduction_indices is deprecated, use axis instead', 'axis') def count_nonzero(input_tensor, axis=None, keepdims=None, dtype=dtypes.i...
@tf_export('math.count_nonzero', v1=[]) def count_nonzero_v2(input, axis=None, keepdims=None, dtype=dtypes.int64, name=None): 'Computes number of nonzero elements across dimensions of a tensor.\n\n Reduces `input` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced...
-7,720,529,422,064,440,000
Computes number of nonzero elements across dimensions of a tensor. Reduces `input` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` has no entries, all d...
tensorflow/python/ops/math_ops.py
count_nonzero_v2
minminsun/tensorflow
python
@tf_export('math.count_nonzero', v1=[]) def count_nonzero_v2(input, axis=None, keepdims=None, dtype=dtypes.int64, name=None): 'Computes number of nonzero elements across dimensions of a tensor.\n\n Reduces `input` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced...
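A flat-list sketch of the `count_nonzero` contract. The string behaviour (only the empty string counts as "zero"; whitespace is nonzero) comes from the full TF docstring, which is truncated in the record above:

```python
# Hypothetical flat-list count_nonzero: "zero" means numeric 0 for
# numbers and "" for strings; whitespace-only strings are nonzero.
def count_nonzero(values):
    return sum(1 for v in values if v != 0 and v != "")

print(count_nonzero([0, 1, 1, 0]))    # 2
print(count_nonzero(["", "a", " "]))  # 2
```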
@tf_export(v1=['math.reduce_mean', 'reduce_mean']) def reduce_mean_v1(input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None): 'Computes the mean of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is tr...
-7,600,130,728,508,035,000
Computes the mean of elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dimensi...
tensorflow/python/ops/math_ops.py
reduce_mean_v1
minminsun/tensorflow
python
@tf_export(v1=['math.reduce_mean', 'reduce_mean']) def reduce_mean_v1(input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None): 'Computes the mean of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is tr...
@tf_export('math.reduce_mean', 'reduce_mean', v1=[]) @dispatch.add_dispatch_support def reduce_mean(input_tensor, axis=None, keepdims=False, name=None): 'Computes the mean of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the r...
2,426,798,984,264,452,600
Computes the mean of elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dimensi...
tensorflow/python/ops/math_ops.py
reduce_mean
minminsun/tensorflow
python
@tf_export('math.reduce_mean', 'reduce_mean', v1=[]) @dispatch.add_dispatch_support def reduce_mean(input_tensor, axis=None, keepdims=False, name=None): 'Computes the mean of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the r...
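With `axis=None`, `reduce_mean` is the ordinary arithmetic mean. One caveat worth illustrating (stated in the full docstring, truncated here): when the input dtype is integral, the mean is computed in integer arithmetic and truncates:

```python
from statistics import mean

# Float mean is exact; an integer-arithmetic mean drops the fractional
# part, as tf.reduce_mean does for integer dtypes.
x = [1.0, 2.0, 3.0, 4.0]
print(mean(x))           # 2.5
print(sum([1, 2]) // 2)  # 1, not 1.5
```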
@tf_export('math.reduce_variance') def reduce_variance(input_tensor, axis=None, keepdims=False, name=None): 'Computes the variance of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for eac...
6,836,177,446,070,157,000
Computes the variance of elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dim...
tensorflow/python/ops/math_ops.py
reduce_variance
minminsun/tensorflow
python
@tf_export('math.reduce_variance') def reduce_variance(input_tensor, axis=None, keepdims=False, name=None): 'Computes the variance of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for eac...
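`reduce_variance` computes the mean of squared deviations from the mean, i.e. the population variance with no Bessel correction, which matches the standard library's `statistics.pvariance`:

```python
from statistics import pvariance

# Population variance: mean of squared deviations, dividing by n
# (not n - 1), as reduce_variance does.
x = [1.0, 2.0, 3.0, 4.0]
mu = sum(x) / len(x)
var = sum((v - mu) ** 2 for v in x) / len(x)
print(var)  # 1.25
```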
@tf_export('math.reduce_std') def reduce_std(input_tensor, axis=None, keepdims=False, name=None): 'Computes the standard deviation of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for eac...
-6,180,980,226,246,193,000
Computes the standard deviation of elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is Non...
tensorflow/python/ops/math_ops.py
reduce_std
minminsun/tensorflow
python
@tf_export('math.reduce_std') def reduce_std(input_tensor, axis=None, keepdims=False, name=None): 'Computes the standard deviation of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the tensor is reduced by 1 for eac...
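`reduce_std` is simply the square root of `reduce_variance` (the population standard deviation, `statistics.pstdev` in the standard library):

```python
import math
from statistics import pstdev

# Population standard deviation: sqrt of the population variance.
x = [1.0, 2.0, 3.0, 4.0]
mu = sum(x) / len(x)
std = math.sqrt(sum((v - mu) ** 2 for v in x) / len(x))
print(std)  # sqrt(1.25), about 1.118
```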
@tf_export('math.reduce_prod', 'reduce_prod', v1=[]) @dispatch.add_dispatch_support def reduce_prod(input_tensor, axis=None, keepdims=False, name=None): 'Computes the product of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, th...
-7,169,107,470,110,706,000
Computes the product of elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dime...
tensorflow/python/ops/math_ops.py
reduce_prod
minminsun/tensorflow
python
@tf_export('math.reduce_prod', 'reduce_prod', v1=[]) @dispatch.add_dispatch_support def reduce_prod(input_tensor, axis=None, keepdims=False, name=None): 'Computes the product of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, th...
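The `axis` / `keepdims` behavior documented for `reduce_prod` can be sketched for a 2-D input in plain Python (names and the 2-D restriction are illustrative, not the real op):

```python
def reduce_prod_2d(x, axis=None, keepdims=False):
    # axis=None: product over every element; axis=0: one product per
    # column; axis=1: one product per row. keepdims retains the reduced
    # dimension with length 1, as the docstring describes.
    if axis is None:
        p = 1
        for row in x:
            for v in row:
                p *= v
        return p
    if axis == 0:
        cols = [1] * len(x[0])
        for row in x:
            for j, v in enumerate(row):
                cols[j] *= v
        return [cols] if keepdims else cols
    rows = []
    for row in x:
        p = 1
        for v in row:
            p *= v
        rows.append([p] if keepdims else p)
    return rows

x = [[1, 2], [3, 4]]
print(reduce_prod_2d(x))                         # 24
print(reduce_prod_2d(x, axis=0))                 # [3, 8]
print(reduce_prod_2d(x, axis=1, keepdims=True))  # [[2], [12]]
```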
@tf_export(v1=['math.reduce_prod', 'reduce_prod']) @deprecation.deprecated_args(None, 'keep_dims is deprecated, use keepdims instead', 'keep_dims') def reduce_prod_v1(input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None): 'Computes the product of elements across dimensions of a ...
-1,712,806,314,986,321,400
Computes the product of elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dime...
tensorflow/python/ops/math_ops.py
reduce_prod_v1
minminsun/tensorflow
python
@tf_export(v1=['math.reduce_prod', 'reduce_prod']) @deprecation.deprecated_args(None, 'keep_dims is deprecated, use keepdims instead', 'keep_dims') def reduce_prod_v1(input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None): 'Computes the product of elements across dimensions of a ...
@tf_export(v1=['math.reduce_min', 'reduce_min']) @deprecation.deprecated_args(None, 'keep_dims is deprecated, use keepdims instead', 'keep_dims') def reduce_min_v1(input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None): 'Computes the minimum of elements across dimensions of a ten...
-684,596,964,939,524,000
Computes the minimum of elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dime...
tensorflow/python/ops/math_ops.py
reduce_min_v1
minminsun/tensorflow
python
@tf_export(v1=['math.reduce_min', 'reduce_min']) @deprecation.deprecated_args(None, 'keep_dims is deprecated, use keepdims instead', 'keep_dims') def reduce_min_v1(input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None): 'Computes the minimum of elements across dimensions of a ten...
@tf_export('math.reduce_min', 'reduce_min', v1=[]) @dispatch.add_dispatch_support def reduce_min(input_tensor, axis=None, keepdims=False, name=None): 'Computes the minimum of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the r...
5,341,900,889,916,194,000
Computes the minimum of elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dime...
tensorflow/python/ops/math_ops.py
reduce_min
minminsun/tensorflow
python
@tf_export('math.reduce_min', 'reduce_min', v1=[]) @dispatch.add_dispatch_support def reduce_min(input_tensor, axis=None, keepdims=False, name=None): 'Computes the minimum of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the r...
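The same per-axis reduction pattern applies to `reduce_min`; a minimal 2-D sketch (illustrative only):

```python
def reduce_min_2d(x, axis=None):
    # axis=None reduces all dimensions; axis=0 takes per-column minima,
    # axis=1 per-row minima, dropping the reduced rank-1 dimension.
    if axis is None:
        return min(v for row in x for v in row)
    if axis == 0:
        return [min(col) for col in zip(*x)]
    return [min(row) for row in x]

x = [[3, 1], [2, 4]]
print(reduce_min_2d(x))          # 1
print(reduce_min_2d(x, axis=0))  # [2, 1]
print(reduce_min_2d(x, axis=1))  # [1, 2]
```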
@tf_export(v1=['math.reduce_max', 'reduce_max']) @deprecation.deprecated_args(None, 'keep_dims is deprecated, use keepdims instead', 'keep_dims') def reduce_max_v1(input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None): 'Computes the maximum of elements across dimensions of a ten...
-4,380,389,479,488,748,000
Computes the maximum of elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dime...
tensorflow/python/ops/math_ops.py
reduce_max_v1
minminsun/tensorflow
python
@tf_export(v1=['math.reduce_max', 'reduce_max']) @deprecation.deprecated_args(None, 'keep_dims is deprecated, use keepdims instead', 'keep_dims') def reduce_max_v1(input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None): 'Computes the maximum of elements across dimensions of a ten...
@tf_export('math.reduce_max', 'reduce_max', v1=[]) @dispatch.add_dispatch_support def reduce_max(input_tensor, axis=None, keepdims=False, name=None): 'Computes the maximum of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the r...
-1,403,142,209,296,636,400
Computes the maximum of elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all dime...
tensorflow/python/ops/math_ops.py
reduce_max
minminsun/tensorflow
python
@tf_export('math.reduce_max', 'reduce_max', v1=[]) @dispatch.add_dispatch_support def reduce_max(input_tensor, axis=None, keepdims=False, name=None): 'Computes the maximum of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the r...
@tf_export(v1=['math.reduce_all', 'reduce_all']) @deprecation.deprecated_args(None, 'keep_dims is deprecated, use keepdims instead', 'keep_dims') def reduce_all_v1(input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None): 'Computes the "logical and" of elements across dimensions of...
-326,992,824,029,111,400
Computes the "logical and" of elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, al...
tensorflow/python/ops/math_ops.py
reduce_all_v1
minminsun/tensorflow
python
@tf_export(v1=['math.reduce_all', 'reduce_all']) @deprecation.deprecated_args(None, 'keep_dims is deprecated, use keepdims instead', 'keep_dims') def reduce_all_v1(input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None): 'Computes the "logical and" of elements across dimensions of...
@tf_export('reduce_all', 'math.reduce_all', v1=[]) @dispatch.add_dispatch_support def reduce_all(input_tensor, axis=None, keepdims=False, name=None): 'Computes the "logical and" of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true,...
-1,228,370,869,425,369,600
Computes the "logical and" of elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, al...
tensorflow/python/ops/math_ops.py
reduce_all
minminsun/tensorflow
python
@tf_export('reduce_all', 'math.reduce_all', v1=[]) @dispatch.add_dispatch_support def reduce_all(input_tensor, axis=None, keepdims=False, name=None): 'Computes the "logical and" of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true,...
@tf_export(v1=['math.reduce_any', 'reduce_any']) @deprecation.deprecated_args(None, 'keep_dims is deprecated, use keepdims instead', 'keep_dims') def reduce_any_v1(input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None): 'Computes the "logical or" of elements across dimensions of ...
4,959,863,152,156,496,000
Computes the "logical or" of elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all...
tensorflow/python/ops/math_ops.py
reduce_any_v1
minminsun/tensorflow
python
@tf_export(v1=['math.reduce_any', 'reduce_any']) @deprecation.deprecated_args(None, 'keep_dims is deprecated, use keepdims instead', 'keep_dims') def reduce_any_v1(input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None): 'Computes the "logical or" of elements across dimensions of ...
@tf_export('math.reduce_any', 'reduce_any', v1=[]) @dispatch.add_dispatch_support def reduce_any(input_tensor, axis=None, keepdims=False, name=None): 'Computes the "logical or" of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, ...
4,303,524,323,399,926,000
Computes the "logical or" of elements across dimensions of a tensor. Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` is None, all...
tensorflow/python/ops/math_ops.py
reduce_any
minminsun/tensorflow
python
@tf_export('math.reduce_any', 'reduce_any', v1=[]) @dispatch.add_dispatch_support def reduce_any(input_tensor, axis=None, keepdims=False, name=None): 'Computes the "logical or" of elements across dimensions of a tensor.\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, ...
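The "logical and" / "logical or" reductions documented in the `reduce_all` and `reduce_any` records map directly onto Python's built-ins; a sketch of the documented semantics (not the TensorFlow kernels):

```python
def reduce_all_1d(x):
    # "logical and" across all elements, as documented for reduce_all.
    return all(x)

def reduce_any_1d(x):
    # "logical or" across all elements, as documented for reduce_any.
    return any(x)

x = [[True, True], [False, True]]
print(reduce_all_1d([True, True, False]))  # False
print(reduce_any_1d([True, True, False]))  # True
print([all(col) for col in zip(*x)])       # axis=0 analogue: [False, True]
print([any(row) for row in x])             # axis=1 analogue: [True, True]
```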
@tf_export(v1=['math.reduce_logsumexp', 'reduce_logsumexp']) @deprecation.deprecated_args(None, 'keep_dims is deprecated, use keepdims instead', 'keep_dims') def reduce_logsumexp_v1(input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None): 'Computes log(sum(exp(elements across dime...
3,363,810,572,624,813,000
Computes log(sum(exp(elements across dimensions of a tensor))). Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` has no entries, a...
tensorflow/python/ops/math_ops.py
reduce_logsumexp_v1
minminsun/tensorflow
python
@tf_export(v1=['math.reduce_logsumexp', 'reduce_logsumexp']) @deprecation.deprecated_args(None, 'keep_dims is deprecated, use keepdims instead', 'keep_dims') def reduce_logsumexp_v1(input_tensor, axis=None, keepdims=None, name=None, reduction_indices=None, keep_dims=None): 'Computes log(sum(exp(elements across dime...
@tf_export('math.reduce_logsumexp', 'reduce_logsumexp', v1=[]) def reduce_logsumexp(input_tensor, axis=None, keepdims=False, name=None): 'Computes log(sum(exp(elements across dimensions of a tensor))).\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the te...
-5,735,506,923,689,613,000
Computes log(sum(exp(elements across dimensions of a tensor))). Reduces `input_tensor` along the dimensions given in `axis`. Unless `keepdims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keepdims` is true, the reduced dimensions are retained with length 1. If `axis` has no entries, a...
tensorflow/python/ops/math_ops.py
reduce_logsumexp
minminsun/tensorflow
python
@tf_export('math.reduce_logsumexp', 'reduce_logsumexp', v1=[]) def reduce_logsumexp(input_tensor, axis=None, keepdims=False, name=None): 'Computes log(sum(exp(elements across dimensions of a tensor))).\n\n Reduces `input_tensor` along the dimensions given in `axis`.\n Unless `keepdims` is true, the rank of the te...
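The point of `reduce_logsumexp` is numerical stability: computing `log(sum(exp(x)))` naively overflows for large inputs. The standard max-shift trick, sketched in plain Python (illustrative, not the TensorFlow implementation):

```python
import math

def reduce_logsumexp(values):
    # Shift by the maximum so every exponential is <= 1 (no overflow),
    # then add the shift back outside the log.
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))

print(reduce_logsumexp([0.0, 0.0]))        # log(2) ~ 0.6931
print(reduce_logsumexp([1000.0, 1000.0]))  # ~1000.6931; naive exp(1000) overflows
```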
@tf_export('linalg.trace', v1=['linalg.trace', 'trace']) @deprecation.deprecated_endpoints('trace') def trace(x, name=None): 'Compute the trace of a tensor `x`.\n\n `trace(x)` returns the sum along the main diagonal of each inner-most matrix\n in x. If x is of rank `k` with shape `[I, J, K, ..., L, M, N]`, then o...
-3,322,411,102,757,272,000
Compute the trace of a tensor `x`. `trace(x)` returns the sum along the main diagonal of each inner-most matrix in x. If x is of rank `k` with shape `[I, J, K, ..., L, M, N]`, then output is a tensor of rank `k-2` with dimensions `[I, J, K, ..., L]` where `output[i, j, k, ..., l] = trace(x[i, j, k, ..., l, :, :])` F...
tensorflow/python/ops/math_ops.py
trace
minminsun/tensorflow
python
@tf_export('linalg.trace', v1=['linalg.trace', 'trace']) @deprecation.deprecated_endpoints('trace') def trace(x, name=None): 'Compute the trace of a tensor `x`.\n\n `trace(x)` returns the sum along the main diagonal of each inner-most matrix\n in x. If x is of rank `k` with shape `[I, J, K, ..., L, M, N]`, then o...
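For a single matrix, the trace described above is just the sum of the main diagonal; a minimal sketch (the real op batches this over all inner-most matrices):

```python
def trace_2d(x):
    # Sum along the main diagonal of one matrix.
    return sum(x[i][i] for i in range(min(len(x), len(x[0]))))

print(trace_2d([[1, 2], [3, 4]]))  # 1 + 4 = 5
```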
@tf_export('linalg.matmul', 'matmul') def matmul(a, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, a_is_sparse=False, b_is_sparse=False, name=None): 'Multiplies matrix `a` by matrix `b`, producing `a` * `b`.\n\n The inputs must, following any transpositions, be tensors of rank >= 2\n w...
-7,635,559,913,485,626,000
Multiplies matrix `a` by matrix `b`, producing `a` * `b`. The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication arguments, and any further outer dimensions match. Both matrices must be of the same type. The supported types are: `float16...
tensorflow/python/ops/math_ops.py
matmul
minminsun/tensorflow
python
@tf_export('linalg.matmul', 'matmul') def matmul(a, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, a_is_sparse=False, b_is_sparse=False, name=None): 'Multiplies matrix `a` by matrix `b`, producing `a` * `b`.\n\n The inputs must, following any transpositions, be tensors of rank >= 2\n w...
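The "inner 2 dimensions specify valid matrix multiplication arguments" requirement can be illustrated with a plain nested-loop product (a teaching sketch; the real op handles batching, transposition, and sparsity hints):

```python
def matmul(a, b):
    # a is n x k, b is k x m; the inner dimension k must match.
    n, k, m = len(a), len(b), len(b[0])
    assert len(a[0]) == k, "inner dimensions must agree"
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(matmul(a, b))  # [[19, 22], [43, 50]]
```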
@tf_export('linalg.matvec') def matvec(a, b, transpose_a=False, adjoint_a=False, a_is_sparse=False, b_is_sparse=False, name=None): 'Multiplies matrix `a` by vector `b`, producing `a` * `b`.\n\n The matrix `a` must, following any transpositions, be a tensor of rank >= 2,\n and we must have `shape(b) = shape(a)[:-2...
-1,994,279,374,956,680,400
Multiplies matrix `a` by vector `b`, producing `a` * `b`. The matrix `a` must, following any transpositions, be a tensor of rank >= 2, and we must have `shape(b) = shape(a)[:-2] + [shape(a)[-1]]`. Both `a` and `b` must be of the same type. The supported types are: `float16`, `float32`, `float64`, `int32`, `complex64`...
tensorflow/python/ops/math_ops.py
matvec
minminsun/tensorflow
python
@tf_export('linalg.matvec') def matvec(a, b, transpose_a=False, adjoint_a=False, a_is_sparse=False, b_is_sparse=False, name=None): 'Multiplies matrix `a` by vector `b`, producing `a` * `b`.\n\n The matrix `a` must, following any transpositions, be a tensor of rank >= 2,\n and we must have `shape(b) = shape(a)[:-2...
@ops.RegisterStatistics('MatMul', 'flops') def _calc_mat_mul_flops(graph, node): 'Calculates the compute resources needed for MatMul.' transpose_a = node.attr['transpose_a'].b a_shape = graph_util.tensor_shape_from_node_def_name(graph, node.input[0]) a_shape.assert_is_fully_defined() if transpose_a:...
4,539,289,546,934,779,400
Calculates the compute resources needed for MatMul.
tensorflow/python/ops/math_ops.py
_calc_mat_mul_flops
minminsun/tensorflow
python
@ops.RegisterStatistics('MatMul', 'flops') def _calc_mat_mul_flops(graph, node): transpose_a = node.attr['transpose_a'].b a_shape = graph_util.tensor_shape_from_node_def_name(graph, node.input[0]) a_shape.assert_is_fully_defined() if transpose_a: k = int(a_shape[0]) else: k = in...
def _as_indexed_slices(x, optimize=True): "Convert 'x' to IndexedSlices.\n\n Convert a dense Tensor to a block-sparse IndexedSlices.\n\n Args:\n x: Either a Tensor object, or an IndexedSlices object.\n optimize: if true, attempt to optimize the conversion of 'x'.\n\n Returns:\n An IndexedSlices object.\...
5,001,471,169,256,508,000
Convert 'x' to IndexedSlices. Convert a dense Tensor to a block-sparse IndexedSlices. Args: x: Either a Tensor object, or an IndexedSlices object. optimize: if true, attempt to optimize the conversion of 'x'. Returns: An IndexedSlices object. Raises: TypeError: If 'x' is not a Tensor or an IndexedSlices obj...
tensorflow/python/ops/math_ops.py
_as_indexed_slices
minminsun/tensorflow
python
def _as_indexed_slices(x, optimize=True): "Convert 'x' to IndexedSlices.\n\n Convert a dense Tensor to a block-sparse IndexedSlices.\n\n Args:\n x: Either a Tensor object, or an IndexedSlices object.\n optimize: if true, attempt to optimize the conversion of 'x'.\n\n Returns:\n An IndexedSlices object.\...
def _as_indexed_slices_list(inputs, optimize=True): "Convert all elements of 'inputs' to IndexedSlices.\n\n Additionally, homogenize the types of all the indices to\n either int32 or int64.\n\n Args:\n inputs: List containing either Tensor or IndexedSlices objects.\n optimize: if true, attempt to optimize ...
-6,402,197,960,556,010,000
Convert all elements of 'inputs' to IndexedSlices. Additionally, homogenize the types of all the indices to either int32 or int64. Args: inputs: List containing either Tensor or IndexedSlices objects. optimize: if true, attempt to optimize the conversion of each input. Returns: A list of IndexedSlices objects....
tensorflow/python/ops/math_ops.py
_as_indexed_slices_list
minminsun/tensorflow
python
def _as_indexed_slices_list(inputs, optimize=True): "Convert all elements of 'inputs' to IndexedSlices.\n\n Additionally, homogenize the types of all the indices to\n either int32 or int64.\n\n Args:\n inputs: List containing either Tensor or IndexedSlices objects.\n optimize: if true, attempt to optimize ...
@tf_export('math.add_n', 'add_n') @dispatch.add_dispatch_support def add_n(inputs, name=None): "Adds all input tensors element-wise.\n\n Converts `IndexedSlices` objects into dense tensors prior to adding.\n\n Args:\n inputs: A list of `Tensor` or `IndexedSlices` objects, each with same shape\n and type.\...
7,431,452,468,637,580,000
Adds all input tensors element-wise. Converts `IndexedSlices` objects into dense tensors prior to adding. Args: inputs: A list of `Tensor` or `IndexedSlices` objects, each with same shape and type. name: A name for the operation (optional). Returns: A `Tensor` of same shape and type as the elements of `inp...
tensorflow/python/ops/math_ops.py
add_n
minminsun/tensorflow
python
@tf_export('math.add_n', 'add_n') @dispatch.add_dispatch_support def add_n(inputs, name=None): "Adds all input tensors element-wise.\n\n Converts `IndexedSlices` objects into dense tensors prior to adding.\n\n Args:\n inputs: A list of `Tensor` or `IndexedSlices` objects, each with same shape\n and type.\...
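The element-wise sum over a list of same-shape inputs that `add_n` documents can be sketched for 1-D inputs (illustrative helper, not the `AddN` kernel):

```python
def add_n(inputs):
    # Element-wise sum of a list of same-length 1-D sequences;
    # all inputs must have the same shape, as the docstring requires.
    return [sum(vals) for vals in zip(*inputs)]

print(add_n([[1, 2], [3, 4], [5, 6]]))  # [9, 12]
```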
@tf_export('math.accumulate_n', v1=['math.accumulate_n', 'accumulate_n']) @deprecation.deprecated_endpoints('accumulate_n') def accumulate_n(inputs, shape=None, tensor_dtype=None, name=None): "Returns the element-wise sum of a list of tensors.\n\n Optionally, pass `shape` and `tensor_dtype` for shape and type chec...
3,628,942,482,496,208,400
Returns the element-wise sum of a list of tensors. Optionally, pass `shape` and `tensor_dtype` for shape and type checking, otherwise, these are inferred. `tf.math.accumulate_n` performs the same operation as `tf.add_n`, but does not wait for all of its inputs to be ready before beginning to sum. This can save memory...
tensorflow/python/ops/math_ops.py
accumulate_n
minminsun/tensorflow
python
@tf_export('math.accumulate_n', v1=['math.accumulate_n', 'accumulate_n']) @deprecation.deprecated_endpoints('accumulate_n') def accumulate_n(inputs, shape=None, tensor_dtype=None, name=None): "Returns the element-wise sum of a list of tensors.\n\n Optionally, pass `shape` and `tensor_dtype` for shape and type chec...
@ops.RegisterGradient('AccumulateNV2') def _accumulate_n_grad(op, grad): 'Same as gradient for AddN. Copies the gradient to all inputs.' return ([grad] * len(op.inputs))
-6,715,794,146,925,564,000
Same as gradient for AddN. Copies the gradient to all inputs.
tensorflow/python/ops/math_ops.py
_accumulate_n_grad
minminsun/tensorflow
python
@ops.RegisterGradient('AccumulateNV2') def _accumulate_n_grad(op, grad): return ([grad] * len(op.inputs))
@tf_export('math.sigmoid', 'nn.sigmoid', 'sigmoid') def sigmoid(x, name=None): 'Computes sigmoid of `x` element-wise.\n\n Specifically, `y = 1 / (1 + exp(-x))`.\n\n Args:\n x: A Tensor with type `float16`, `float32`, `float64`, `complex64`,\n or `complex128`.\n name: A name for the operation (optional)...
-5,913,921,996,781,770,000
Computes sigmoid of `x` element-wise. Specifically, `y = 1 / (1 + exp(-x))`. Args: x: A Tensor with type `float16`, `float32`, `float64`, `complex64`, or `complex128`. name: A name for the operation (optional). Returns: A Tensor with the same type as `x`. @compatibility(scipy) Equivalent to scipy.special....
tensorflow/python/ops/math_ops.py
sigmoid
minminsun/tensorflow
python
@tf_export('math.sigmoid', 'nn.sigmoid', 'sigmoid') def sigmoid(x, name=None): 'Computes sigmoid of `x` element-wise.\n\n Specifically, `y = 1 / (1 + exp(-x))`.\n\n Args:\n x: A Tensor with type `float16`, `float32`, `float64`, `complex64`,\n or `complex128`.\n name: A name for the operation (optional)...
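The formula in the sigmoid docstring, `y = 1 / (1 + exp(-x))`, sketched directly in Python:

```python
import math

def sigmoid(x):
    # y = 1 / (1 + exp(-x)), exactly as the docstring states.
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))                  # 0.5
print(sigmoid(2.0) + sigmoid(-2.0))  # ~1.0: sigmoid(-x) == 1 - sigmoid(x)
```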
@tf_export('math.log_sigmoid', v1=['math.log_sigmoid', 'log_sigmoid']) @dispatch.add_dispatch_support @deprecation.deprecated_endpoints('log_sigmoid') def log_sigmoid(x, name=None): 'Computes log sigmoid of `x` element-wise.\n\n Specifically, `y = log(1 / (1 + exp(-x)))`. For numerical stability,\n we use `y = -...
2,684,433,138,987,555,000
Computes log sigmoid of `x` element-wise. Specifically, `y = log(1 / (1 + exp(-x)))`. For numerical stability, we use `y = -tf.nn.softplus(-x)`. Args: x: A Tensor with type `float32` or `float64`. name: A name for the operation (optional). Returns: A Tensor with the same type as `x`.
tensorflow/python/ops/math_ops.py
log_sigmoid
minminsun/tensorflow
python
@tf_export('math.log_sigmoid', v1=['math.log_sigmoid', 'log_sigmoid']) @dispatch.add_dispatch_support @deprecation.deprecated_endpoints('log_sigmoid') def log_sigmoid(x, name=None): 'Computes log sigmoid of `x` element-wise.\n\n Specifically, `y = log(1 / (1 + exp(-x)))`. For numerical stability,\n we use `y = -...
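The log_sigmoid docstring's stability note (`y = -softplus(-x)` rather than `log(1/(1+exp(-x)))`) can be sketched with a branch that keeps every exponent non-positive (illustrative; the real op delegates to `tf.nn.softplus`):

```python
import math

def log_sigmoid(x):
    # log(sigmoid(x)) == -softplus(-x), with softplus(z) = log1p(exp(z)).
    # Branching on the sign keeps exp() from over/underflowing.
    if x >= 0:
        return -math.log1p(math.exp(-x))
    return x - math.log1p(math.exp(x))

print(log_sigmoid(0.0))      # log(0.5) ~ -0.6931
print(log_sigmoid(-1000.0))  # ~ -1000.0; naive log(sigmoid(x)) underflows to -inf
```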
@tf_export('math.bincount', v1=[]) def bincount(arr, weights=None, minlength=None, maxlength=None, dtype=dtypes.int32, name=None): 'Counts the number of occurrences of each value in an integer array.\n\n If `minlength` and `maxlength` are not given, returns a vector with length\n `tf.reduce_max(arr) + 1` if `arr`...
2,066,914,436,022,351,400
Counts the number of occurrences of each value in an integer array. If `minlength` and `maxlength` are not given, returns a vector with length `tf.reduce_max(arr) + 1` if `arr` is non-empty, and length 0 otherwise. If `weights` are non-None, then index `i` of the output stores the sum of the value in `weights` at each...
tensorflow/python/ops/math_ops.py
bincount
minminsun/tensorflow
python
@tf_export('math.bincount', v1=[]) def bincount(arr, weights=None, minlength=None, maxlength=None, dtype=dtypes.int32, name=None): 'Counts the number of occurrences of each value in an integer array.\n\n If `minlength` and `maxlength` are not given, returns a vector with length\n `tf.reduce_max(arr) + 1` if `arr`...
@tf_export(v1=['math.bincount', 'bincount']) @deprecation.deprecated_endpoints('bincount') def bincount_v1(arr, weights=None, minlength=None, maxlength=None, dtype=dtypes.int32): 'Counts the number of occurrences of each value in an integer array.\n\n If `minlength` and `maxlength` are not given, returns a vector ...
1,028,371,735,247,038,600
Counts the number of occurrences of each value in an integer array. If `minlength` and `maxlength` are not given, returns a vector with length `tf.reduce_max(arr) + 1` if `arr` is non-empty, and length 0 otherwise. If `weights` are non-None, then index `i` of the output stores the sum of the value in `weights` at each...
tensorflow/python/ops/math_ops.py
bincount_v1
minminsun/tensorflow
python
@tf_export(v1=['math.bincount', 'bincount']) @deprecation.deprecated_endpoints('bincount') def bincount_v1(arr, weights=None, minlength=None, maxlength=None, dtype=dtypes.int32): 'Counts the number of occurrences of each value in an integer array.\n\n If `minlength` and `maxlength` are not given, returns a vector ...
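The bincount docstring's output-length rule (`max(arr) + 1` when non-empty, 0 otherwise, padded to `minlength`) and its `weights` behavior can be sketched in plain Python (illustrative only; the real op also supports `maxlength` and dtype selection):

```python
def bincount(arr, weights=None, minlength=0):
    # Output length is max(arr)+1 for non-empty input, else 0,
    # padded up to minlength; with weights, index i accumulates
    # the weights of entries equal to i instead of their count.
    length = max((max(arr) + 1) if arr else 0, minlength)
    out = [0] * length
    for i, v in enumerate(arr):
        out[v] += weights[i] if weights is not None else 1
    return out

print(bincount([1, 1, 2]))                         # [0, 2, 1]
print(bincount([], minlength=3))                   # [0, 0, 0]
print(bincount([1, 1, 2], weights=[0.5, 0.5, 3]))  # [0, 1.0, 3]
```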
@tf_export('math.cumsum', 'cumsum') def cumsum(x, axis=0, exclusive=False, reverse=False, name=None): 'Compute the cumulative sum of the tensor `x` along `axis`.\n\n By default, this op performs an inclusive cumsum, which means that the first\n element of the input is identical to the first element of the output:...
-5,548,327,466,838,296,000
Compute the cumulative sum of the tensor `x` along `axis`. By default, this op performs an inclusive cumsum, which means that the first element of the input is identical to the first element of the output: ```python tf.cumsum([a, b, c]) # [a, a + b, a + b + c] ``` By setting the `exclusive` kwarg to `True`, an excl...
tensorflow/python/ops/math_ops.py
cumsum
minminsun/tensorflow
python
@tf_export('math.cumsum', 'cumsum') def cumsum(x, axis=0, exclusive=False, reverse=False, name=None): 'Compute the cumulative sum of the tensor `x` along `axis`.\n\n By default, this op performs an inclusive cumsum, which means that the first\n element of the input is identical to the first element of the output:...
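The inclusive/exclusive/reverse variants documented for cumsum (`[a, a+b, a+b+c]` vs `[0, a, a+b]`, optionally scanned from the other end) can be sketched for 1-D input (illustrative only):

```python
def cumsum(x, exclusive=False, reverse=False):
    # Inclusive scan by default; exclusive shifts a zero in at the
    # start; reverse runs the scan from the other end of the input.
    seq = list(reversed(x)) if reverse else list(x)
    out, total = [], 0
    for v in seq:
        if exclusive:
            out.append(total)
        total += v
        if not exclusive:
            out.append(total)
    return list(reversed(out)) if reverse else out

print(cumsum([1, 2, 3]))                  # [1, 3, 6]
print(cumsum([1, 2, 3], exclusive=True))  # [0, 1, 3]
print(cumsum([1, 2, 3], reverse=True))    # [6, 5, 3]
```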
@tf_export('math.cumprod', v1=['math.cumprod', 'cumprod']) @deprecation.deprecated_endpoints('cumprod') def cumprod(x, axis=0, exclusive=False, reverse=False, name=None): 'Compute the cumulative product of the tensor `x` along `axis`.\n\n By default, this op performs an inclusive cumprod, which means that the\n f...
-7,603,428,310,472,397,000
Compute the cumulative product of the tensor `x` along `axis`. By default, this op performs an inclusive cumprod, which means that the first element of the input is identical to the first element of the output: ```python tf.math.cumprod([a, b, c]) # [a, a * b, a * b * c] ``` By setting the `exclusive` kwarg to `Tru...
tensorflow/python/ops/math_ops.py
cumprod
minminsun/tensorflow
python
@tf_export('math.cumprod', v1=['math.cumprod', 'cumprod']) @deprecation.deprecated_endpoints('cumprod') def cumprod(x, axis=0, exclusive=False, reverse=False, name=None): 'Compute the cumulative product of the tensor `x` along `axis`.\n\n By default, this op performs an inclusive cumprod, which means that the\n f...
@tf_export('math.conj', v1=['math.conj', 'conj']) @dispatch.add_dispatch_support @deprecation.deprecated_endpoints('conj') def conj(x, name=None): "Returns the complex conjugate of a complex number.\n\n Given a tensor `input` of complex numbers, this operation returns a tensor of\n complex numbers that are the co...
2,591,929,951,766,534,700
Returns the complex conjugate of a complex number. Given a tensor `input` of complex numbers, this operation returns a tensor of complex numbers that are the complex conjugate of each element in `input`. The complex numbers in `input` must be of the form \\(a + bj\\), where *a* is the real part and *b* is the imaginar...
tensorflow/python/ops/math_ops.py
conj
minminsun/tensorflow
python
@tf_export('math.conj', v1=['math.conj', 'conj']) @dispatch.add_dispatch_support @deprecation.deprecated_endpoints('conj') def conj(x, name=None): "Returns the complex conjugate of a complex number.\n\n Given a tensor `input` of complex numbers, this operation returns a tensor of\n complex numbers that are the co...
def _BroadcastShape(op): 'Common shape function for binary operators that broadcast their inputs.' return [common_shapes.broadcast_shape(op.inputs[0].get_shape(), op.inputs[1].get_shape())]
7,801,190,168,870,443,000
Common shape function for binary operators that broadcast their inputs.
tensorflow/python/ops/math_ops.py
_BroadcastShape
minminsun/tensorflow
python
def _BroadcastShape(op): return [common_shapes.broadcast_shape(op.inputs[0].get_shape(), op.inputs[1].get_shape())]
def reduced_shape(input_shape, axes): 'Helper function for reduction ops.\n\n Args:\n input_shape: 1-D Tensor, the shape of the Tensor being reduced.\n axes: 1-D Tensor, the reduction axes.\n Returns:\n A 1-D Tensor, the output shape as if keepdims were set to True.\n ' if context.executing_eagerly(...
-6,701,013,897,436,533,000
Helper function for reduction ops. Args: input_shape: 1-D Tensor, the shape of the Tensor being reduced. axes: 1-D Tensor, the reduction axes. Returns: A 1-D Tensor, the output shape as if keepdims were set to True.
tensorflow/python/ops/math_ops.py
reduced_shape
minminsun/tensorflow
python
def reduced_shape(input_shape, axes): 'Helper function for reduction ops.\n\n Args:\n input_shape: 1-D Tensor, the shape of the Tensor being reduced.\n axes: 1-D Tensor, the reduction axes.\n Returns:\n A 1-D Tensor, the output shape as if keepdims were set to True.\n ' if context.executing_eagerly(...
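The `reduced_shape` helper's contract ("the output shape as if keepdims were set to True") amounts to setting each reduced axis to 1; a shape-only sketch on Python lists (the real helper operates on shape tensors):

```python
def reduced_shape(input_shape, axes):
    # Each axis named in `axes` collapses to length 1; all other
    # dimensions pass through. Negative axes wrap, as in TensorFlow.
    axes = {a % len(input_shape) for a in axes}
    return [1 if i in axes else d for i, d in enumerate(input_shape)]

print(reduced_shape([2, 3, 4], [1]))   # [2, 1, 4]
print(reduced_shape([2, 3, 4], [-1]))  # [2, 3, 1]
```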
def _unsorted_segment_N(data, segment_ids, num_segments): ' Helper function for unsorted_segment_mean/_sqrtN. Computes the number\n of segment entries with 0-entries set to 1 to allow division by N.\n ' segment_ids_shape = array_ops.shape_internal(segment_ids) ones_tensor = array_ops.ones(segment_ids_...
1,557,372,864,016,889,000
Helper function for unsorted_segment_mean/_sqrtN. Computes the number of segment entries with 0-entries set to 1 to allow division by N.
tensorflow/python/ops/math_ops.py
_unsorted_segment_N
minminsun/tensorflow
python
def _unsorted_segment_N(data, segment_ids, num_segments): ' Helper function for unsorted_segment_mean/_sqrtN. Computes the number\n of segment entries with 0-entries set to 1 to allow division by N.\n ' segment_ids_shape = array_ops.shape_internal(segment_ids) ones_tensor = array_ops.ones(segment_ids_...
@tf_export('math.unsorted_segment_mean', v1=['math.unsorted_segment_mean', 'unsorted_segment_mean']) @deprecation.deprecated_endpoints('unsorted_segment_mean') @dispatch.add_dispatch_support def unsorted_segment_mean(data, segment_ids, num_segments, name=None): 'Computes the mean along segments of a tensor.\n\n Re...
3,424,219,195,254,692,000
Computes the mean along segments of a tensor. Read [the section on segmentation](https://tensorflow.org/api_guides/python/math_ops#segmentation) for an explanation of segments. This operator is similar to the unsorted segment sum operator found [here](../../../api_docs/python/math_ops.md#UnsortedSegmentSum). Instead ...
tensorflow/python/ops/math_ops.py
unsorted_segment_mean
minminsun/tensorflow
python
@tf_export('math.unsorted_segment_mean', v1=['math.unsorted_segment_mean', 'unsorted_segment_mean']) @deprecation.deprecated_endpoints('unsorted_segment_mean') @dispatch.add_dispatch_support def unsorted_segment_mean(data, segment_ids, num_segments, name=None): 'Computes the mean along segments of a tensor.\n\n Re...
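The segment-mean semantics above, including the `_unsorted_segment_N` trick of clamping empty-segment counts to 1 so division never hits zero, can be sketched for 1-D data (illustrative helper, not the TensorFlow op):

```python
def unsorted_segment_mean(data, segment_ids, num_segments):
    # Sum per segment, count entries per segment, then divide;
    # empty segments use a count of 1 (mirroring _unsorted_segment_N)
    # so they yield 0.0 instead of dividing by zero.
    sums = [0.0] * num_segments
    counts = [0] * num_segments
    for v, s in zip(data, segment_ids):
        sums[s] += v
        counts[s] += 1
    return [s / max(c, 1) for s, c in zip(sums, counts)]

print(unsorted_segment_mean([1.0, 2.0, 3.0, 4.0], [0, 0, 1, 1], 3))
# [1.5, 3.5, 0.0]  (segment 2 is empty)
```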
@tf_export('math.unsorted_segment_sqrt_n', v1=['math.unsorted_segment_sqrt_n', 'unsorted_segment_sqrt_n']) @deprecation.deprecated_endpoints('unsorted_segment_sqrt_n') @dispatch.add_dispatch_support def unsorted_segment_sqrt_n(data, segment_ids, num_segments, name=None): 'Computes the sum along segments of a tensor...
-8,094,678,367,791,078,000
Computes the sum along segments of a tensor divided by the sqrt(N). Read [the section on segmentation](https://tensorflow.org/api_guides/python/math_ops#segmentation) for an explanation of segments. This operator is similar to the unsorted segment sum operator found [here](../../../api_docs/python/math_ops.md#Unsorte...
tensorflow/python/ops/math_ops.py
unsorted_segment_sqrt_n
minminsun/tensorflow
python
@tf_export('math.unsorted_segment_sqrt_n', v1=['math.unsorted_segment_sqrt_n', 'unsorted_segment_sqrt_n']) @deprecation.deprecated_endpoints('unsorted_segment_sqrt_n') @dispatch.add_dispatch_support def unsorted_segment_sqrt_n(data, segment_ids, num_segments, name=None): 'Computes the sum along segments of a tensor...
@tf_export(v1=['sparse.segment_sum', 'sparse_segment_sum']) @deprecation.deprecated_endpoints('sparse_segment_sum') def sparse_segment_sum(data, indices, segment_ids, name=None, num_segments=None): "Computes the sum along sparse segments of a tensor.\n\n Read [the section on\n segmentation](https://tensorflow.org...
-8,370,364,508,005,443,000
Computes the sum along sparse segments of a tensor. Read [the section on segmentation](https://tensorflow.org/api_guides/python/math_ops#Segmentation) for an explanation of segments. Like `SegmentSum`, but `segment_ids` can have rank less than `data`'s first dimension, selecting a subset of dimension 0, specified by ...
tensorflow/python/ops/math_ops.py
sparse_segment_sum
minminsun/tensorflow
python
@tf_export(v1=['sparse.segment_sum', 'sparse_segment_sum']) @deprecation.deprecated_endpoints('sparse_segment_sum') def sparse_segment_sum(data, indices, segment_ids, name=None, num_segments=None): "Computes the sum along sparse segments of a tensor.\n\n Read [the section on\n segmentation](https://tensorflow.org...
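The sparse variant first gathers rows of `data` through `indices`, then sums per segment id. A minimal sketch of that two-step semantics in plain Python (the `_py` name is hypothetical):

```python
def sparse_segment_sum_py(data, indices, segment_ids, num_segments=None):
    # Gather data[indices[i]] for each position i, then add it into
    # the output row named by segment_ids[i].
    if num_segments is None:
        num_segments = (max(segment_ids) + 1) if segment_ids else 0
    out = [0.0] * num_segments
    for idx, seg in zip(indices, segment_ids):
        out[seg] += data[idx]
    return out
```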
@tf_export(v1=['sparse.segment_mean', 'sparse_segment_mean']) @deprecation.deprecated_endpoints('sparse_segment_mean') def sparse_segment_mean(data, indices, segment_ids, name=None, num_segments=None): "Computes the mean along sparse segments of a tensor.\n\n Read [the section on\n segmentation](https://tensorflo...
6,740,956,599,249,067,000
Computes the mean along sparse segments of a tensor. Read [the section on segmentation](https://tensorflow.org/api_guides/python/math_ops#Segmentation) for an explanation of segments. Like `SegmentMean`, but `segment_ids` can have rank less than `data`'s first dimension, selecting a subset of dimension 0, specified b...
tensorflow/python/ops/math_ops.py
sparse_segment_mean
minminsun/tensorflow
python
@tf_export(v1=['sparse.segment_mean', 'sparse_segment_mean']) @deprecation.deprecated_endpoints('sparse_segment_mean') def sparse_segment_mean(data, indices, segment_ids, name=None, num_segments=None): "Computes the mean along sparse segments of a tensor.\n\n Read [the section on\n segmentation](https://tensorflo...
@tf_export('sparse.segment_mean', v1=[]) def sparse_segment_mean_v2(data, indices, segment_ids, num_segments=None, name=None): "Computes the mean along sparse segments of a tensor.\n\n Read [the section on\n segmentation](https://tensorflow.org/api_guides/python/math_ops#Segmentation)\n for an explanation of seg...
-4,683,962,204,175,395,000
Computes the mean along sparse segments of a tensor. Read [the section on segmentation](https://tensorflow.org/api_guides/python/math_ops#Segmentation) for an explanation of segments. Like `SegmentMean`, but `segment_ids` can have rank less than `data`'s first dimension, selecting a subset of dimension 0, specified b...
tensorflow/python/ops/math_ops.py
sparse_segment_mean_v2
minminsun/tensorflow
python
@tf_export('sparse.segment_mean', v1=[]) def sparse_segment_mean_v2(data, indices, segment_ids, num_segments=None, name=None): "Computes the mean along sparse segments of a tensor.\n\n Read [the section on\n segmentation](https://tensorflow.org/api_guides/python/math_ops#Segmentation)\n for an explanation of seg...
@tf_export(v1=['sparse.segment_sqrt_n', 'sparse_segment_sqrt_n']) @deprecation.deprecated_endpoints('sparse_segment_sqrt_n') def sparse_segment_sqrt_n(data, indices, segment_ids, name=None, num_segments=None): 'Computes the sum along sparse segments of a tensor divided by the sqrt(N).\n\n `N` is the size of the se...
6,518,435,009,501,153,000
Computes the sum along sparse segments of a tensor divided by the sqrt(N). `N` is the size of the segment being reduced. Args: data: A `Tensor` with data that will be assembled in the output. indices: A 1-D `Tensor` with indices into `data`. Has same rank as `segment_ids`. segment_ids: A 1-D `Tensor` with i...
tensorflow/python/ops/math_ops.py
sparse_segment_sqrt_n
minminsun/tensorflow
python
@tf_export(v1=['sparse.segment_sqrt_n', 'sparse_segment_sqrt_n']) @deprecation.deprecated_endpoints('sparse_segment_sqrt_n') def sparse_segment_sqrt_n(data, indices, segment_ids, name=None, num_segments=None): 'Computes the sum along sparse segments of a tensor divided by the sqrt(N).\n\n `N` is the size of the se...
@tf_export('sparse.segment_sqrt_n', v1=[]) def sparse_segment_sqrt_n_v2(data, indices, segment_ids, num_segments=None, name=None): 'Computes the sum along sparse segments of a tensor divided by the sqrt(N).\n\n `N` is the size of the segment being reduced.\n\n Args:\n data: A `Tensor` with data that will be as...
-5,794,129,291,528,013,000
Computes the sum along sparse segments of a tensor divided by the sqrt(N). `N` is the size of the segment being reduced. Args: data: A `Tensor` with data that will be assembled in the output. indices: A 1-D `Tensor` with indices into `data`. Has same rank as `segment_ids`. segment_ids: A 1-D `Tensor` with i...
tensorflow/python/ops/math_ops.py
sparse_segment_sqrt_n_v2
minminsun/tensorflow
python
@tf_export('sparse.segment_sqrt_n', v1=[]) def sparse_segment_sqrt_n_v2(data, indices, segment_ids, num_segments=None, name=None): 'Computes the sum along sparse segments of a tensor divided by the sqrt(N).\n\n `N` is the size of the segment being reduced.\n\n Args:\n data: A `Tensor` with data that will be as...
@tf_export('tensordot', 'linalg.tensordot') def tensordot(a, b, axes, name=None): 'Tensor contraction of a and b along specified axes.\n\n Tensordot (also known as tensor contraction) sums the product of elements\n from `a` and `b` over the indices specified by `a_axes` and `b_axes`.\n The lists `a_axes` and `b_...
-722,363,931,524,994,300
Tensor contraction of a and b along specified axes. Tensordot (also known as tensor contraction) sums the product of elements from `a` and `b` over the indices specified by `a_axes` and `b_axes`. The lists `a_axes` and `b_axes` specify those pairs of axes along which to contract the tensors. The axis `a_axes[i]` of `a...
tensorflow/python/ops/math_ops.py
tensordot
minminsun/tensorflow
python
@tf_export('tensordot', 'linalg.tensordot') def tensordot(a, b, axes, name=None): 'Tensor contraction of a and b along specified axes.\n\n Tensordot (also known as tensor contraction) sums the product of elements\n from `a` and `b` over the indices specified by `a_axes` and `b_axes`.\n The lists `a_axes` and `b_...
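For 2-D inputs with `axes=1`, the contraction the docstring describes reduces to an ordinary matrix product: the last axis of `a` is summed against the first axis of `b`. A small pure-Python sketch of that special case (function name invented for illustration):

```python
def tensordot_axes1(a, b):
    # Contract the last axis of `a` with the first axis of `b`
    # (tensordot with axes=1); for matrices this is matmul.
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for k in range(inner):
            for j in range(cols):
                out[i][j] += a[i][k] * b[k][j]
    return out
```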
@tf_export('math.polyval') def polyval(coeffs, x, name=None): "Computes the elementwise value of a polynomial.\n\n If `x` is a tensor and `coeffs` is a list n + 1 tensors, this function returns\n the value of the n-th order polynomial\n\n p(x) = coeffs[n-1] + coeffs[n-2] * x + ... + coeffs[0] * x**(n-1)\n\n ...
-3,919,992,842,877,895,000
Computes the elementwise value of a polynomial. If `x` is a tensor and `coeffs` is a list of n + 1 tensors, this function returns the value of the n-th order polynomial p(x) = coeffs[n-1] + coeffs[n-2] * x + ... + coeffs[0] * x**(n-1) evaluated using Horner's method, i.e. p(x) = coeffs[n-1] + x * (coeffs[n-2] +...
tensorflow/python/ops/math_ops.py
polyval
minminsun/tensorflow
python
@tf_export('math.polyval') def polyval(coeffs, x, name=None): "Computes the elementwise value of a polynomial.\n\n If `x` is a tensor and `coeffs` is a list n + 1 tensors, this function returns\n the value of the n-th order polynomial\n\n p(x) = coeffs[n-1] + coeffs[n-2] * x + ... + coeffs[0] * x**(n-1)\n\n ...
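The docstring's Horner form — p(x) = coeffs[n-1] + x * (coeffs[n-2] + x * (...)) — can be written as a single left-to-right fold over the coefficients, highest order first. A scalar pure-Python sketch (the `_py` name is not from the source):

```python
def polyval_py(coeffs, x):
    # Horner's method: fold in one coefficient per step, so
    # p(x) = coeffs[0]*x**(n-1) + ... + coeffs[n-1] needs only
    # n-1 multiplies and n-1 adds.
    result = 0
    for c in coeffs:
        result = result * x + c
    return result
```

For example, `polyval_py([2, 0, 1], 3)` evaluates 2*3**2 + 0*3 + 1 = 19.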
def __init__(self, x, name): 'Construct DivideDelegateWithName.\n\n Args:\n x: Tensor to use as left operand in operator overloads\n name: The name that is preferred for the op created.\n ' self.x = x self.name = name
5,227,219,078,160,836,000
Construct DivideDelegateWithName. Args: x: Tensor to use as left operand in operator overloads name: The name that is preferred for the op created.
tensorflow/python/ops/math_ops.py
__init__
minminsun/tensorflow
python
def __init__(self, x, name): 'Construct DivideDelegateWithName.\n\n Args:\n x: Tensor to use as left operand in operator overloads\n name: The name that is preferred for the op created.\n ' self.x = x self.name = name
def _tensordot_reshape(a, axes, flipped=False): 'Helper method to perform transpose and reshape for contraction op.\n\n This method is helpful in reducing `math_ops.tensordot` to `math_ops.matmul`\n using `array_ops.transpose` and `array_ops.reshape`. The method takes a\n tensor and performs the correct tr...
-1,413,959,816,581,312,000
Helper method to perform transpose and reshape for contraction op. This method is helpful in reducing `math_ops.tensordot` to `math_ops.matmul` using `array_ops.transpose` and `array_ops.reshape`. The method takes a tensor and performs the correct transpose and reshape operation for a given set of indices. It returns ...
tensorflow/python/ops/math_ops.py
_tensordot_reshape
minminsun/tensorflow
python
def _tensordot_reshape(a, axes, flipped=False): 'Helper method to perform transpose and reshape for contraction op.\n\n This method is helpful in reducing `math_ops.tensordot` to `math_ops.matmul`\n using `array_ops.transpose` and `array_ops.reshape`. The method takes a\n tensor and performs the correct tr...
def _tensordot_axes(a, axes): 'Generates two sets of contraction axes for the two tensor arguments.' a_shape = a.get_shape() if isinstance(axes, compat.integral_types): if (axes < 0): raise ValueError("'axes' must be at least 0.") if (a_shape.ndims is not None): if (a...
7,019,879,053,513,074,000
Generates two sets of contraction axes for the two tensor arguments.
tensorflow/python/ops/math_ops.py
_tensordot_axes
minminsun/tensorflow
python
def _tensordot_axes(a, axes): a_shape = a.get_shape() if isinstance(axes, compat.integral_types): if (axes < 0): raise ValueError("'axes' must be at least 0.") if (a_shape.ndims is not None): if (axes > a_shape.ndims): raise ValueError(("'axes' must n...
def robust_mean_mixture(x): "Compute the mean via a mixture of two Gaussians\n\n One Gaussian accounts for outliers, and one Gaussian accounts for\n the true distribution. This cannot be computed analytically, so\n it uses scipy's function optimization\n " if (len(x) == 1): return x x ...
-6,581,423,013,018,028,000
Compute the mean via a mixture of two Gaussians One Gaussian accounts for outliers, and one Gaussian accounts for the true distribution. This cannot be computed analytically, so it uses scipy's function optimization
book_figures/chapter3/fig_cauchy_median_mean.py
robust_mean_mixture
larsmans/astroML
python
def robust_mean_mixture(x): "Compute the mean via a mixture of two Gaussians\n\n One Gaussian accounts for outliers, and one Gaussian accounts for\n the true distribution. This cannot be computed analytically, so\n it uses scipy's function optimization\n " if (len(x) == 1): return x x ...
def robust_mean_iterated(x, sigma_cut=3): 'Compute the robust mean iteratively\n\n After computing the mean, points further than 3 sigma from the mean\n are removed and the result is repeated until convergence.\n ' flag = np.ones(x.shape, dtype=bool) n_to_keep = x.size while True: xf = ...
8,003,899,538,360,857,000
Compute the robust mean iteratively After computing the mean, points further than 3 sigma from the mean are removed and the result is repeated until convergence.
book_figures/chapter3/fig_cauchy_median_mean.py
robust_mean_iterated
larsmans/astroML
python
def robust_mean_iterated(x, sigma_cut=3): 'Compute the robust mean iteratively\n\n After computing the mean, points further than 3 sigma from the mean\n are removed and the result is repeated until convergence.\n ' flag = np.ones(x.shape, dtype=bool) n_to_keep = x.size while True: xf = ...
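The iterated robust mean above is plain sigma clipping: compute mean and standard deviation, drop points beyond the cut, repeat until nothing changes. A stdlib-only sketch of that loop (using `statistics` in place of numpy; the `_py` name is invented):

```python
import statistics

def robust_mean_iterated_py(x, sigma_cut=3):
    # Iteratively remove points further than sigma_cut standard
    # deviations from the mean, until the kept set is stable.
    data = list(x)
    while True:
        mu = statistics.mean(data)
        sigma = statistics.pstdev(data)
        kept = [v for v in data if abs(v - mu) <= sigma_cut * sigma]
        if len(kept) == len(data):
            return mu
        data = kept
```

With a tight cut, a single gross outlier among many identical points is clipped on the first pass.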
def hex_str(an_int): 'Converts an int to an hexadecimal string\n ' return '{0:#x}'.format(an_int)
7,558,796,261,362,834,000
Converts an int to an hexadecimal string
Sklearn_scipy_numpy/source/sklearn/externals/joblib/numpy_pickle.py
hex_str
Con-Mi/lambda-packs
python
def hex_str(an_int): '\n ' return '{0:#x}'.format(an_int)
def _read_magic(file_handle): ' Utility to check the magic signature of a file identifying it as a\n Zfile\n ' magic = file_handle.read(len(_ZFILE_PREFIX)) file_handle.seek(0) return magic
-8,760,349,105,058,396,000
Utility to check the magic signature of a file identifying it as a Zfile
Sklearn_scipy_numpy/source/sklearn/externals/joblib/numpy_pickle.py
_read_magic
Con-Mi/lambda-packs
python
def _read_magic(file_handle): ' Utility to check the magic signature of a file identifying it as a\n Zfile\n ' magic = file_handle.read(len(_ZFILE_PREFIX)) file_handle.seek(0) return magic
def read_zfile(file_handle): 'Read the z-file and return the content as a string\n\n Z-files are raw data compressed with zlib used internally by joblib\n for persistence. Backward compatibility is not guaranteed. Do not\n use for external purposes.\n ' file_handle.seek(0) assert (_read_magic(fi...
5,326,646,524,738,486,000
Read the z-file and return the content as a string Z-files are raw data compressed with zlib used internally by joblib for persistence. Backward compatibility is not guaranteed. Do not use for external purposes.
Sklearn_scipy_numpy/source/sklearn/externals/joblib/numpy_pickle.py
read_zfile
Con-Mi/lambda-packs
python
def read_zfile(file_handle): 'Read the z-file and return the content as a string\n\n Z-files are raw data compressed with zlib used internally by joblib\n for persistence. Backward compatibility is not guaranteed. Do not\n use for external purposes.\n ' file_handle.seek(0) assert (_read_magic(fi...
def write_zfile(file_handle, data, compress=1): 'Write the data in the given file as a Z-file.\n\n Z-files are raw data compressed with zlib used internally by joblib\n for persistence. Backward compatibility is not guarantied. Do not\n use for external purposes.\n ' file_handle.write(_ZFILE_PREFIX)...
3,700,030,578,405,151,000
Write the data in the given file as a Z-file. Z-files are raw data compressed with zlib used internally by joblib for persistence. Backward compatibility is not guaranteed. Do not use for external purposes.
Sklearn_scipy_numpy/source/sklearn/externals/joblib/numpy_pickle.py
write_zfile
Con-Mi/lambda-packs
python
def write_zfile(file_handle, data, compress=1): 'Write the data in the given file as a Z-file.\n\n Z-files are raw data compressed with zlib used internally by joblib\n for persistence. Backward compatibility is not guarantied. Do not\n use for external purposes.\n ' file_handle.write(_ZFILE_PREFIX)...
def dump(value, filename, compress=0, cache_size=100, protocol=None): 'Fast persistence of an arbitrary Python object into one or multiple\n files, with dedicated storage for numpy arrays.\n\n Parameters\n -----------\n value: any Python object\n The object to store to disk\n filename: string\...
3,711,028,077,857,704,000
Fast persistence of an arbitrary Python object into one or multiple files, with dedicated storage for numpy arrays. Parameters ----------- value: any Python object The object to store to disk filename: string The name of the file in which it is to be stored compress: integer for 0 to 9, optional Optional c...
Sklearn_scipy_numpy/source/sklearn/externals/joblib/numpy_pickle.py
dump
Con-Mi/lambda-packs
python
def dump(value, filename, compress=0, cache_size=100, protocol=None): 'Fast persistence of an arbitrary Python object into one or multiple\n files, with dedicated storage for numpy arrays.\n\n Parameters\n -----------\n value: any Python object\n The object to store to disk\n filename: string\...
def load(filename, mmap_mode=None): "Reconstruct a Python object from a file persisted with joblib.dump.\n\n Parameters\n -----------\n filename: string\n The name of the file from which to load the object\n mmap_mode: {None, 'r+', 'r', 'w+', 'c'}, optional\n If not None, the arrays are me...
-3,146,841,016,032,771,000
Reconstruct a Python object from a file persisted with joblib.dump. Parameters ----------- filename: string The name of the file from which to load the object mmap_mode: {None, 'r+', 'r', 'w+', 'c'}, optional If not None, the arrays are memory-mapped from the disk. This mode has no effect for compressed fi...
Sklearn_scipy_numpy/source/sklearn/externals/joblib/numpy_pickle.py
load
Con-Mi/lambda-packs
python
def load(filename, mmap_mode=None): "Reconstruct a Python object from a file persisted with joblib.dump.\n\n Parameters\n -----------\n filename: string\n The name of the file from which to load the object\n mmap_mode: {None, 'r+', 'r', 'w+', 'c'}, optional\n If not None, the arrays are me...
def __init__(self, filename, subclass, allow_mmap=True): 'Store the useful information for later' self.filename = filename self.subclass = subclass self.allow_mmap = allow_mmap
2,602,862,781,666,437,600
Store the useful information for later
Sklearn_scipy_numpy/source/sklearn/externals/joblib/numpy_pickle.py
__init__
Con-Mi/lambda-packs
python
def __init__(self, filename, subclass, allow_mmap=True): self.filename = filename self.subclass = subclass self.allow_mmap = allow_mmap
def read(self, unpickler): 'Reconstruct the array' filename = os.path.join(unpickler._dirname, self.filename) np_ver = [int(x) for x in unpickler.np.__version__.split('.', 2)[:2]] allow_mmap = getattr(self, 'allow_mmap', True) memmap_kwargs = ({} if (not allow_mmap) else {'mmap_mode': unpickler.mmap...
-6,502,246,441,225,244,000
Reconstruct the array
Sklearn_scipy_numpy/source/sklearn/externals/joblib/numpy_pickle.py
read
Con-Mi/lambda-packs
python
def read(self, unpickler): filename = os.path.join(unpickler._dirname, self.filename) np_ver = [int(x) for x in unpickler.np.__version__.split('.', 2)[:2]] allow_mmap = getattr(self, 'allow_mmap', True) memmap_kwargs = ({} if (not allow_mmap) else {'mmap_mode': unpickler.mmap_mode}) array = unp...
def __init__(self, filename, init_args, state): 'Store the useful information for later' self.filename = filename self.state = state self.init_args = init_args
314,920,583,705,363,460
Store the useful information for later
Sklearn_scipy_numpy/source/sklearn/externals/joblib/numpy_pickle.py
__init__
Con-Mi/lambda-packs
python
def __init__(self, filename, init_args, state): self.filename = filename self.state = state self.init_args = init_args
def read(self, unpickler): 'Reconstruct the array from the meta-information and the z-file' filename = os.path.join(unpickler._dirname, self.filename) array = unpickler.np.core.multiarray._reconstruct(*self.init_args) with open(filename, 'rb') as f: data = read_zfile(f) state = (self.state +...
-2,455,838,817,075,382,300
Reconstruct the array from the meta-information and the z-file
Sklearn_scipy_numpy/source/sklearn/externals/joblib/numpy_pickle.py
read
Con-Mi/lambda-packs
python
def read(self, unpickler): filename = os.path.join(unpickler._dirname, self.filename) array = unpickler.np.core.multiarray._reconstruct(*self.init_args) with open(filename, 'rb') as f: data = read_zfile(f) state = (self.state + (data,)) array.__setstate__(state) return array
def save(self, obj): ' Subclass the save method, to save ndarray subclasses in npy\n files, rather than pickling them. Of course, this is a\n total abuse of the Pickler class.\n ' if ((self.np is not None) and (type(obj) in (self.np.ndarray, self.np.matrix, self.np.memmap))): ...
5,880,806,143,418,513,000
Subclass the save method, to save ndarray subclasses in npy files, rather than pickling them. Of course, this is a total abuse of the Pickler class.
Sklearn_scipy_numpy/source/sklearn/externals/joblib/numpy_pickle.py
save
Con-Mi/lambda-packs
python
def save(self, obj): ' Subclass the save method, to save ndarray subclasses in npy\n files, rather than pickling them. Of course, this is a\n total abuse of the Pickler class.\n ' if ((self.np is not None) and (type(obj) in (self.np.ndarray, self.np.matrix, self.np.memmap))): ...
def load_build(self): ' This method is called to set the state of a newly created\n object.\n\n We capture it to replace our place-holder objects,\n NDArrayWrapper, by the array we are interested in. We\n replace them directly in the stack of pickler.\n ' Unpic...
-5,773,961,076,715,824,000
This method is called to set the state of a newly created object. We capture it to replace our place-holder objects, NDArrayWrapper, by the array we are interested in. We replace them directly in the stack of pickler.
Sklearn_scipy_numpy/source/sklearn/externals/joblib/numpy_pickle.py
load_build
Con-Mi/lambda-packs
python
def load_build(self): ' This method is called to set the state of a newly created\n object.\n\n We capture it to replace our place-holder objects,\n NDArrayWrapper, by the array we are interested in. We\n replace them directly in the stack of pickler.\n ' Unpic...
def infer_language_pair(path): 'Infer language pair from filename: <split>.<lang1>-<lang2>.(...).idx' (src, dst) = (None, None) for filename in PathManager.ls(path): parts = filename.split('.') if ((len(parts) >= 3) and (len(parts[1].split('-')) == 2)): return parts[1].split('-')...
1,323,945,075,611,131,000
Infer language pair from filename: <split>.<lang1>-<lang2>.(...).idx
fairseq/data/data_utils.py
infer_language_pair
1130310223/fairseq
python
def infer_language_pair(path): (src, dst) = (None, None) for filename in PathManager.ls(path): parts = filename.split('.') if ((len(parts) >= 3) and (len(parts[1].split('-')) == 2)): return parts[1].split('-') return (src, dst)
def collate_tokens(values, pad_idx, eos_idx=None, left_pad=False, move_eos_to_beginning=False, pad_to_length=None, pad_to_multiple=1, pad_to_bsz=None): 'Convert a list of 1d tensors into a padded 2d tensor.' size = max((v.size(0) for v in values)) size = (size if (pad_to_length is None) else max(size, pad_t...
-6,598,988,593,867,076,000
Convert a list of 1d tensors into a padded 2d tensor.
fairseq/data/data_utils.py
collate_tokens
1130310223/fairseq
python
def collate_tokens(values, pad_idx, eos_idx=None, left_pad=False, move_eos_to_beginning=False, pad_to_length=None, pad_to_multiple=1, pad_to_bsz=None): size = max((v.size(0) for v in values)) size = (size if (pad_to_length is None) else max(size, pad_to_length)) if ((pad_to_multiple != 1) and ((size % ...
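The core of `collate_tokens` — before the eos/multiple refinements — is padding variable-length sequences into a rectangle. A list-based sketch covering just `pad_idx` and `left_pad` (the `_py` name is invented, tensors replaced by lists):

```python
def collate_tokens_py(values, pad_idx, left_pad=False):
    # Pad a list of token lists into a (batch x max_len) rectangle,
    # filling short rows with pad_idx on the chosen side.
    size = max(len(v) for v in values)
    out = []
    for v in values:
        padding = [pad_idx] * (size - len(v))
        out.append(padding + list(v) if left_pad else list(v) + padding)
    return out
```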
def load_indexed_dataset(path, dictionary=None, dataset_impl=None, combine=False, default='cached'): "A helper function for loading indexed datasets.\n\n Args:\n path (str): path to indexed dataset (e.g., 'data-bin/train')\n dictionary (~fairseq.data.Dictionary): data dictionary\n dataset_im...
-5,094,570,305,231,014,000
A helper function for loading indexed datasets. Args: path (str): path to indexed dataset (e.g., 'data-bin/train') dictionary (~fairseq.data.Dictionary): data dictionary dataset_impl (str, optional): which dataset implementation to use. If not provided, it will be inferred automatically. For legacy...
fairseq/data/data_utils.py
load_indexed_dataset
1130310223/fairseq
python
def load_indexed_dataset(path, dictionary=None, dataset_impl=None, combine=False, default='cached'): "A helper function for loading indexed datasets.\n\n Args:\n path (str): path to indexed dataset (e.g., 'data-bin/train')\n dictionary (~fairseq.data.Dictionary): data dictionary\n dataset_im...
@contextlib.contextmanager def numpy_seed(seed, *addl_seeds): 'Context manager which seeds the NumPy PRNG with the specified seed and\n restores the state afterward' if (seed is None): (yield) return if (len(addl_seeds) > 0): seed = int((hash((seed, *addl_seeds)) % 1000000.0)) ...
3,241,145,856,189,170,000
Context manager which seeds the NumPy PRNG with the specified seed and restores the state afterward
fairseq/data/data_utils.py
numpy_seed
1130310223/fairseq
python
@contextlib.contextmanager def numpy_seed(seed, *addl_seeds): 'Context manager which seeds the NumPy PRNG with the specified seed and\n restores the state afterward' if (seed is None): (yield) return if (len(addl_seeds) > 0): seed = int((hash((seed, *addl_seeds)) % 1000000.0)) ...
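The `numpy_seed` pattern — save PRNG state, seed, yield, restore — applies to any PRNG. A sketch of the same context-manager shape using the stdlib `random` module instead of NumPy (the name `seeded` is invented):

```python
import contextlib
import random

@contextlib.contextmanager
def seeded(seed, *addl_seeds):
    # Seed the PRNG for the duration of the block, then restore
    # the previous state so outer code is unaffected.
    if seed is None:
        yield
        return
    if addl_seeds:
        # Fold extra seeds in, as the original does via hash().
        seed = int(hash((seed,) + addl_seeds) % int(1e6))
    state = random.getstate()
    try:
        random.seed(seed)
        yield
    finally:
        random.setstate(state)
```

Two blocks entered with the same seed draw identical values, and the outer stream resumes where it left off.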
def collect_filtered(function, iterable, filtered): '\n Similar to :func:`filter` but collects filtered elements in ``filtered``.\n\n Args:\n function (callable): function that returns ``False`` for elements that\n should be filtered\n iterable (iterable): iterable to filter\n ...
6,198,038,143,385,185,000
Similar to :func:`filter` but collects filtered elements in ``filtered``. Args: function (callable): function that returns ``False`` for elements that should be filtered iterable (iterable): iterable to filter filtered (list): list to store filtered elements
fairseq/data/data_utils.py
collect_filtered
1130310223/fairseq
python
def collect_filtered(function, iterable, filtered): '\n Similar to :func:`filter` but collects filtered elements in ``filtered``.\n\n Args:\n function (callable): function that returns ``False`` for elements that\n should be filtered\n iterable (iterable): iterable to filter\n ...
def filter_by_size(indices, dataset, max_positions, raise_exception=False): '\n [deprecated] Filter indices based on their size.\n Use `FairseqDataset::filter_indices_by_size` instead.\n\n Args:\n indices (List[int]): ordered list of dataset indices\n dataset (FairseqDataset): fairseq dataset...
217,252,323,563,451,330
[deprecated] Filter indices based on their size. Use `FairseqDataset::filter_indices_by_size` instead. Args: indices (List[int]): ordered list of dataset indices dataset (FairseqDataset): fairseq dataset instance max_positions (tuple): filter elements larger than this size. Comparisons are done com...
fairseq/data/data_utils.py
filter_by_size
1130310223/fairseq
python
def filter_by_size(indices, dataset, max_positions, raise_exception=False): '\n [deprecated] Filter indices based on their size.\n Use `FairseqDataset::filter_indices_by_size` instead.\n\n Args:\n indices (List[int]): ordered list of dataset indices\n dataset (FairseqDataset): fairseq dataset...
def filter_paired_dataset_indices_by_size(src_sizes, tgt_sizes, indices, max_sizes): 'Filter a list of sample indices. Remove those that are longer\n than specified in max_sizes.\n\n Args:\n indices (np.array): original array of sample indices\n max_sizes (int or list[int] or tuple[int]): ma...
5,178,672,285,530,145,000
Filter a list of sample indices. Remove those that are longer than specified in max_sizes. Args: indices (np.array): original array of sample indices max_sizes (int or list[int] or tuple[int]): max sample size, can be defined separately for src and tgt (then list or tuple) Returns: np.array: f...
fairseq/data/data_utils.py
filter_paired_dataset_indices_by_size
1130310223/fairseq
python
def filter_paired_dataset_indices_by_size(src_sizes, tgt_sizes, indices, max_sizes): 'Filter a list of sample indices. Remove those that are longer\n than specified in max_sizes.\n\n Args:\n indices (np.array): original array of sample indices\n max_sizes (int or list[int] or tuple[int]): ma...
def batch_by_size(indices, num_tokens_fn, num_tokens_vec=None, max_tokens=None, max_sentences=None, required_batch_size_multiple=1, fixed_shapes=None): '\n Yield mini-batches of indices bucketed by size. Batches may contain\n sequences of different lengths.\n\n Args:\n indices (List[int]): ordered l...
-8,882,598,935,316,831,000
Yield mini-batches of indices bucketed by size. Batches may contain sequences of different lengths. Args: indices (List[int]): ordered list of dataset indices num_tokens_fn (callable): function that returns the number of tokens at a given index num_tokens_vec (List[int], optional): precomputed vect...
fairseq/data/data_utils.py
batch_by_size
1130310223/fairseq
python
def batch_by_size(indices, num_tokens_fn, num_tokens_vec=None, max_tokens=None, max_sentences=None, required_batch_size_multiple=1, fixed_shapes=None): '\n Yield mini-batches of indices bucketed by size. Batches may contain\n sequences of different lengths.\n\n Args:\n indices (List[int]): ordered l...
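Stripped of the vectorized and fixed-shape paths, `batch_by_size` is a greedy grouping: close the current batch whenever adding the next index would exceed the token or sentence budget. A simplified sketch of that core loop (the `_py` name is invented, and this omits the real op's padding-aware token accounting):

```python
def batch_by_size_py(indices, num_tokens_fn, max_tokens=None, max_sentences=None):
    # Greedily pack indices into batches, closing a batch when the
    # next item would overflow max_tokens or max_sentences.
    batches, batch, batch_tokens = [], [], 0
    for idx in indices:
        n = num_tokens_fn(idx)
        over_tokens = max_tokens is not None and batch_tokens + n > max_tokens
        over_sents = max_sentences is not None and len(batch) >= max_sentences
        if batch and (over_tokens or over_sents):
            batches.append(batch)
            batch, batch_tokens = [], 0
        batch.append(idx)
        batch_tokens += n
    if batch:
        batches.append(batch)
    return batches
```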
def compute_mask_indices(shape: Tuple[(int, int)], padding_mask: Optional[torch.Tensor], mask_prob: float, mask_length: int, mask_type: str='static', mask_other: float=0.0, min_masks: int=0, no_overlap: bool=False, min_space: int=0) -> np.ndarray: '\n Computes random mask spans for a given shape\n\n Args:\n ...
-2,985,380,715,055,749,600
Computes random mask spans for a given shape Args: shape: the shape for which to compute masks. should be of size 2 where first element is batch size and 2nd is timesteps padding_mask: optional padding mask of the same size as shape, which will prevent masking padded elements mask_prob: probabi...
fairseq/data/data_utils.py
compute_mask_indices
1130310223/fairseq
python
def compute_mask_indices(shape: Tuple[(int, int)], padding_mask: Optional[torch.Tensor], mask_prob: float, mask_length: int, mask_type: str='static', mask_other: float=0.0, min_masks: int=0, no_overlap: bool=False, min_space: int=0) -> np.ndarray: '\n Computes random mask spans for a given shape\n\n Args:\n ...
def raise_if_valid_subsets_unintentionally_ignored(train_cfg) -> None: "Raises if there are paths matching 'valid*[0-9].*' which are not combined or ignored." if (train_cfg.dataset.ignore_unused_valid_subsets or train_cfg.dataset.combine_valid_subsets or train_cfg.dataset.disable_validation or (not hasattr(trai...
560,671,202,644,717,800
Raises if there are paths matching 'valid*[0-9].*' which are not combined or ignored.
fairseq/data/data_utils.py
raise_if_valid_subsets_unintentionally_ignored
1130310223/fairseq
python
def raise_if_valid_subsets_unintentionally_ignored(train_cfg) -> None: if (train_cfg.dataset.ignore_unused_valid_subsets or train_cfg.dataset.combine_valid_subsets or train_cfg.dataset.disable_validation or (not hasattr(train_cfg.task, 'data'))): return other_paths = _find_extra_valid_paths(train_c...
@commands.command(name='xkcd', brief='send xkcd comic') async def xkcd(self, ctx, args=None): '\n send xkcd comic\n *xkcd -> sends newest comic\n *xkcd random -> sends random comic\n *xkcd [number] -> sends a specific comic\n ' url = None if (not args): url = 'http...
5,095,829,258,922,094,000
send xkcd comic *xkcd -> sends newest comic *xkcd random -> sends random comic *xkcd [number] -> sends a specific comic
extensions/api.py
xkcd
JoseFilipeFerreira/JBB.py
python
@commands.command(name='xkcd', brief='send xkcd comic') async def xkcd(self, ctx, args=None): '\n send xkcd comic\n *xkcd -> sends newest comic\n *xkcd random -> sends random comic\n *xkcd [number] -> sends a specific comic\n ' url = None if (not args): url = 'http...