| doc_content | doc_id |
|---|---|
torch.quantization.prepare_qat(model, mapping=None, inplace=False) [source]
Prepares a copy of the model for quantization calibration or quantization-aware training and converts it to a quantized version. Quantization configuration should be assigned preemptively to individual submodules in the .qconfig attribute. Paramet... | torch.quantization#torch.quantization.prepare_qat
torch.quantization.propagate_qconfig_(module, qconfig_dict=None, allow_list=None) [source]
Propagates qconfig through the module hierarchy and assigns the qconfig attribute on each leaf module. Parameters
module — input module
qconfig_dict — dictionary that maps from name or type of submodule to quantization configurat... | torch.quantization#torch.quantization.propagate_qconfig_
class torch.quantization.QConfig [source]
Describes how to quantize a layer or a part of the network by providing settings (observer classes) for activations and weights respectively. Note that QConfig needs to contain observer classes (like MinMaxObserver) or a callable that returns instances on invocation, not the ... | torch.quantization#torch.quantization.QConfig |
class torch.quantization.QConfigDynamic [source]
Describes how to dynamically quantize a layer or a part of the network by providing settings (observer classes) for weights. It's like QConfig, but for dynamic quantization. Note that QConfigDynamic needs to contain observer classes (like MinMaxObserver) or a callable ... | torch.quantization#torch.quantization.QConfigDynamic
torch.quantization.quantize(model, run_fn, run_args, mapping=None, inplace=False) [source]
Quantize the input float model with post training static quantization. First it will prepare the model for calibration, then it calls run_fn which will run the calibration step, after that we will convert the model to a quantiz... | torch.quantization#torch.quantization.quantize |
torch.quantization.quantize_dynamic(model, qconfig_spec=None, dtype=torch.qint8, mapping=None, inplace=False) [source]
Converts a float model to a dynamic (i.e. weights-only) quantized model. Replaces specified modules with dynamic weight-only quantized versions and outputs the quantized model. For simplest usage provid... | torch.quantization#torch.quantization.quantize_dynamic
torch.quantization.quantize_qat(model, run_fn, run_args, inplace=False) [source]
Does quantization-aware training and outputs a quantized model. Parameters
model — input model
run_fn — a function for evaluating the prepared model, can be a function that simply runs the prepared model or a training loop
run_args — p... | torch.quantization#torch.quantization.quantize_qat
class torch.quantization.QuantStub(qconfig=None) [source]
Quantize stub module. Before calibration, this is the same as an observer; it will be swapped to nnq.Quantize in convert. Parameters
qconfig — quantization configuration for the tensor; if qconfig is not provided, we will get qconfig from the parent modules | torch.quantization#torch.quantization.QuantStub
class torch.quantization.QuantWrapper(module) [source]
A wrapper class that wraps the input module, adds QuantStub and DeQuantStub, and surrounds the call to the module with calls to the quant and dequant modules. This is used by the quantization utility functions to add the quant and dequant modules before the convert function Qu... | torch.quantization#torch.quantization.QuantWrapper
class torch.quantization.RecordingObserver(**kwargs) [source]
This module is mainly for debugging; it records the tensor values during runtime. Parameters
dtype — Quantized data type
qscheme — Quantization scheme to be used
reduce_range — Reduces the range of the quantized data type by 1 bit | torch.quantization#torch.quantization.RecordingObserver
torch.quantization.swap_module(mod, mapping, custom_module_class_mapping) [source]
Swaps the module if it has a quantized counterpart and it has an observer attached. Parameters
mod — input module
mapping — a dictionary that maps from nn module to nnq module Returns
The corresponding quantized module of mod | torch.quantization#torch.quantization.swap_module |
torch.quantize_per_channel(input, scales, zero_points, axis, dtype) → Tensor
Converts a float tensor to a per-channel quantized tensor with given scales and zero points. Parameters
input (Tensor) — float tensor to quantize
scales (Tensor) — float 1D tensor of scales to use, size should match input.size(axis)
z... | torch.generated.torch.quantize_per_channel#torch.quantize_per_channel |
torch.quantize_per_tensor(input, scale, zero_point, dtype) → Tensor
Converts a float tensor to a quantized tensor with given scale and zero point. Parameters
input (Tensor) — float tensor to quantize
scale (float) — scale to apply in quantization formula
zero_point (int) — offset in integer value that maps to f... | torch.generated.torch.quantize_per_tensor#torch.quantize_per_tensor
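The affine formula behind quantize_per_tensor can be sketched in plain Python (an illustrative helper, not the torch API; the -128..127 clamp bounds assume dtype=torch.qint8):

```python
def quantize_per_tensor(xs, scale, zero_point, qmin=-128, qmax=127):
    # q = clamp(round(x / scale) + zero_point, qmin, qmax)
    return [max(qmin, min(qmax, round(x / scale) + zero_point)) for x in xs]

q = quantize_per_tensor([0.0, 0.5, 1.0], scale=0.1, zero_point=10)
```

Dequantization reverses this as (q - zero_point) * scale, up to rounding error.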
class torch.quasirandom.SobolEngine(dimension, scramble=False, seed=None) [source]
The torch.quasirandom.SobolEngine is an engine for generating (scrambled) Sobol sequences. Sobol sequences are an example of low discrepancy quasi-random sequences. This implementation of an engine for Sobol sequences is capable of sam... | torch.generated.torch.quasirandom.sobolengine#torch.quasirandom.SobolEngine |
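SobolEngine itself relies on torch's precomputed direction numbers, but the flavor of a low-discrepancy sequence can be illustrated with the base-2 radical inverse (van der Corput) sequence, the one-dimensional building block of such constructions (a plain-Python sketch, not torch code):

```python
def van_der_corput(n: int) -> float:
    # Reflect the base-2 digits of n about the binary point:
    # 1 -> 0.5, 2 -> 0.25, 3 -> 0.75, 4 -> 0.125, ...
    x, denom = 0.0, 1.0
    while n:
        denom *= 2.0
        n, bit = divmod(n, 2)
        x += bit / denom
    return x

points = [van_der_corput(i) for i in range(1, 8)]
```

Successive points fill [0, 1) far more evenly than i.i.d. uniform draws, which is the property quasi-Monte Carlo methods exploit.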
draw(n=1, out=None, dtype=torch.float32) [source]
Function to draw a sequence of n points from a Sobol sequence. Note that the samples are dependent on the previous samples. The size of the result is (n, dimension). Parameters
n (Int, optional) — The length of sequence of points to draw. Default: 1 ... | torch.generated.torch.quasirandom.sobolengine#torch.quasirandom.SobolEngine.draw
draw_base2(m, out=None, dtype=torch.float32) [source]
Function to draw a sequence of 2**m points from a Sobol sequence. Note that the samples are dependent on the previous samples. The size of the result is (2**m, dimension). Parameters
m (Int) — The (base2) exponent of the number of points to dr... | torch.generated.torch.quasirandom.sobolengine#torch.quasirandom.SobolEngine.draw_base2
fast_forward(n) [source]
Function to fast-forward the state of the SobolEngine by n steps. This is equivalent to drawing n samples without using the samples. Parameters
n (Int) — The number of steps to fast-forward by. | torch.generated.torch.quasirandom.sobolengine#torch.quasirandom.SobolEngine.fast_forward
reset() [source]
Function to reset the SobolEngine to base state. | torch.generated.torch.quasirandom.sobolengine#torch.quasirandom.SobolEngine.reset |
torch.rad2deg(input, *, out=None) → Tensor
Returns a new tensor with each of the elements of input converted from angles in radians to degrees. Parameters
input (Tensor) — the input tensor. Keyword Arguments
out (Tensor, optional) — the output tensor. Example: >>> a = torch.tensor([[3.142, -3.142], [6.283, -6.2... | torch.generated.torch.rad2deg#torch.rad2deg
torch.rand(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a tensor filled with random numbers from a uniform distribution on the interval [0, 1). The shape of the tensor is defined by the variable argument size. Parameters
size (int...) — a sequence of ... | torch.generated.torch.rand#torch.rand
torch.randint(low=0, high, size, *, generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive). The shape of the tensor is defined by the variable argument size. N... | torch.generated.torch.randint#torch.randint |
torch.randint_like(input, low=0, high, *, dtype=None, layout=torch.strided, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor
Returns a tensor with the same shape as Tensor input filled with random integers generated uniformly between low (inclusive) and high (exclusive). Parameters
... | torch.generated.torch.randint_like#torch.randint_like |
torch.randn(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution). \text{out}_{i} \sim \mathcal{N}(0, 1)
The shape o... | torch.generated.torch.randn#torch.randn |
torch.randn_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor
Returns a tensor with the same size as input that is filled with random numbers from a normal distribution with mean 0 and variance 1. torch.randn_like(input) is equivalent to torch.rand... | torch.generated.torch.randn_like#torch.randn_like |
torch.random
torch.random.fork_rng(devices=None, enabled=True, _caller='fork_rng', _devices_kw='devices') [source]
Forks the RNG, so that when you return, the RNG is reset to the state that it was previously in. Parameters
devices (iterable of CUDA IDs) — CUDA devices for which to fork the RNG. CPU RNG state is... | torch.random
torch.random.fork_rng(devices=None, enabled=True, _caller='fork_rng', _devices_kw='devices') [source]
Forks the RNG, so that when you return, the RNG is reset to the state that it was previously in. Parameters
devices (iterable of CUDA IDs) — CUDA devices for which to fork the RNG. CPU RNG state is always forked.... | torch.random#torch.random.fork_rng
torch.random.get_rng_state() [source]
Returns the random number generator state as a torch.ByteTensor. | torch.random#torch.random.get_rng_state |
torch.random.initial_seed() [source]
Returns the initial seed for generating random numbers as a Python long. | torch.random#torch.random.initial_seed |
torch.random.manual_seed(seed) [source]
Sets the seed for generating random numbers. Returns a torch.Generator object. Parameters
seed (int) — The desired seed. Value must be within the inclusive range [-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff]. Otherwise, a RuntimeError is raised. Negative inputs are remapped... | torch.random#torch.random.manual_seed
torch.random.seed() [source]
Sets the seed for generating random numbers to a non-deterministic random number. Returns a 64 bit number used to seed the RNG. | torch.random#torch.random.seed |
torch.random.set_rng_state(new_state) [source]
Sets the random number generator state. Parameters
new_state (torch.ByteTensor) — The desired state | torch.random#torch.random.set_rng_state
torch.randperm(n, *, generator=None, out=None, dtype=torch.int64, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) → Tensor
Returns a random permutation of integers from 0 to n - 1. Parameters
n (int) — the upper bound (exclusive) Keyword Arguments
generator (torch.Generator, optional) ... | torch.generated.torch.randperm#torch.randperm |
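The semantics of torch.randperm (a uniform random permutation of 0..n-1) match the classic Fisher-Yates shuffle, sketched here in plain Python as an illustration of what the op computes, not how torch implements it:

```python
import random

def randperm(n, rng=None):
    # Fisher-Yates: walk from the back, swapping each slot
    # with a uniformly chosen index at or before it.
    rng = rng or random.Random()
    perm = list(range(n))
    for i in range(n - 1, 0, -1):
        j = rng.randint(0, i)
        perm[i], perm[j] = perm[j], perm[i]
    return perm
```

Every permutation is produced with equal probability, which is the guarantee the op provides.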
torch.rand_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor
Returns a tensor with the same size as input that is filled with random numbers from a uniform distribution on the interval [0, 1). torch.rand_like(input) is equivalent to torch.ran... | torch.generated.torch.rand_like#torch.rand_like
torch.range(start=0, end, step=1, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a 1-D tensor of size \left\lfloor \frac{\text{end} - \text{start}}{\text{step}} \right\rfloor + 1 with values from start to end with step step. Step is the gap between ... | torch.generated.torch.range#torch.range
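The size formula above, floor((end - start) / step) + 1, can be checked with a plain-Python sketch (note that, unlike torch.arange, torch.range includes the endpoint):

```python
import math

def range_values(start, end, step=1):
    # Length follows floor((end - start) / step) + 1, endpoint included.
    count = math.floor((end - start) / step) + 1
    return [start + i * step for i in range(count)]
```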
torch.ravel(input) → Tensor
Return a contiguous flattened tensor. A copy is made only if needed. Parameters
input (Tensor) — the input tensor. Example: >>> t = torch.tensor([[[1, 2],
... [3, 4]],
... [[5, 6],
... [7, 8]]])
>>> torch.ravel(t)
tensor([1, 2, 3,... | torch.generated.torch.ravel#torch.ravel |
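torch.ravel flattens in row-major (C-contiguous) order; for a nested Python list the same ordering is a depth-first, left-to-right traversal (an illustrative sketch, not the torch implementation):

```python
def ravel(x):
    # Depth-first, left-to-right traversal reproduces C-contiguous order.
    if not isinstance(x, list):
        return [x]
    return [leaf for sub in x for leaf in ravel(sub)]
```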
torch.real(input) → Tensor
Returns a new tensor containing real values of the self tensor. The returned tensor and self share the same underlying storage. Warning real() is only supported for tensors with complex dtypes. Parameters
input (Tensor) — the input tensor. Example::
>>> x=torch.randn(4, dtype=torch.... | torch.generated.torch.real#torch.real |
torch.reciprocal(input, *, out=None) → Tensor
Returns a new tensor with the reciprocal of the elements of input Note Unlike NumPy's reciprocal, torch.reciprocal supports integral inputs. Integral inputs to reciprocal are automatically promoted to the default scalar type. \text{out}_{i} = \frac{1}{\text{... | torch.generated.torch.reciprocal#torch.reciprocal
torch.remainder(input, other, *, out=None) → Tensor
Computes the element-wise remainder of division. The dividend and divisor may be both integer and floating point numbers. The remainder has the same sign as the divisor other. Supports broadcasting to a common shape, type promotion, and integer and float in... | torch.generated.torch.remainder#torch.remainder
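The sign-follows-divisor rule is what distinguishes torch.remainder from torch.fmod; per element it is a - b * floor(a / b), which is also what Python's % operator computes (a scalar sketch of the elementwise rule):

```python
import math

def remainder(a, b):
    # Result takes the sign of the divisor b (like Python's %, unlike C's fmod).
    return a - b * math.floor(a / b)
```

For example, remainder(-3, 2) is 1, whereas math.fmod(-3, 2) is -1.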
torch.renorm(input, p, dim, maxnorm, *, out=None) → Tensor
Returns a tensor where each sub-tensor of input along dimension dim is normalized such that the p-norm of the sub-tensor is lower than the value maxnorm Note If the norm of a row is lower than maxnorm, the row is unchanged Parameters
input (Tensor) — th... | torch.generated.torch.renorm#torch.renorm
torch.repeat_interleave(input, repeats, dim=None) → Tensor
Repeat elements of a tensor. Warning This is different from torch.Tensor.repeat() but similar to numpy.repeat. Parameters
input (Tensor) — the input tensor.
repeats (Tensor or int) — The number of repetitions for each element. repeats is broadcasted to... | torch.generated.torch.repeat_interleave#torch.repeat_interleave
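For a flat sequence, the per-element repetition described above can be sketched in plain Python (illustrative only; the real op also handles dim and tensor broadcasting):

```python
def repeat_interleave(xs, repeats):
    # An int repeats is broadcast to every element, mirroring the tensor behavior.
    if isinstance(repeats, int):
        repeats = [repeats] * len(xs)
    return [x for x, r in zip(xs, repeats) for _ in range(r)]
```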
torch.reshape(input, shape) → Tensor
Returns a tensor with the same data and number of elements as input, but with the specified shape. When possible, the returned tensor will be a view of input. Otherwise, it will be a copy. Contiguous inputs and inputs with compatible strides can be reshaped without copying, but yo... | torch.generated.torch.reshape#torch.reshape |
torch.result_type(tensor1, tensor2) → dtype
Returns the torch.dtype that would result from performing an arithmetic operation on the provided input tensors. See type promotion documentation for more information on the type promotion logic. Parameters
tensor1 (Tensor or Number) — an input tensor or number
tensor2... | torch.generated.torch.result_type#torch.result_type |
torch.roll(input, shifts, dims=None) → Tensor
Roll the tensor along the given dimension(s). Elements that are shifted beyond the last position are re-introduced at the first position. If a dimension is not specified, the tensor will be flattened before rolling and then restored to the original shape. Parameters
i... | torch.generated.torch.roll#torch.roll |
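The wrap-around behavior of torch.roll along a single dimension reduces to list slicing in plain Python (an illustrative sketch for one dimension):

```python
def roll(xs, shift):
    # Elements shifted past the end wrap around to the front;
    # negative shifts roll toward the front.
    shift %= len(xs)
    return xs[-shift:] + xs[:-shift] if shift else list(xs)
```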
torch.rot90(input, k, dims) → Tensor
Rotate an n-D tensor by 90 degrees in the plane specified by dims axis. Rotation direction is from the first towards the second axis if k > 0, and from the second towards the first for k < 0. Parameters
input (Tensor) — the input tensor.
k (int) — number of times to rotate
di... | torch.generated.torch.rot90#torch.rot90 |
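For a 2-D list, one 90-degree rotation from the first axis toward the second is a transpose followed by a reversal of the rows (a plain-Python sketch of the k > 0 direction):

```python
def rot90(m, k=1):
    # k=1 rotates counterclockwise in the (row, column) plane; k wraps mod 4.
    for _ in range(k % 4):
        m = [list(row) for row in zip(*m)][::-1]
    return m
```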
torch.round(input, *, out=None) → Tensor
Returns a new tensor with each of the elements of input rounded to the closest integer. Parameters
input (Tensor) — the input tensor. Keyword Arguments
out (Tensor, optional) — the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([ 0.9920, 0.6077, 0.9734, -1... | torch.generated.torch.round#torch.round |
torch.row_stack(tensors, *, out=None) → Tensor
Alias of torch.vstack(). | torch.generated.torch.row_stack#torch.row_stack |
torch.rsqrt(input, *, out=None) → Tensor
Returns a new tensor with the reciprocal of the square-root of each of the elements of input. \text{out}_{i} = \frac{1}{\sqrt{\text{input}_{i}}}
Parameters
input (Tensor) — the input tensor. Keyword Arguments
out (Tensor, optional) — the output tensor. Exa... | torch.generated.torch.rsqrt#torch.rsqrt
torch.save(obj, f, pickle_module=pickle, pickle_protocol=2, _use_new_zipfile_serialization=True) [source]
Saves an object to a disk file. See also: saving-loading-tensors Parameters
obj — saved object
f — a file-like object (has to implement write ... | torch.generated.torch.save#torch.save
torch.scatter(input, dim, index, src) → Tensor
Out-of-place version of torch.Tensor.scatter_() | torch.generated.torch.scatter#torch.scatter |
torch.scatter_add(input, dim, index, src) → Tensor
Out-of-place version of torch.Tensor.scatter_add_() | torch.generated.torch.scatter_add#torch.scatter_add |
torch.searchsorted(sorted_sequence, values, *, out_int32=False, right=False, out=None) → Tensor
Find the indices from the innermost dimension of sorted_sequence such that, if the corresponding values in values were inserted before the indices, the order of the corresponding innermost dimension within sorted_sequence ... | torch.generated.torch.searchsorted#torch.searchsorted |
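For a 1-D sorted_sequence, this matches Python's bisect module: right=False behaves like bisect_left and right=True like bisect_right (a stdlib sketch of the per-row semantics):

```python
import bisect

sorted_seq = [1, 3, 5, 7, 9]
# right=False semantics: first index where the value could be inserted
# while keeping order, landing before any equal entries.
left = bisect.bisect_left(sorted_seq, 5)
# right=True semantics: insertion point after any equal entries.
right = bisect.bisect_right(sorted_seq, 5)
```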
torch.seed() [source]
Sets the seed for generating random numbers to a non-deterministic random number. Returns a 64 bit number used to seed the RNG. | torch.generated.torch.seed#torch.seed |
torch.set_default_dtype(d) [source]
Sets the default floating point dtype to d. This dtype is: The inferred dtype for python floats in torch.tensor(). Used to infer dtype for python complex numbers. The default complex dtype is set to torch.complex128 if default floating point dtype is torch.float64, otherwise it's ... | torch.generated.torch.set_default_dtype#torch.set_default_dtype
torch.set_default_tensor_type(t) [source]
Sets the default torch.Tensor type to floating point tensor type t. This type will also be used as default floating point type for type inference in torch.tensor(). The default floating point tensor type is initially torch.FloatTensor. Parameters
t (type or string) — the fl... | torch.generated.torch.set_default_tensor_type#torch.set_default_tensor_type
torch.set_flush_denormal(mode) → bool
Disables denormal floating numbers on CPU. Returns True if your system supports flushing denormal numbers and it successfully configures flush denormal mode. set_flush_denormal() is only supported on x86 architectures supporting SSE3. Parameters
mode (bool) — Controls whether t... | torch.generated.torch.set_flush_denormal#torch.set_flush_denormal
class torch.set_grad_enabled(mode) [source]
Context-manager that sets gradient calculation to on or off. set_grad_enabled will enable or disable grads based on its argument mode. It can be used as a context-manager or as a function. This context manager is thread local; it will not affect computation in other threads... | torch.generated.torch.set_grad_enabled#torch.set_grad_enabled |
torch.set_num_interop_threads(int)
Sets the number of threads used for interop parallelism (e.g. in JIT interpreter) on CPU. Warning Can only be called once and before any inter-op parallel work is started (e.g. JIT execution). | torch.generated.torch.set_num_interop_threads#torch.set_num_interop_threads |
torch.set_num_threads(int)
Sets the number of threads used for intraop parallelism on CPU. Warning To ensure that the correct number of threads is used, set_num_threads must be called before running eager, JIT or autograd code. | torch.generated.torch.set_num_threads#torch.set_num_threads |
torch.set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, profile=None, sci_mode=None) [source]
Set options for printing. Items shamelessly taken from NumPy Parameters
precision — Number of digits of precision for floating point output (default = 4).
threshold — Total number of array... | torch.generated.torch.set_printoptions#torch.set_printoptions
torch.set_rng_state(new_state) [source]
Sets the random number generator state. Parameters
new_state (torch.ByteTensor) — The desired state | torch.generated.torch.set_rng_state#torch.set_rng_state
torch.sgn(input, *, out=None) → Tensor
For complex tensors, this function returns a new tensor whose elements have the same angle as that of the elements of input and absolute value 1. For a non-complex tensor, this function returns the signs of the elements of input (see torch.sign()). \text{out}_{i} = 0 , if ... | torch.generated.torch.sgn#torch.sgn
torch.sigmoid(input, *, out=None) → Tensor
Returns a new tensor with the sigmoid of the elements of input. \text{out}_{i} = \frac{1}{1 + e^{-\text{input}_{i}}}
Parameters
input (Tensor) — the input tensor. Keyword Arguments
out (Tensor, optional) — the output tensor. Example: >>> a = torch.ra... | torch.generated.torch.sigmoid#torch.sigmoid
torch.sign(input, *, out=None) → Tensor
Returns a new tensor with the signs of the elements of input. \text{out}_{i} = \operatorname{sgn}(\text{input}_{i})
Parameters
input (Tensor) — the input tensor. Keyword Arguments
out (Tensor, optional) — the output tensor. Example: >>> a = torch.tenso... | torch.generated.torch.sign#torch.sign
torch.signbit(input, *, out=None) → Tensor
Tests if each element of input has its sign bit set (is less than zero) or not. Parameters
input (Tensor) — the input tensor. Keyword Arguments
out (Tensor, optional) — the output tensor. Example: >>> a = torch.tensor([0.7, -1.2, 0., 2.3])
>>> torch.signbit(a)
tensor([... | torch.generated.torch.signbit#torch.signbit |
torch.sin(input, *, out=None) → Tensor
Returns a new tensor with the sine of the elements of input. \text{out}_{i} = \sin(\text{input}_{i})
Parameters
input (Tensor) — the input tensor. Keyword Arguments
out (Tensor, optional) — the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor... | torch.generated.torch.sin#torch.sin |
torch.sinc(input, *, out=None) → Tensor
Computes the normalized sinc of input. \text{out}_{i} = \begin{cases} 1, & \text{if}\ \text{input}_{i}=0 \\ \sin(\pi \text{input}_{i}) / (\pi \text{input}_{i}), & \text{otherwise} \end{cases}
Parameters
input (Tensor) — t... | torch.generated.torch.sinc#torch.sinc
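The piecewise definition above translates directly to a scalar sketch in plain Python, with the removable singularity at 0 defined to be 1:

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi*x) / (pi*x), with sinc(0) = 1 by continuity.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
```

Nonzero integers are the zeros of the normalized sinc, e.g. sinc(1) is 0 up to floating-point error.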
torch.sinh(input, *, out=None) → Tensor
Returns a new tensor with the hyperbolic sine of the elements of input. \text{out}_{i} = \sinh(\text{input}_{i})
Parameters
input (Tensor) — the input tensor. Keyword Arguments
out (Tensor, optional) — the output tensor. Example: >>> a = torch.randn(4... | torch.generated.torch.sinh#torch.sinh
torch.slogdet(input) -> (Tensor, Tensor)
Calculates the sign and log absolute value of the determinant(s) of a square matrix or batches of square matrices. Note torch.slogdet() is deprecated. Please use torch.linalg.slogdet() instead. Note If input has zero determinant, this returns (0, -inf). Note Backward thro... | torch.generated.torch.slogdet#torch.slogdet |
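The (sign, log|det|) decomposition is useful because log|det| stays finite where det itself would overflow or underflow; for a 2x2 matrix it can be sketched in plain Python (a hypothetical helper, not the torch API):

```python
import math

def slogdet_2x2(a, b, c, d):
    # For det = a*d - b*c, return (sign, log|det|); (0, -inf) when singular,
    # matching the convention described for zero-determinant inputs.
    det = a * d - b * c
    if det == 0:
        return 0.0, float("-inf")
    return math.copysign(1.0, det), math.log(abs(det))
```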
torch.smm(input, mat) → Tensor
Performs a matrix multiplication of the sparse matrix input with the dense matrix mat. Parameters
input (Tensor) — a sparse matrix to be matrix multiplied
mat (Tensor) — a dense matrix to be matrix multiplied | torch.sparse#torch.smm
torch.solve(input, A, *, out=None) -> (Tensor, Tensor)
This function returns the solution to the system of linear equations represented by AX=BAX = B and the LU factorization of A, in order as a namedtuple solution, LU. LU contains L and U factors for LU factorization of A. torch.solve(B, A) can take in 2D inputs B,... | torch.generated.torch.solve#torch.solve |
torch.sort(input, dim=-1, descending=False, *, out=None) -> (Tensor, LongTensor)
Sorts the elements of the input tensor along a given dimension in ascending order by value. If dim is not given, the last dimension of the input is chosen. If descending is True then the elements are sorted in descending order by value. ... | torch.generated.torch.sort#torch.sort |
torch.sparse Introduction PyTorch provides torch.Tensor to represent a multi-dimensional array containing elements of a single data type. By default, array elements are stored contiguously in memory leading to efficient implementations of various array processing algorithms that rely on the fast access to array elemen... | torch.sparse
torch.sparse.addmm(mat, mat1, mat2, beta=1.0, alpha=1.0) [source]
This function does the exact same thing as torch.addmm() in the forward, except that it supports backward for sparse matrix mat1. mat1 needs to have sparse_dim = 2. Note that the gradient of mat1 is a coalesced sparse tensor. Parameters
mat (Tensor) — ... | torch.sparse#torch.sparse.addmm
torch.sparse.log_softmax(input, dim, dtype=None) [source]
Applies a softmax function followed by logarithm. See softmax for more details. Parameters
input (Tensor) — input
dim (int) — A dimension along which softmax will be computed.
dtype (torch.dtype, optional) — the desired data type of returned tensor. If s... | torch.sparse#torch.sparse.log_softmax
torch.sparse.mm(mat1, mat2) [source]
Performs a matrix multiplication of the sparse matrix mat1 and the (sparse or strided) matrix mat2. Similar to torch.mm(), if mat1 is a (n \times m) tensor and mat2 is a (m \times p) tensor, out will be a (n \times p) tensor. mat1 needs to have sparse_dim = 2. This f... | torch.sparse#torch.sparse.mm
torch.sparse.softmax(input, dim, dtype=None) [source]
Applies a softmax function. Softmax is defined as: \text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)} where i, j run over sparse tensor indices and unspecified entries are ignored. This is equivalent to defining unspecified e... | torch.sparse#torch.sparse.softmax
torch.sparse.sum(input, dim=None, dtype=None) [source]
Returns the sum of each row of the sparse tensor input in the given dimensions dim. If dim is a list of dimensions, reduce over all of them. When summing over all sparse_dim, this method returns a dense tensor instead of a sparse tensor. All summed dim are squeezed (... | torch.sparse#torch.sparse.sum
torch.sparse_coo_tensor(indices, values, size=None, *, dtype=None, device=None, requires_grad=False) → Tensor
Constructs a sparse tensor in COO(rdinate) format with specified values at the given indices. Note This function returns an uncoalesced tensor. Parameters
indices (array_like) — Initial data for the ten... | torch.generated.torch.sparse_coo_tensor#torch.sparse_coo_tensor
torch.split(tensor, split_size_or_sections, dim=0) [source]
Splits the tensor into chunks. Each chunk is a view of the original tensor. If split_size_or_sections is an integer type, then tensor will be split into equally sized chunks (if possible). Last chunk will be smaller if the tensor size along the given dimensi... | torch.generated.torch.split#torch.split |
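The equal-chunks-with-smaller-last-chunk behavior reduces to slice arithmetic for a flat sequence (an illustrative sketch of the int split_size case):

```python
def split(xs, split_size):
    # Equal chunks of split_size; the last chunk is smaller when
    # len(xs) is not divisible by split_size.
    return [xs[i:i + split_size] for i in range(0, len(xs), split_size)]
```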
torch.sqrt(input, *, out=None) → Tensor
Returns a new tensor with the square-root of the elements of input. \text{out}_{i} = \sqrt{\text{input}_{i}}
Parameters
input (Tensor) — the input tensor. Keyword Arguments
out (Tensor, optional) — the output tensor. Example: >>> a = torch.randn(4)
>>> a
ten... | torch.generated.torch.sqrt#torch.sqrt |
torch.square(input, *, out=None) → Tensor
Returns a new tensor with the square of the elements of input. Parameters
input (Tensor) — the input tensor. Keyword Arguments
out (Tensor, optional) — the output tensor. Example: >>> a = torch.randn(4)
>>> a
tensor([-2.0755, 1.0226, 0.0831, 0.4806])
>>> torch.square... | torch.generated.torch.square#torch.square |
torch.squeeze(input, dim=None, *, out=None) → Tensor
Returns a tensor with all the dimensions of input of size 1 removed. For example, if input is of shape: (A \times 1 \times B \times C \times 1 \times D) then the out tensor will be of shape: (A \times B \times C \times D) . When dim is given,... | torch.generated.torch.squeeze#torch.squeeze
torch.sspaddmm(input, mat1, mat2, *, beta=1, alpha=1, out=None) → Tensor
Matrix multiplies a sparse tensor mat1 with a dense tensor mat2, then adds the sparse tensor input to the result. Note: This function is equivalent to torch.addmm(), except input and mat1 are sparse. Parameters
input (Tensor) — a sparse matr... | torch.sparse#torch.sspaddmm
torch.stack(tensors, dim=0, *, out=None) → Tensor
Concatenates a sequence of tensors along a new dimension. All tensors need to be of the same size. Parameters
tensors (sequence of Tensors) — sequence of tensors to concatenate
dim (int) — dimension to insert. Has to be between 0 and the number of dimensions of c... | torch.generated.torch.stack#torch.stack
torch.std(input, unbiased=True) → Tensor
Returns the standard-deviation of all elements in the input tensor. If unbiased is False, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel's correction will be used. Parameters
input (Tensor) — the input tensor.
unbiased (bool) — ... | torch.generated.torch.std#torch.std
torch.std_mean(input, unbiased=True) -> (Tensor, Tensor)
Returns the standard-deviation and mean of all elements in the input tensor. If unbiased is False, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel's correction will be used. Parameters
input (Tensor) — the input te... | torch.generated.torch.std_mean#torch.std_mean
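The unbiased/biased distinction above is just the divisor in the variance: n - 1 with Bessel's correction, n without. A plain-Python sketch over a flat list:

```python
import math

def std_mean(xs, unbiased=True):
    # unbiased=True applies Bessel's correction (divide by n - 1 instead of n).
    n = len(xs)
    mean = sum(xs) / n
    ss = sum((x - mean) ** 2 for x in xs)
    var = ss / (n - 1) if unbiased else ss / n
    return math.sqrt(var), mean
```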
torch.stft(input, n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=None, return_complex=None) [source]
Short-time Fourier transform (STFT). Warning From version 1.8.0, return_complex must always be given explicitly for real inputs and return_complex=Fa... | torch.generated.torch.stft#torch.stft |
torch.Storage A torch.Storage is a contiguous, one-dimensional array of a single data type. Every torch.Tensor has a corresponding storage of the same data type.
class torch.FloatStorage(*args, **kwargs) [source]
bfloat16()
Casts this storage to bfloat16 type
bool()
Casts this storage to bool type
byt... | torch.storage |
torch.sub(input, other, *, alpha=1, out=None) → Tensor
Subtracts other, scaled by alpha, from input. \text{out}_i = \text{input}_i - \text{alpha} \times \text{other}_i
Supports broadcasting to a common shape, type promotion, and integer, float, and complex inputs. Parameters
inp... | torch.generated.torch.sub#torch.sub |
torch.subtract(input, other, *, alpha=1, out=None) → Tensor
Alias for torch.sub(). | torch.generated.torch.subtract#torch.subtract |
torch.sum(input, *, dtype=None) → Tensor
Returns the sum of all elements in the input tensor. Parameters
input (Tensor) — the input tensor. Keyword Arguments
dtype (torch.dtype, optional) — the desired data type of returned tensor. If specified, the input tensor is cast to dtype before the operation is performe... | torch.generated.torch.sum#torch.sum
torch.svd(input, some=True, compute_uv=True, *, out=None) -> (Tensor, Tensor, Tensor)
Computes the singular value decomposition of either a matrix or batch of matrices input. The singular value decomposition is represented as a namedtuple (U,S,V), such that input = U diag(S) Vᴴ, where Vᴴ is the transpose of V for the... | torch.generated.torch.svd#torch.svd
torch.svd_lowrank(A, q=6, niter=2, M=None) [source]
Return the singular value decomposition (U, S, V) of a matrix, batches of matrices, or a sparse matrix A such that A \approx U diag(S) V^T . In case M is given, then SVD is computed for the matrix A - M . Note The implementation is based on the A... | torch.generated.torch.svd_lowrank#torch.svd_lowrank
torch.swapaxes(input, axis0, axis1) → Tensor
Alias for torch.transpose(). This function is equivalent to NumPy's swapaxes function. Examples: >>> x = torch.tensor([[[0,1],[2,3]],[[4,5],[6,7]]])
>>> x
tensor([[[0, 1],
[2, 3]],
[[4, 5],
[6, 7]]])
>>> torch.swapaxes(x, 0, 1)
tensor([[[0, 1],
... | torch.generated.torch.swapaxes#torch.swapaxes |
torch.swapdims(input, dim0, dim1) → Tensor
Alias for torch.transpose(). This function is equivalent to NumPy's swapaxes function. Examples: >>> x = torch.tensor([[[0,1],[2,3]],[[4,5],[6,7]]])
>>> x
tensor([[[0, 1],
[2, 3]],
[[4, 5],
[6, 7]]])
>>> torch.swapdims(x, 0, 1)
tensor([[[0, 1],
... | torch.generated.torch.swapdims#torch.swapdims |
torch.symeig(input, eigenvectors=False, upper=True, *, out=None) -> (Tensor, Tensor)
This function returns eigenvalues and eigenvectors of a real symmetric matrix input or a batch of real symmetric matrices, represented by a namedtuple (eigenvalues, eigenvectors). This function calculates all eigenvalues (and vectors... | torch.generated.torch.symeig#torch.symeig |
torch.t(input) → Tensor
Expects input to be <= 2-D tensor and transposes dimensions 0 and 1. 0-D and 1-D tensors are returned as is. When input is a 2-D tensor this is equivalent to transpose(input, 0, 1). Parameters
input (Tensor) β the input tensor. Example: >>> x = torch.randn(())
>>> x
tensor(0.1995)
>>> torc... | torch.generated.torch.t#torch.t |
torch.take(input, index) → Tensor
Returns a new tensor with the elements of input at the given indices. The input tensor is treated as if it were viewed as a 1-D tensor. The result takes the same shape as the indices. Parameters
input (Tensor) — the input tensor.
indices (LongTensor) — the indices into tensor ... | torch.generated.torch.take#torch.take
torch.tan(input, *, out=None) → Tensor
Returns a new tensor with the tangent of the elements of input. \text{out}_{i} = \tan(\text{input}_{i})
Parameters
input (Tensor) — the input tensor. Keyword Arguments
out (Tensor, optional) — the output tensor. Example: >>> a = torch.randn(4)
>>> a
ten... | torch.generated.torch.tan#torch.tan |
torch.tanh(input, *, out=None) → Tensor
Returns a new tensor with the hyperbolic tangent of the elements of input. \text{out}_{i} = \tanh(\text{input}_{i})
Parameters
input (Tensor) — the input tensor. Keyword Arguments
out (Tensor, optional) — the output tensor. Example: >>> a = torch.rand... | torch.generated.torch.tanh#torch.tanh