doc_content | doc_id |
|---|---|
torch.log1p(input, *, out=None) → Tensor
Returns a new tensor with the natural logarithm of (1 + input). y_i = \log_e(x_i + 1)
Note This function is more accurate than torch.log() for small values of input Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional)... | torch.generated.torch.log1p#torch.log1p |
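The note above says log1p() is more accurate than torch.log() for small inputs; a minimal sketch of why, in float32:

```python
import torch

x = torch.tensor([1e-10], dtype=torch.float32)

# In float32, 1 + 1e-10 rounds to exactly 1.0, so the naive form loses all precision
naive = torch.log(1 + x)    # tensor([0.])
stable = torch.log1p(x)     # close to the true value 1e-10
```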
torch.log2(input, *, out=None) → Tensor
Returns a new tensor with the logarithm to the base 2 of the elements of input. y_i = \log_2(x_i)
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.rand(5)
>>> a
tensor(... | torch.generated.torch.log2#torch.log2 |
torch.logaddexp(input, other, *, out=None) → Tensor
Logarithm of the sum of exponentiations of the inputs. Calculates pointwise \log\left(e^x + e^y\right). This function is useful in statistics where the calculated probabilities of events may be so small as to exceed the range of normal floating point num... | torch.generated.torch.logaddexp#torch.logaddexp |
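A minimal sketch of the underflow case the description mentions, where the direct computation fails but logaddexp() does not:

```python
import torch
import math

# Log-probabilities whose exponentials underflow to 0 in float64
a = torch.tensor([-1000.0], dtype=torch.float64)
b = torch.tensor([-1000.0], dtype=torch.float64)

naive = torch.log(torch.exp(a) + torch.exp(b))   # exp underflows -> log(0) = -inf
stable = torch.logaddexp(a, b)                   # exact answer: -1000 + log(2)
```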
torch.logaddexp2(input, other, *, out=None) → Tensor
Logarithm of the sum of exponentiations of the inputs in base-2. Calculates pointwise \log_2\left(2^x + 2^y\right). See torch.logaddexp() for more details. Parameters
input (Tensor) – the input tensor.
other (Tensor) – the second input tensor Ke... | torch.generated.torch.logaddexp2#torch.logaddexp2 |
torch.logcumsumexp(input, dim, *, out=None) → Tensor
Returns the logarithm of the cumulative summation of the exponentiation of elements of input in the dimension dim. For summation index j given by dim and other indices i, the result is \text{logcumsumexp}(x)_{ij} = \log \sum_{j=0}^{i} \exp(x_{ij})... | torch.generated.torch.logcumsumexp#torch.logcumsumexp |
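The formula is equivalent (up to numerical stabilization) to taking the log of a running sum of exponentials; a small sketch where both forms agree because the values are small:

```python
import torch

x = torch.tensor([0.0, 1.0, 2.0])
out = torch.logcumsumexp(x, dim=0)

# Reference: log of the cumulative sum of exp (safe here, no overflow)
ref = torch.log(torch.cumsum(torch.exp(x), dim=0))
```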
torch.logdet(input) → Tensor
Calculates the log determinant of a square matrix or batches of square matrices. Note Result is -inf if input has zero determinant, and is nan if input has negative determinant. Note Backward through logdet() internally uses SVD results when input is not invertible. In this case, doubl... | torch.generated.torch.logdet#torch.logdet |
torch.logical_and(input, other, *, out=None) → Tensor
Computes the element-wise logical AND of the given input tensors. Zeros are treated as False and nonzeros are treated as True. Parameters
input (Tensor) – the input tensor.
other (Tensor) – the tensor to compute AND with Keyword Arguments
out (Tensor, opti... | torch.generated.torch.logical_and#torch.logical_and |
torch.logical_not(input, *, out=None) → Tensor
Computes the element-wise logical NOT of the given input tensor. If not specified, the output tensor will have the bool dtype. If the input tensor is not a bool tensor, zeros are treated as False and non-zeros are treated as True. Parameters
input (Tensor) – the input ... | torch.generated.torch.logical_not#torch.logical_not |
torch.logical_or(input, other, *, out=None) → Tensor
Computes the element-wise logical OR of the given input tensors. Zeros are treated as False and nonzeros are treated as True. Parameters
input (Tensor) – the input tensor.
other (Tensor) – the tensor to compute OR with Keyword Arguments
out (Tensor, optiona... | torch.generated.torch.logical_or#torch.logical_or |
torch.logical_xor(input, other, *, out=None) → Tensor
Computes the element-wise logical XOR of the given input tensors. Zeros are treated as False and nonzeros are treated as True. Parameters
input (Tensor) – the input tensor.
other (Tensor) – the tensor to compute XOR with Keyword Arguments
out (Tensor, opti... | torch.generated.torch.logical_xor#torch.logical_xor |
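A small sketch of how the logical_* family treats zeros as False and nonzeros as True, on integer tensors:

```python
import torch

a = torch.tensor([0, 1, 2, 0])
b = torch.tensor([0, 3, 0, 5])

# a is truthy at [False, True, True, False]; b at [False, True, False, True]
and_out = torch.logical_and(a, b)   # [False, True, False, False]
or_out = torch.logical_or(a, b)     # [False, True, True, True]
xor_out = torch.logical_xor(a, b)   # [False, False, True, True]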
torch.logit(input, eps=None, *, out=None) → Tensor
Returns a new tensor with the logit of the elements of input. input is clamped to [eps, 1 - eps] when eps is not None. When eps is None and input < 0 or input > 1, the function will yield NaN. y_i = \ln\left(\frac{z_i}{1 - z_i}\right), \quad z_i = \begin{cases} x_i & \text{if eps is None} \\ \text{eps} & \text{if } x_i < \text{eps} \\ x_i & \text{if eps} \le x_i \le 1 - \text{eps} \\ 1 - \text{eps} & \text{if } x... | torch.generated.torch.logit#torch.logit |
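logit is the inverse of the sigmoid, so a round trip recovers the input; a minimal sketch, also showing the NaN behavior with eps=None:

```python
import torch

p = torch.tensor([0.25, 0.5, 0.75])
z = torch.logit(p)           # log(p / (1 - p))
back = torch.sigmoid(z)      # round-trips back to p

# Out-of-range input with eps=None yields NaN
bad = torch.logit(torch.tensor([-0.1]))
```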
torch.logspace(start, end, steps, base=10.0, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Creates a one-dimensional tensor of size steps whose values are evenly spaced from \text{base}^{\text{start}} to \text{base}^{\text{end}}, inclus... | torch.generated.torch.logspace#torch.logspace |
torch.logsumexp(input, dim, keepdim=False, *, out=None)
Returns the log of summed exponentials of each row of the input tensor in the given dimension dim. The computation is numerically stabilized. For summation index j given by dim and other indices i, the result is \text{logsumexp}(x)_{i} = \log \sum_j \exp(x_{ij})... | torch.generated.torch.logsumexp#torch.logsumexp |
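A minimal sketch of the numerical stabilization: the naive computation overflows where logsumexp() does not.

```python
import torch
import math

x = torch.tensor([[1000.0, 1000.0]])

naive = torch.log(torch.exp(x).sum(dim=1))   # exp(1000) overflows float32 -> inf
stable = torch.logsumexp(x, dim=1)           # correct answer: 1000 + log(2)
```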
torch.lstsq(input, A, *, out=None) → Tensor
Computes the solution to the least squares and least norm problems for a full rank matrix A of size (m \times n) and a matrix B of size (m \times k). If m \geq n, lstsq() solves the least-squares problem: \min_X \|AX-B\|... | torch.generated.torch.lstsq#torch.lstsq |
torch.lt(input, other, *, out=None) → Tensor
Computes \text{input} < \text{other} element-wise. The second argument can be a number or a tensor whose shape is broadcastable with the first argument. Parameters
input (Tensor) – the tensor to compare
other (Tensor or float) – the tensor or value to comp... | torch.generated.torch.lt#torch.lt |
torch.lu(*args, **kwargs)
Computes the LU factorization of a matrix or batches of matrices A. Returns a tuple containing the LU factorization and pivots of A. Pivoting is done if pivot is set to True. Note The pivots returned by the function are 1-indexed. If pivot is False, then the returned pivots is a tensor fill... | torch.generated.torch.lu#torch.lu |
torch.lu_solve(b, LU_data, LU_pivots, *, out=None) → Tensor
Returns the LU solve of the linear system Ax=bAx = b using the partially pivoted LU factorization of A from torch.lu(). This function supports float, double, cfloat and cdouble dtypes for input. Parameters
b (Tensor) – the RHS tensor of size (*, m, k)... | torch.generated.torch.lu_solve#torch.lu_solve |
torch.lu_unpack(LU_data, LU_pivots, unpack_data=True, unpack_pivots=True) [source]
Unpacks the data and pivots from a LU factorization of a tensor. Returns a tuple of tensors as (the pivots, the L tensor, the U tensor). Parameters
LU_data (Tensor) – the packed LU factorization data
LU_pivots (Tensor) – the packe... | torch.generated.torch.lu_unpack#torch.lu_unpack |
torch.manual_seed(seed) [source]
Sets the seed for generating random numbers. Returns a torch.Generator object. Parameters
seed (int) – The desired seed. Value must be within the inclusive range [-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff]. Otherwise, a RuntimeError is raised. Negative inputs are remapped to pos... | torch.generated.torch.manual_seed#torch.manual_seed |
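A minimal sketch of the reproducibility this gives: re-seeding with the same value restarts the default generator.

```python
import torch

torch.manual_seed(42)
a = torch.rand(3)

torch.manual_seed(42)   # same seed -> identical subsequent draws
b = torch.rand(3)
```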
torch.masked_select(input, mask, *, out=None) → Tensor
Returns a new 1-D tensor which indexes the input tensor according to the boolean mask mask which is a BoolTensor. The shapes of the mask tensor and the input tensor don’t need to match, but they must be broadcastable. Note The returned tensor does not use the sa... | torch.generated.torch.masked_select#torch.masked_select |
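A small sketch showing that the result is always a new 1-D tensor, regardless of the input's shape:

```python
import torch

x = torch.tensor([[1, 2], [3, 4]])
mask = x > 2                          # boolean mask, here same shape as x

out = torch.masked_select(x, mask)    # flattened 1-D copy of selected elements
```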
torch.matmul(input, other, *, out=None) → Tensor
Matrix product of two tensors. The behavior depends on the dimensionality of the tensors as follows: If both tensors are 1-dimensional, the dot product (scalar) is returned. If both arguments are 2-dimensional, the matrix-matrix product is returned. If the first argum... | torch.generated.torch.matmul#torch.matmul |
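A minimal sketch of the dimensionality dispatch described above (dot product, matrix-vector, and batched matrix-matrix):

```python
import torch

v = torch.ones(3)
m = torch.ones(2, 3)
b = torch.ones(4, 2, 3)

dot = torch.matmul(v, v)                     # 1-D x 1-D -> 0-d scalar tensor
mv = torch.matmul(m, v)                      # 2-D x 1-D -> shape (2,)
bmm = torch.matmul(b, b.transpose(-2, -1))   # batched -> shape (4, 2, 2)
```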
torch.matrix_exp()
Returns the matrix exponential. Supports batched input. For a matrix A, the matrix exponential is defined as eA=∑k=0∞Ak/k!\mathrm{e}^A = \sum_{k=0}^\infty A^k / k!
The implementation is based on: Bader, P.; Blanes, S.; Casas, F. Computing the Matrix Exponential with an Optimized Taylor Polynomia... | torch.generated.torch.matrix_exp#torch.matrix_exp |
torch.matrix_power(input, n) → Tensor
Returns the matrix raised to the power n for square matrices. For batch of matrices, each individual matrix is raised to the power n. If n is negative, then the inverse of the matrix (if invertible) is raised to the power n. For a batch of matrices, the batched inverse (if invert... | torch.generated.torch.matrix_power#torch.matrix_power |
torch.matrix_rank(input, tol=None, symmetric=False, *, out=None) → Tensor
Returns the numerical rank of a 2-D tensor. The method to compute the matrix rank is done using SVD by default. If symmetric is True, then input is assumed to be symmetric, and the computation of the rank is done by obtaining the eigenvalues. t... | torch.generated.torch.matrix_rank#torch.matrix_rank |
torch.max(input) → Tensor
Returns the maximum value of all elements in the input tensor. Warning This function produces deterministic (sub)gradients unlike max(dim=0) Parameters
input (Tensor) – the input tensor. Example: >>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.6763, 0.7445, -2.2369]])
>>> torch.max(a)
ten... | torch.generated.torch.max#torch.max |
torch.maximum(input, other, *, out=None) → Tensor
Computes the element-wise maximum of input and other. Note If one of the elements being compared is a NaN, then that element is returned. maximum() is not supported for tensors with complex dtypes. Parameters
input (Tensor) – the input tensor.
other (Tensor) – ... | torch.generated.torch.maximum#torch.maximum |
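A small sketch of the NaN rule noted above: when either operand is NaN, the NaN is returned.

```python
import torch
import math

a = torch.tensor([1.0, float('nan')])
b = torch.tensor([2.0, 0.0])

out = torch.maximum(a, b)   # elementwise max; NaN propagates in position 1
```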
torch.mean(input) → Tensor
Returns the mean value of all elements in the input tensor. Parameters
input (Tensor) – the input tensor. Example: >>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.2294, -0.5481, 1.3288]])
>>> torch.mean(a)
tensor(0.3367)
torch.mean(input, dim, keepdim=False, *, out=None) → Tensor
Retu... | torch.generated.torch.mean#torch.mean |
torch.median(input) → Tensor
Returns the median of the values in input. Note The median is not unique for input tensors with an even number of elements. In this case the lower of the two medians is returned. To compute the mean of both medians, use torch.quantile() with q=0.5 instead. Warning This function produce... | torch.generated.torch.median#torch.median |
torch.meshgrid(*tensors) [source]
Take N tensors, each of which can be either a scalar or a 1-dimensional vector, and create N N-dimensional grids, where the i-th grid is defined by expanding the i-th input over dimensions defined by other inputs. Parameters
tensors (list of Tensor) – list of scalars or 1 dimen... | torch.generated.torch.meshgrid#torch.meshgrid |
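A minimal sketch with two 1-D inputs: each output grid has the combined shape, expanded along the other input's dimension (matrix-style "ij" indexing, the default at this doc's vintage; newer releases add an indexing keyword):

```python
import torch

x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5])

gx, gy = torch.meshgrid(x, y)   # both grids have shape (3, 2)
```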
torch.min(input) → Tensor
Returns the minimum value of all elements in the input tensor. Warning This function produces deterministic (sub)gradients unlike min(dim=0) Parameters
input (Tensor) – the input tensor. Example: >>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.6750, 1.0857, 1.7197]])
>>> torch.min(a)
ten... | torch.generated.torch.min#torch.min |
torch.minimum(input, other, *, out=None) → Tensor
Computes the element-wise minimum of input and other. Note If one of the elements being compared is a NaN, then that element is returned. minimum() is not supported for tensors with complex dtypes. Parameters
input (Tensor) – the input tensor.
other (Tensor) – ... | torch.generated.torch.minimum#torch.minimum |
torch.mm(input, mat2, *, out=None) → Tensor
Performs a matrix multiplication of the matrices input and mat2. If input is a (n \times m) tensor, mat2 is a (m \times p) tensor, out will be a (n \times p) tensor. Note This function does not broadcast. For broadcasting matrix products, see torch.matmul... | torch.generated.torch.mm#torch.mm |
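A minimal shape sketch of the (n x m) @ (m x p) -> (n x p) rule:

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)

out = torch.mm(a, b)   # (2x3) @ (3x4) -> (2x4); strictly 2-D, no broadcasting
```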
torch.mode(input, dim=-1, keepdim=False, *, out=None) -> (Tensor, LongTensor)
Returns a namedtuple (values, indices) where values is the mode value of each row of the input tensor in the given dimension dim, i.e. a value which appears most often in that row, and indices is the index location of each mode value found.... | torch.generated.torch.mode#torch.mode |
torch.moveaxis(input, source, destination) → Tensor
Alias for torch.movedim(). This function is equivalent to NumPy’s moveaxis function. Examples: >>> t = torch.randn(3,2,1)
>>> t
tensor([[[-0.3362],
[-0.8437]],
[[-0.9627],
[ 0.1727]],
[[ 0.5173],
[-0.1398]]])
>>> torch.movea... | torch.generated.torch.moveaxis#torch.moveaxis |
torch.movedim(input, source, destination) → Tensor
Moves the dimension(s) of input at the position(s) in source to the position(s) in destination. Other dimensions of input that are not explicitly moved remain in their original order and appear at the positions not specified in destination. Parameters
input (Tens... | torch.generated.torch.movedim#torch.movedim |
torch.msort(input, *, out=None) → Tensor
Sorts the elements of the input tensor along its first dimension in ascending order by value. Note torch.msort(t) is equivalent to torch.sort(t, dim=0)[0]. See also torch.sort(). Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the... | torch.generated.torch.msort#torch.msort |
torch.mul(input, other, *, out=None)
Multiplies each element of the input input with the scalar other and returns a new resulting tensor. \text{out}_i = \text{other} \times \text{input}_i
If input is of type FloatTensor or DoubleTensor, other should be a real number, otherwise it should be an inte... | torch.generated.torch.mul#torch.mul |
torch.multinomial(input, num_samples, replacement=False, *, generator=None, out=None) → LongTensor
Returns a tensor where each row contains num_samples indices sampled from the multinomial probability distribution located in the corresponding row of tensor input. Note The rows of input do not need to sum to one (in ... | torch.generated.torch.multinomial#torch.multinomial |
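A small sketch showing that weights need not sum to one and that zero-weight entries are never drawn; with replacement=False the two draws must be the two nonzero-weight indices:

```python
import torch

torch.manual_seed(0)
weights = torch.tensor([0.0, 10.0, 3.0, 0.0])   # unnormalized, need not sum to 1

# Without replacement, each index is drawn at most once; zero weights never appear
idx = torch.multinomial(weights, num_samples=2)
```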
torch.multiply(input, other, *, out=None)
Alias for torch.mul(). | torch.generated.torch.multiply#torch.multiply |
Multiprocessing package - torch.multiprocessing torch.multiprocessing is a wrapper around the native multiprocessing module. It registers custom reducers that use shared memory to provide shared views on the same data in different processes. Once the tensor/storage is moved to shared_memory (see share_memory_()), it w... | torch.multiprocessing |
torch.multiprocessing.get_all_sharing_strategies() [source]
Returns a set of sharing strategies supported on the current system. | torch.multiprocessing#torch.multiprocessing.get_all_sharing_strategies |
torch.multiprocessing.get_sharing_strategy() [source]
Returns the current strategy for sharing CPU tensors. | torch.multiprocessing#torch.multiprocessing.get_sharing_strategy |
torch.multiprocessing.set_sharing_strategy(new_strategy) [source]
Sets the strategy for sharing CPU tensors. Parameters
new_strategy (str) – Name of the selected strategy. Should be one of the values returned by get_all_sharing_strategies(). | torch.multiprocessing#torch.multiprocessing.set_sharing_strategy |
torch.multiprocessing.spawn(fn, args=(), nprocs=1, join=True, daemon=False, start_method='spawn') [source]
Spawns nprocs processes that run fn with args. If one of the processes exits with a non-zero exit status, the remaining processes are killed and an exception is raised with the cause of termination. In the case ... | torch.multiprocessing#torch.multiprocessing.spawn |
class torch.multiprocessing.SpawnContext [source]
Returned by spawn() when called with join=False.
join(timeout=None)
Tries to join one or more processes in this spawn context. If one of them exited with a non-zero exit status, this function kills the remaining processes and raises an exception with the cause of ... | torch.multiprocessing#torch.multiprocessing.SpawnContext |
join(timeout=None)
Tries to join one or more processes in this spawn context. If one of them exited with a non-zero exit status, this function kills the remaining processes and raises an exception with the cause of the first process exiting. Returns True if all processes have been joined successfully, False if there ... | torch.multiprocessing#torch.multiprocessing.SpawnContext.join |
torch.mv(input, vec, *, out=None) → Tensor
Performs a matrix-vector product of the matrix input and the vector vec. If input is a (n \times m) tensor, vec is a 1-D tensor of size m, out will be 1-D of size n. Note This function does not broadcast. Parameters
input (Tensor) – matrix to be multiplied
v... | torch.generated.torch.mv#torch.mv |
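A minimal shape sketch of the matrix-vector rule: a (2x3) matrix times a length-3 vector gives a length-2 vector.

```python
import torch

m = torch.randn(2, 3)
v = torch.randn(3)

out = torch.mv(m, v)   # (n x m) matrix times length-m vector -> length-n vector
```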
torch.mvlgamma(input, p) → Tensor
Computes the multivariate log-gamma function with dimension p element-wise, given by \log(\Gamma_{p}(a)) = C + \displaystyle \sum_{i=1}^{p} \log\left(\Gamma\left(a - \frac{i - 1}{2}\right)\right)
where C = \log(\pi) \times \frac{p(p-1)}{4}... | torch.generated.torch.mvlgamma#torch.mvlgamma |
torch.nanmedian(input) → Tensor
Returns the median of the values in input, ignoring NaN values. This function is identical to torch.median() when there are no NaN values in input. When input has one or more NaN values, torch.median() will always return NaN, while this function will return the median of the non-NaN el... | torch.generated.torch.nanmedian#torch.nanmedian |
torch.nanquantile(input, q, dim=None, keepdim=False, *, out=None) → Tensor
This is a variant of torch.quantile() that “ignores” NaN values, computing the quantiles q as if NaN values in input did not exist. If all values in a reduced row are NaN then the quantiles for that reduction will be NaN. See the documentation... | torch.generated.torch.nanquantile#torch.nanquantile |
torch.nansum(input, *, dtype=None) → Tensor
Returns the sum of all elements, treating Not a Numbers (NaNs) as zero. Parameters
input (Tensor) – the input tensor. Keyword Arguments
dtype (torch.dtype, optional) – the desired data type of returned tensor. If specified, the input tensor is casted to dtype before the... | torch.generated.torch.nansum#torch.nansum |
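A minimal sketch of NaNs being treated as zero in the sum:

```python
import torch

x = torch.tensor([1.0, float('nan'), 2.0])

total = torch.nansum(x)   # NaN contributes zero -> 1 + 0 + 2 = 3
```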
torch.nan_to_num(input, nan=0.0, posinf=None, neginf=None, *, out=None) → Tensor
Replaces NaN, positive infinity, and negative infinity values in input with the values specified by nan, posinf, and neginf, respectively. By default, NaNs are replaced with zero, positive infinity is replaced with the greatest finite v... | torch.generated.torch.nan_to_num#torch.nan_to_num |
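A small sketch with explicit replacement values for all three cases:

```python
import torch

x = torch.tensor([float('nan'), float('inf'), float('-inf'), 1.0])

# Replace NaN with 0, +inf with 100, -inf with -100; finite values pass through
out = torch.nan_to_num(x, nan=0.0, posinf=100.0, neginf=-100.0)
```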
torch.narrow(input, dim, start, length) → Tensor
Returns a new tensor that is a narrowed version of input tensor. The returned tensor spans dimension dim of input from start to start + length. The returned tensor and input tensor share the same underlying storage. Parameters
input (Tensor) – the tensor to narrow
dim (int) – the dimen... | torch.generated.torch.narrow#torch.narrow |
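A minimal sketch of the shared-storage behavior: writes to the narrowed view are visible in the original tensor.

```python
import torch

x = torch.arange(6).reshape(2, 3)
n = torch.narrow(x, dim=1, start=0, length=2)   # first two columns, shape (2, 2)

n[0, 0] = 99   # writes through: narrow() shares storage with x
```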
torch.ne(input, other, *, out=None) → Tensor
Computes \text{input} \neq \text{other} element-wise. The second argument can be a number or a tensor whose shape is broadcastable with the first argument. Parameters
input (Tensor) – the tensor to compare
other (Tensor or float) – the tensor or value to c... | torch.generated.torch.ne#torch.ne |
torch.neg(input, *, out=None) → Tensor
Returns a new tensor with the negative of the elements of input. \text{out} = -1 \times \text{input}
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(5)
>>> a
tensor([ 0.... | torch.generated.torch.neg#torch.neg |
torch.negative(input, *, out=None) → Tensor
Alias for torch.neg() | torch.generated.torch.negative#torch.negative |
torch.nextafter(input, other, *, out=None) → Tensor
Return the next floating-point value after input towards other, elementwise. The shapes of input and other must be broadcastable. Parameters
input (Tensor) – the first input tensor
other (Tensor) – the second input tensor Keyword Arguments
out (Tensor, optio... | torch.generated.torch.nextafter#torch.nextafter |
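A small sketch: stepping up from 1.0 lands on the immediately adjacent float, whose distance is the unit in the last place (2^-23 for float32, the assumed default dtype here):

```python
import torch

one = torch.tensor([1.0])
up = torch.nextafter(one, torch.tensor([2.0]))   # next float32 above 1.0

ulp = (up - one).item()   # float spacing at 1.0: 2**-23 in float32
```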
torch.nn These are the basic building blocks for graphs torch.nn Containers Convolution Layers Pooling layers Padding Layers Non-linear Activations (weighted sum, nonlinearity) Non-linear Activations (other) Normalization Layers Recurrent Layers Transformer Layers Linear Layers Dropout Layers Sparse Layers Distance Fun... | torch.nn |
class torch.nn.AdaptiveAvgPool1d(output_size) [source]
Applies a 1D adaptive average pooling over an input signal composed of several input planes. The output size is H, for any input size. The number of output features is equal to the number of input planes. Parameters
output_size – the target output size H Exam... | torch.generated.torch.nn.adaptiveavgpool1d#torch.nn.AdaptiveAvgPool1d |
class torch.nn.AdaptiveAvgPool2d(output_size) [source]
Applies a 2D adaptive average pooling over an input signal composed of several input planes. The output is of size H x W, for any input size. The number of output features is equal to the number of input planes. Parameters
output_size – the target output size o... | torch.generated.torch.nn.adaptiveavgpool2d#torch.nn.AdaptiveAvgPool2d |
class torch.nn.AdaptiveAvgPool3d(output_size) [source]
Applies a 3D adaptive average pooling over an input signal composed of several input planes. The output is of size D x H x W, for any input size. The number of output features is equal to the number of input planes. Parameters
output_size – the target output si... | torch.generated.torch.nn.adaptiveavgpool3d#torch.nn.AdaptiveAvgPool3d |
class torch.nn.AdaptiveLogSoftmaxWithLoss(in_features, n_classes, cutoffs, div_value=4.0, head_bias=False) [source]
Efficient softmax approximation as described in Efficient softmax approximation for GPUs by Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. Adaptive softmax is an approxi... | torch.generated.torch.nn.adaptivelogsoftmaxwithloss#torch.nn.AdaptiveLogSoftmaxWithLoss |
log_prob(input) [source]
Computes log probabilities for all \texttt{n\_classes} Parameters
input (Tensor) – a minibatch of examples Returns
log-probabilities for each class c in range 0 <= c <= \texttt{n\_classes}, where \texttt{n\_classes} is a parameter passed to Adaptiv... | torch.generated.torch.nn.adaptivelogsoftmaxwithloss#torch.nn.AdaptiveLogSoftmaxWithLoss.log_prob |
predict(input) [source]
This is equivalent to self.log_prob(input).argmax(dim=1), but is more efficient in some cases. Parameters
input (Tensor) – a minibatch of examples Returns
a class with the highest probability for each example Return type
output (Tensor) Shape:
Input: (N, \texttt{in\_fe... | torch.generated.torch.nn.adaptivelogsoftmaxwithloss#torch.nn.AdaptiveLogSoftmaxWithLoss.predict |
class torch.nn.AdaptiveMaxPool1d(output_size, return_indices=False) [source]
Applies a 1D adaptive max pooling over an input signal composed of several input planes. The output size is H, for any input size. The number of output features is equal to the number of input planes. Parameters
output_size – the target ... | torch.generated.torch.nn.adaptivemaxpool1d#torch.nn.AdaptiveMaxPool1d |
class torch.nn.AdaptiveMaxPool2d(output_size, return_indices=False) [source]
Applies a 2D adaptive max pooling over an input signal composed of several input planes. The output is of size H x W, for any input size. The number of output features is equal to the number of input planes. Parameters
output_size – the ... | torch.generated.torch.nn.adaptivemaxpool2d#torch.nn.AdaptiveMaxPool2d |
class torch.nn.AdaptiveMaxPool3d(output_size, return_indices=False) [source]
Applies a 3D adaptive max pooling over an input signal composed of several input planes. The output is of size D x H x W, for any input size. The number of output features is equal to the number of input planes. Parameters
output_size – ... | torch.generated.torch.nn.adaptivemaxpool3d#torch.nn.AdaptiveMaxPool3d |
class torch.nn.AlphaDropout(p=0.5, inplace=False) [source]
Applies Alpha Dropout over the input. Alpha Dropout is a type of Dropout that maintains the self-normalizing property. For an input with zero mean and unit standard deviation, the output of Alpha Dropout maintains the original mean and standard deviation of t... | torch.generated.torch.nn.alphadropout#torch.nn.AlphaDropout |
class torch.nn.AvgPool1d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True) [source]
Applies a 1D average pooling over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N,C,L)(N, C, L) , output (N,C,Lout)(N, C, L_{out}) a... | torch.generated.torch.nn.avgpool1d#torch.nn.AvgPool1d |
class torch.nn.AvgPool2d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) [source]
Applies a 2D average pooling over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N,C,H,W)(N, C, H, W) , output ... | torch.generated.torch.nn.avgpool2d#torch.nn.AvgPool2d |
class torch.nn.AvgPool3d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) [source]
Applies a 3D average pooling over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N,C,D,H,W)(N, C, D, H, W) , ou... | torch.generated.torch.nn.avgpool3d#torch.nn.AvgPool3d |
class torch.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) [source]
Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training b... | torch.generated.torch.nn.batchnorm1d#torch.nn.BatchNorm1d |
class torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) [source]
Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Inte... | torch.generated.torch.nn.batchnorm2d#torch.nn.BatchNorm2d |
class torch.nn.BatchNorm3d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) [source]
Applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Inte... | torch.generated.torch.nn.batchnorm3d#torch.nn.BatchNorm3d |
class torch.nn.BCELoss(weight=None, size_average=None, reduce=None, reduction='mean') [source]
Creates a criterion that measures the Binary Cross Entropy between the target and the output: The unreduced (i.e. with reduction set to 'none') loss can be described as: \ell(x, y) = L = \{l_1, \dots, l_N\}^\top, \quad l_n = -w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log(1 - x... | torch.generated.torch.nn.bceloss#torch.nn.BCELoss |
class torch.nn.BCEWithLogitsLoss(weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None) [source]
This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations int... | torch.generated.torch.nn.bcewithlogitsloss#torch.nn.BCEWithLogitsLoss |
class torch.nn.Bilinear(in1_features, in2_features, out_features, bias=True) [source]
Applies a bilinear transformation to the incoming data: y = x_1^T A x_2 + b Parameters
in1_features – size of each first input sample
in2_features – size of each second input sample
out_features – size of each outpu... | torch.generated.torch.nn.bilinear#torch.nn.Bilinear |
class torch.nn.CELU(alpha=1.0, inplace=False) [source]
Applies the element-wise function: \text{CELU}(x) = \max(0, x) + \min(0, \alpha * (\exp(x/\alpha) - 1))
More details can be found in the paper Continuously Differentiable Exponential Linear Units . Parameters
alpha –... | torch.generated.torch.nn.celu#torch.nn.CELU |
class torch.nn.ChannelShuffle(groups) [source]
Divide the channels in a tensor of shape (*, C, H, W) into g groups and rearrange them as (*, \frac{C}{g}, g, H, W), while keeping the original tensor shape. Parameters
groups (int) – number of groups to divide channels in. Examples: >>> channel... | torch.generated.torch.nn.channelshuffle#torch.nn.ChannelShuffle |
class torch.nn.ConstantPad1d(padding, value) [source]
Pads the input tensor boundaries with a constant value. For N-dimensional padding, use torch.nn.functional.pad(). Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in both boundaries. If a 2-tuple, uses (padding_left\tex... | torch.generated.torch.nn.constantpad1d#torch.nn.ConstantPad1d |
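A minimal sketch of the 2-tuple form: asymmetric (left, right) padding grows the last dimension by padding_left + padding_right.

```python
import torch

pad = torch.nn.ConstantPad1d((1, 2), value=0.0)   # (padding_left, padding_right)
x = torch.ones(1, 1, 3)

out = pad(x)   # last dim: 1 + 3 + 2 = 6
```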
class torch.nn.ConstantPad2d(padding, value) [source]
Pads the input tensor boundaries with a constant value. For N-dimensional padding, use torch.nn.functional.pad(). Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in all boundaries. If a 4-tuple, uses (padding_left\text... | torch.generated.torch.nn.constantpad2d#torch.nn.ConstantPad2d |
class torch.nn.ConstantPad3d(padding, value) [source]
Pads the input tensor boundaries with a constant value. For N-dimensional padding, use torch.nn.functional.pad(). Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in all boundaries. If a 6-tuple, uses (padding_left\text... | torch.generated.torch.nn.constantpad3d#torch.nn.ConstantPad3d |
class torch.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
Applies a 1D convolution over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N,Cin,L)(N, C_{\text{i... | torch.generated.torch.nn.conv1d#torch.nn.Conv1d |
class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
Applies a 2D convolution over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N,Cin,H,W)(N, C_{\text... | torch.generated.torch.nn.conv2d#torch.nn.Conv2d |
class torch.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
Applies a 3D convolution over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N,Cin,D,H,W)(N, C_{in}... | torch.generated.torch.nn.conv3d#torch.nn.Conv3d |
class torch.nn.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros') [source]
Applies a 1D transposed convolution operator over an input image composed of several input planes. This module can be seen as the gradient of Co... | torch.generated.torch.nn.convtranspose1d#torch.nn.ConvTranspose1d |
class torch.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros') [source]
Applies a 2D transposed convolution operator over an input image composed of several input planes. This module can be seen as the gradient of Co... | torch.generated.torch.nn.convtranspose2d#torch.nn.ConvTranspose2d |
class torch.nn.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros') [source]
Applies a 3D transposed convolution operator over an input image composed of several input planes. The transposed convolution operator multiplie... | torch.generated.torch.nn.convtranspose3d#torch.nn.ConvTranspose3d |
class torch.nn.CosineEmbeddingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean') [source]
Creates a criterion that measures the loss given input tensors x_1, x_2 and a Tensor label y with values 1 or -1. This is used for measuring whether two inputs are similar or dissimilar, using the cosine ... | torch.generated.torch.nn.cosineembeddingloss#torch.nn.CosineEmbeddingLoss
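A minimal sketch of the per-sample rule, assuming the standard definition: 1 - cos(x_1, x_2) for similar pairs (y = 1), and max(0, cos(x_1, x_2) - margin) for dissimilar pairs (y = -1). Pure Python on flat lists, for illustration only:

```python
import math

def cosine(a, b, eps=1e-8):
    """Cosine of the angle between two vectors, denominator clamped at eps."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / max(na * nb, eps)

def cosine_embedding_loss(x1, x2, y, margin=0.0):
    """Pull similar pairs (y=1) together, push dissimilar pairs (y=-1) apart."""
    c = cosine(x1, x2)
    return 1.0 - c if y == 1 else max(0.0, c - margin)

print(cosine_embedding_loss([1.0, 0.0], [1.0, 0.0], y=1))   # identical + similar -> 0.0
print(cosine_embedding_loss([1.0, 0.0], [1.0, 0.0], y=-1))  # identical + dissimilar -> 1.0
```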
class torch.nn.CosineSimilarity(dim=1, eps=1e-08) [source]
Returns cosine similarity between x_1 and x_2, computed along dim. \text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)}.
Parameters
dim (int, optional) – Dimens... | torch.generated.torch.nn.cosinesimilarity#torch.nn.CosineSimilarity |
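A pure-Python sketch of the formula above, with the denominator clamped at eps as in the module (illustrative only; it operates on flat lists rather than tensors along `dim`):

```python
import math

def cosine_similarity(x1, x2, eps=1e-8):
    """cos(x1, x2) with max(||x1|| * ||x2||, eps) in the denominator,
    so zero vectors do not divide by zero."""
    dot = sum(a * b for a, b in zip(x1, x2))
    n1 = math.sqrt(sum(a * a for a in x1))
    n2 = math.sqrt(sum(b * b for b in x2))
    return dot / max(n1 * n2, eps)

print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # parallel vectors -> ~1.0
```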
class torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean') [source]
This criterion combines LogSoftmax and NLLLoss in a single class. It is useful when training a classification problem with C classes. If provided, the optional argument weight should be a 1D Te... | torch.generated.torch.nn.crossentropyloss#torch.nn.CrossEntropyLoss
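Conceptually, for a single sample this reduces to -log(softmax(logits)[target]). A numerically stable pure-Python sketch (illustrative; not the torch implementation, which is batched and supports weight/ignore_index):

```python
import math

def cross_entropy(logits, target):
    """LogSoftmax + NLLLoss for one sample: -log(softmax(logits)[target])."""
    m = max(logits)  # subtract the max for numerical stability (log-sum-exp trick)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[target]

# Uniform logits over 3 classes give loss log(3):
print(cross_entropy([0.0, 0.0, 0.0], target=1))  # ~1.0986
```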
class torch.nn.CTCLoss(blank=0, reduction='mean', zero_infinity=False) [source]
The Connectionist Temporal Classification loss. Calculates loss between a continuous (unsegmented) time series and a target sequence. CTCLoss sums over the probability of possible alignments of input to target, producing a loss value whic... | torch.generated.torch.nn.ctcloss#torch.nn.CTCLoss |
class torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0) [source]
Implements data parallelism at the module level. This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension (other objects will be copied... | torch.generated.torch.nn.dataparallel#torch.nn.DataParallel |
class torch.nn.Dropout(p=0.5, inplace=False) [source]
During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call. This has proven to be an effective technique for regulari... | torch.generated.torch.nn.dropout#torch.nn.Dropout |
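A pure-Python sketch of the standard (inverted) dropout behavior: zero each element with probability p during training and scale survivors by 1/(1-p), so the expected activation is unchanged and eval mode is the identity. Illustrative only:

```python
import random

def dropout(xs, p=0.5, training=True):
    """Zero elements with probability p; scale kept elements by 1/(1-p).
    At eval time (training=False) this is the identity."""
    if not training or p == 0.0:
        return list(xs)
    scale = 1.0 / (1.0 - p)
    return [0.0 if random.random() < p else x * scale for x in xs]

random.seed(0)
print(dropout([1.0, 1.0, 1.0, 1.0], p=0.5))  # each entry is 0.0 or 2.0
```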
class torch.nn.Dropout2d(p=0.5, inplace=False) [source]
Randomly zero out entire channels (a channel is a 2D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 2D tensor \text{input}[i, j]). Each channel will be zeroed out independently on every forward call with probabili... | torch.generated.torch.nn.dropout2d#torch.nn.Dropout2d
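To make the contrast with element-wise Dropout concrete, here is a pure-Python sketch that drops whole channels of a [C][H][W] nested list: each feature map is either entirely zeroed or entirely kept and rescaled. Illustrative only:

```python
import random

def dropout2d(x, p=0.5):
    """x: nested list [C][H][W]. Zeroes entire channels (feature maps) and
    scales kept channels by 1/(1-p), unlike Dropout which zeroes elements."""
    out = []
    for channel in x:
        if random.random() < p:
            out.append([[0.0 for _ in row] for row in channel])
        else:
            out.append([[v / (1.0 - p) for v in row] for row in channel])
    return out

random.seed(0)
print(dropout2d([[[1.0, 1.0], [1.0, 1.0]] for _ in range(3)], p=0.5))
```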
class torch.nn.Dropout3d(p=0.5, inplace=False) [source]
Randomly zero out entire channels (a channel is a 3D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 3D tensor \text{input}[i, j]). Each channel will be zeroed out independently on every forward call with probabili... | torch.generated.torch.nn.dropout3d#torch.nn.Dropout3d
class torch.nn.ELU(alpha=1.0, inplace=False) [source]
Applies the element-wise function: \text{ELU}(x) = \begin{cases} x, & \text{ if } x > 0 \\ \alpha * (\exp(x) - 1), & \text{ if } x \leq 0 \end{cases}
Parameters
alpha – the \alpha value for the ELU formulation. Default... | torch.generated.torch.nn.elu#torch.nn.ELU
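A one-line pure-Python sketch of the formula above (illustrative; the module applies it element-wise over tensors):

```python
import math

def elu(x, alpha=1.0):
    """ELU(x) = x if x > 0 else alpha * (exp(x) - 1)."""
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

print(elu(2.0))   # 2.0 (identity on the positive side)
print(elu(-1.0))  # alpha * (e^-1 - 1), about -0.6321
```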
class torch.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None) [source]
A simple lookup table that stores embeddings of a fixed dictionary and size. This module is often used to store word embeddings and retrieve them using... | torch.generated.torch.nn.embedding#torch.nn.Embedding |
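Conceptually the module is a learnable weight matrix of shape (num_embeddings, embedding_dim) indexed by integer ids. A pure-Python sketch with placeholder weights (illustrative only; the real weights are learned parameters):

```python
# Each row of `weight` is the embedding vector for one index.
weight = [
    [0.1, 0.2],  # embedding for index 0
    [0.3, 0.4],  # embedding for index 1
    [0.5, 0.6],  # embedding for index 2
]

def embed(indices):
    """Row lookup, the core of nn.Embedding's forward pass."""
    return [weight[i] for i in indices]

print(embed([2, 0, 2]))  # [[0.5, 0.6], [0.1, 0.2], [0.5, 0.6]]
```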
classmethod from_pretrained(embeddings, freeze=True, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False) [source]
Creates an Embedding instance from a given 2-dimensional FloatTensor. Parameters
embeddings (Tensor) – FloatTensor containing weights for the Embedding. First dimension ... | torch.generated.torch.nn.embedding#torch.nn.Embedding.from_pretrained |
class torch.nn.EmbeddingBag(num_embeddings, embedding_dim, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='mean', sparse=False, _weight=None, include_last_offset=False) [source]
Computes sums or means of ‘bags’ of embeddings, without instantiating the intermediate embeddings. For bags of constant length... | torch.generated.torch.nn.embeddingbag#torch.nn.EmbeddingBag |
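A pure-Python sketch of mode='mean' for a single bag, with placeholder weights (illustrative only; the real module fuses lookup and reduction so the per-index embeddings are never materialized in user code):

```python
# Placeholder embedding table; rows are per-index embedding vectors.
weight = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]

def embedding_bag_mean(indices):
    """Accumulate the looked-up rows and divide by the bag size."""
    dim = len(weight[0])
    acc = [0.0] * dim
    for i in indices:
        for d in range(dim):
            acc[d] += weight[i][d]
    return [v / len(indices) for v in acc]

print(embedding_bag_mean([0, 2]))  # [3.0, 4.0]
```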