classmethod from_pretrained(embeddings, freeze=True, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='mean', sparse=False, include_last_offset=False) [source] Creates an EmbeddingBag instance from a given 2-dimensional FloatTensor. Parameters embeddings (Tensor) – FloatTensor containing weights for the Em...
torch.generated.torch.nn.embeddingbag#torch.nn.EmbeddingBag.from_pretrained
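A minimal sketch of the classmethod in use (the weight values are illustrative placeholders):
>>> import torch
>>> import torch.nn as nn
>>> weights = torch.FloatTensor([[1.0, 2.3], [4.5, 6.7], [8.9, 0.1]])
>>> bag = nn.EmbeddingBag.from_pretrained(weights, mode='sum')  # freeze=True by default
>>> input = torch.tensor([0, 2])
>>> offsets = torch.tensor([0])  # a single bag covering both indices
>>> bag(input, offsets)  # sums rows 0 and 2 of the weight matrix
tensor([[9.9000, 2.4000]])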
class torch.nn.Flatten(start_dim=1, end_dim=-1) [source] Flattens a contiguous range of dims into a tensor. For use with Sequential. Shape: Input: (N, *dims); Output: (N, \prod *dims) (for the default case). Parameters start_dim – first dim to flatten (default = 1). end_dim – last dim ...
torch.generated.torch.nn.flatten#torch.nn.Flatten
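A minimal sketch of Flatten inside a Sequential pipeline (the layer sizes are illustrative):
>>> import torch
>>> import torch.nn as nn
>>> model = nn.Sequential(nn.Conv2d(1, 8, 3), nn.Flatten())  # flattens dims 1..-1 by default
>>> x = torch.randn(4, 1, 28, 28)
>>> model(x).shape  # 8 channels * 26 * 26 spatial positions
torch.Size([4, 5408])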
add_module(name, module) Adds a child module to the current module. The module can be accessed as an attribute using the given name. Parameters name (string) – name of the child module. The child module can be accessed from this module using the given name. module (Module) – child module to be added to the module...
torch.generated.torch.nn.flatten#torch.nn.Flatten.add_module
apply(fn) Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init). Parameters fn (Module -> None) – function to be applied to each submodule Returns self Return type Module Example: >>> @torch....
torch.generated.torch.nn.flatten#torch.nn.Flatten.apply
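A minimal sketch of apply() for parameter initialization (the choice of initializer is an illustrative assumption):
>>> import torch.nn as nn
>>> def init_weights(m):
...     if isinstance(m, nn.Linear):
...         nn.init.xavier_uniform_(m.weight)
...
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> _ = net.apply(init_weights)  # visits both Linear children, then the Sequential itself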
bfloat16() Casts all floating point parameters and buffers to bfloat16 datatype. Returns self Return type Module
torch.generated.torch.nn.flatten#torch.nn.Flatten.bfloat16
buffers(recurse=True) Returns an iterator over module buffers. Parameters recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields torch.Tensor – module buffer Example: >>> for buf in model.buffers(): >>> p...
torch.generated.torch.nn.flatten#torch.nn.Flatten.buffers
children() Returns an iterator over immediate children modules. Yields Module – a child module
torch.generated.torch.nn.flatten#torch.nn.Flatten.children
cpu() Moves all model parameters and buffers to the CPU. Returns self Return type Module
torch.generated.torch.nn.flatten#torch.nn.Flatten.cpu
cuda(device=None) Moves all model parameters and buffers to the GPU. This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized. Parameters device (int, optional) – if specified, all parameters will b...
torch.generated.torch.nn.flatten#torch.nn.Flatten.cuda
double() Casts all floating point parameters and buffers to double datatype. Returns self Return type Module
torch.generated.torch.nn.flatten#torch.nn.Flatten.double
eval() Sets the module in evaluation mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. This is equivalent to self.train(False). Returns self Return type Modul...
torch.generated.torch.nn.flatten#torch.nn.Flatten.eval
float() Casts all floating point parameters and buffers to float datatype. Returns self Return type Module
torch.generated.torch.nn.flatten#torch.nn.Flatten.float
half() Casts all floating point parameters and buffers to half datatype. Returns self Return type Module
torch.generated.torch.nn.flatten#torch.nn.Flatten.half
load_state_dict(state_dict, strict=True) Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function. Parameters state_dict (dict) – a dict containing parameters and p...
torch.generated.torch.nn.flatten#torch.nn.Flatten.load_state_dict
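A minimal sketch of the save/load round trip (the file name is a placeholder):
>>> import torch
>>> import torch.nn as nn
>>> model = nn.Linear(4, 2)
>>> torch.save(model.state_dict(), 'model.pt')
>>> clone = nn.Linear(4, 2)
>>> clone.load_state_dict(torch.load('model.pt'), strict=True)
<All keys matched successfully>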
modules() Returns an iterator over all modules in the network. Yields Module – a module in the network Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2) >>> net = nn.Sequential(l, l) >>> for idx, m in enumerate(net.modules()):...
torch.generated.torch.nn.flatten#torch.nn.Flatten.modules
named_buffers(prefix='', recurse=True) Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself. Parameters prefix (str) – prefix to prepend to all buffer names. recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields on...
torch.generated.torch.nn.flatten#torch.nn.Flatten.named_buffers
named_children() Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself. Yields (string, Module) – Tuple containing a name and child module Example: >>> for name, module in model.named_children(): >>> if name in ['conv4', 'conv5']: >>> pr...
torch.generated.torch.nn.flatten#torch.nn.Flatten.named_children
named_modules(memo=None, prefix='') Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself. Yields (string, Module) – Tuple of name and module Note Duplicate modules are returned only once. In the following example, l will be returned only once. Ex...
torch.generated.torch.nn.flatten#torch.nn.Flatten.named_modules
named_parameters(prefix='', recurse=True) Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself. Parameters prefix (str) – prefix to prepend to all parameter names. recurse (bool) – if True, then yields parameters of this module and all submodules. Ot...
torch.generated.torch.nn.flatten#torch.nn.Flatten.named_parameters
parameters(recurse=True) Returns an iterator over module parameters. This is typically passed to an optimizer. Parameters recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module. Yields Parameter – module paramete...
torch.generated.torch.nn.flatten#torch.nn.Flatten.parameters
register_backward_hook(hook) Registers a backward hook on the module. This function is deprecated in favor of nn.Module.register_full_backward_hook() and the behavior of this function will change in future versions. Returns a handle that can be used to remove the added hook by calling handle.remove() Return type ...
torch.generated.torch.nn.flatten#torch.nn.Flatten.register_backward_hook
register_buffer(name, tensor, persistent=True) Adds a buffer to the module. This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will be saved...
torch.generated.torch.nn.flatten#torch.nn.Flatten.register_buffer
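A minimal sketch of a buffer in a custom module (the module and buffer names are illustrative):
>>> import torch
>>> import torch.nn as nn
>>> class Running(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.register_buffer('running_mean', torch.zeros(4))  # saved in state_dict, not trained
...
>>> m = Running()
>>> list(m.parameters()), list(m.buffers())
([], [tensor([0., 0., 0., 0.])])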
register_forward_hook(hook) Registers a forward hook on the module. The hook will be called every time after forward() has computed an output. It should have the following signature: hook(module, input, output) -> None or modified output The input contains only the positional arguments given to the module. Keyword a...
torch.generated.torch.nn.flatten#torch.nn.Flatten.register_forward_hook
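A minimal sketch of a forward hook that records output shapes (the hook body is illustrative):
>>> import torch
>>> import torch.nn as nn
>>> shapes = []
>>> def hook(module, input, output):
...     shapes.append(output.shape)  # runs after every forward()
...
>>> layer = nn.Linear(3, 5)
>>> handle = layer.register_forward_hook(hook)
>>> _ = layer(torch.randn(2, 3))
>>> handle.remove()  # detach the hook when no longer needed
>>> shapes
[torch.Size([2, 5])]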
register_forward_pre_hook(hook) Registers a forward pre-hook on the module. The hook will be called every time before forward() is invoked. It should have the following signature: hook(module, input) -> None or modified input The input contains only the positional arguments given to the module. Keyword arguments won...
torch.generated.torch.nn.flatten#torch.nn.Flatten.register_forward_pre_hook
register_full_backward_hook(hook) Registers a backward hook on the module. The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature: hook(module, grad_input, grad_output) -> tuple(Tensor) or None The grad_input and grad_output are tuple...
torch.generated.torch.nn.flatten#torch.nn.Flatten.register_full_backward_hook
register_parameter(name, param) Adds a parameter to the module. The parameter can be accessed as an attribute using the given name. Parameters name (string) – name of the parameter. The parameter can be accessed from this module using the given name. param (Parameter) – parameter to be added to the module.
torch.generated.torch.nn.flatten#torch.nn.Flatten.register_parameter
requires_grad_(requires_grad=True) Changes whether autograd should record operations on parameters in this module. This method sets the parameters’ requires_grad attributes in-place. This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training). Parame...
torch.generated.torch.nn.flatten#torch.nn.Flatten.requires_grad_
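A minimal sketch of freezing one submodule for fine-tuning (the two-layer layout is illustrative):
>>> import torch.nn as nn
>>> net = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))
>>> _ = net[0].requires_grad_(False)  # freeze the first layer; only net[1] accumulates gradients
>>> [p.requires_grad for p in net.parameters()]
[False, False, True, True]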
state_dict(destination=None, prefix='', keep_vars=False) Returns a dictionary containing the whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are the corresponding parameter and buffer names. Returns a dictionary containing the whole state of the module Return ty...
torch.generated.torch.nn.flatten#torch.nn.Flatten.state_dict
to(*args, **kwargs) Moves and/or casts the parameters and buffers. This can be called as to(device=None, dtype=None, non_blocking=False), to(dtype, non_blocking=False), to(tensor, non_blocking=False), or to(memory_format=torch.channels_last). Its signature is similar to torch.Tensor.to(), but only accepts fl...
torch.generated.torch.nn.flatten#torch.nn.Flatten.to
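A minimal sketch of the overloads (CPU-only so it runs anywhere; the dtype choices are illustrative):
>>> import torch
>>> import torch.nn as nn
>>> m = nn.Linear(2, 2)
>>> _ = m.to(torch.double)                      # dtype-only cast
>>> _ = m.to(torch.device('cpu'), torch.float)  # device and dtype together
>>> m.weight.dtype
torch.float32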
train(mode=True) Sets the module in training mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. Parameters mode (bool) – whether to set training mode (True) or eva...
torch.generated.torch.nn.flatten#torch.nn.Flatten.train
type(dst_type) Casts all parameters and buffers to dst_type. Parameters dst_type (type or string) – the desired type Returns self Return type Module
torch.generated.torch.nn.flatten#torch.nn.Flatten.type
xpu(device=None) Moves all model parameters and buffers to the XPU. This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on XPU while being optimized. Parameters device (int, optional) – if specified, all parameters will be...
torch.generated.torch.nn.flatten#torch.nn.Flatten.xpu
zero_grad(set_to_none=False) Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context. Parameters set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details.
torch.generated.torch.nn.flatten#torch.nn.Flatten.zero_grad
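A minimal sketch of zero_grad() between manual backward passes (a Linear stands in here, since Flatten itself has no parameters):
>>> import torch
>>> import torch.nn as nn
>>> model = nn.Linear(3, 1)
>>> model(torch.randn(2, 3)).sum().backward()
>>> model.zero_grad(set_to_none=True)  # setting to None can save memory versus zero-filling
>>> model.weight.grad is None
True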
class torch.nn.Fold(output_size, kernel_size, dilation=1, padding=0, stride=1) [source] Combines an array of sliding local blocks into a large containing tensor. Consider a batched input tensor containing sliding local blocks, e.g., patches of images, of shape (N, C \times \prod(\text{kernel\_size}), L)...
torch.generated.torch.nn.fold#torch.nn.Fold
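A minimal sketch of Fold reassembling sliding blocks (the shapes are illustrative; L = 3 * 4 block positions for a 2x2 kernel over a 4x5 output):
>>> import torch
>>> import torch.nn as nn
>>> fold = nn.Fold(output_size=(4, 5), kernel_size=(2, 2))
>>> blocks = torch.randn(1, 3 * 2 * 2, 12)  # (N, C * prod(kernel_size), L)
>>> fold(blocks).shape
torch.Size([1, 3, 4, 5])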
class torch.nn.FractionalMaxPool2d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None) [source] Applies a 2D fractional max pooling over an input signal composed of several input planes. Fractional MaxPooling is described in detail in the paper Fractional MaxPooling by Ben Gr...
torch.generated.torch.nn.fractionalmaxpool2d#torch.nn.FractionalMaxPool2d
torch.nn.functional Convolution functions conv1d torch.nn.functional.conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor Applies a 1D convolution over an input signal composed of several input planes. This operator supports TensorFloat32. See Conv1d for details and output shape. No...
torch.nn.functional
torch.nn.functional.adaptive_avg_pool1d(input, output_size) → Tensor Applies a 1D adaptive average pooling over an input signal composed of several input planes. See AdaptiveAvgPool1d for details and output shape. Parameters output_size – the target output size (single integer)
torch.nn.functional#torch.nn.functional.adaptive_avg_pool1d
torch.nn.functional.adaptive_avg_pool2d(input, output_size) [source] Applies a 2D adaptive average pooling over an input signal composed of several input planes. See AdaptiveAvgPool2d for details and output shape. Parameters output_size – the target output size (single integer or double-integer tuple)
torch.nn.functional#torch.nn.functional.adaptive_avg_pool2d
torch.nn.functional.adaptive_avg_pool3d(input, output_size) [source] Applies a 3D adaptive average pooling over an input signal composed of several input planes. See AdaptiveAvgPool3d for details and output shape. Parameters output_size – the target output size (single integer or triple-integer tuple)
torch.nn.functional#torch.nn.functional.adaptive_avg_pool3d
torch.nn.functional.adaptive_max_pool1d(*args, **kwargs) Applies a 1D adaptive max pooling over an input signal composed of several input planes. See AdaptiveMaxPool1d for details and output shape. Parameters output_size – the target output size (single integer) return_indices – whether to return pooling indices...
torch.nn.functional#torch.nn.functional.adaptive_max_pool1d
torch.nn.functional.adaptive_max_pool2d(*args, **kwargs) Applies a 2D adaptive max pooling over an input signal composed of several input planes. See AdaptiveMaxPool2d for details and output shape. Parameters output_size – the target output size (single integer or double-integer tuple) return_indices – whether t...
torch.nn.functional#torch.nn.functional.adaptive_max_pool2d
torch.nn.functional.adaptive_max_pool3d(*args, **kwargs) Applies a 3D adaptive max pooling over an input signal composed of several input planes. See AdaptiveMaxPool3d for details and output shape. Parameters output_size – the target output size (single integer or triple-integer tuple) return_indices – whether t...
torch.nn.functional#torch.nn.functional.adaptive_max_pool3d
torch.nn.functional.affine_grid(theta, size, align_corners=None) [source] Generates a 2D or 3D flow field (sampling grid), given a batch of affine matrices theta. Note This function is often used in conjunction with grid_sample() to build Spatial Transformer Networks. Parameters theta (Tensor) – input batch of...
torch.nn.functional#torch.nn.functional.affine_grid
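A minimal sketch pairing affine_grid() with grid_sample(); theta here is the identity transform, so the output reproduces the input up to interpolation:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(1, 1, 4, 4)
>>> theta = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]])  # (N, 2, 3) affine matrices
>>> grid = F.affine_grid(theta, x.size(), align_corners=False)
>>> out = F.grid_sample(x, grid, align_corners=False)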
torch.nn.functional.alpha_dropout(input, p=0.5, training=False, inplace=False) [source] Applies alpha dropout to the input. See AlphaDropout for details.
torch.nn.functional#torch.nn.functional.alpha_dropout
torch.nn.functional.avg_pool1d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True) → Tensor Applies a 1D average pooling over an input signal composed of several input planes. See AvgPool1d for details and output shape. Parameters input – input tensor of shape (minibatch,in_channe...
torch.nn.functional#torch.nn.functional.avg_pool1d
torch.nn.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) → Tensor Applies 2D average-pooling operation in kH \times kW regions by step size sH \times sW steps. The number of output features is equal to the number of input pl...
torch.nn.functional#torch.nn.functional.avg_pool2d
torch.nn.functional.avg_pool3d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) → Tensor Applies 3D average-pooling operation in kT \times kH \times kW regions by step size sT \times sH \times sW steps. The number of output features is equal...
torch.nn.functional#torch.nn.functional.avg_pool3d
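A minimal sketch of the functional average-pooling calls (the 2D case; shapes are illustrative):
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(1, 3, 8, 8)
>>> F.avg_pool2d(x, kernel_size=2).shape  # stride defaults to kernel_size
torch.Size([1, 3, 4, 4])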
torch.nn.functional.batch_norm(input, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-05) [source] Applies Batch Normalization for each channel across a batch of data. See BatchNorm1d, BatchNorm2d, BatchNorm3d for details.
torch.nn.functional#torch.nn.functional.batch_norm
torch.nn.functional.bilinear(input1, input2, weight, bias=None) [source] Applies a bilinear transformation to the incoming data: y = x_1^T A x_2 + b Shape: input1: (N, *, H_{in1}) where H_{in1} = \text{in1\_features} and * means any number of additional dimensions. All but the...
torch.nn.functional#torch.nn.functional.bilinear
torch.nn.functional.binary_cross_entropy(input, target, weight=None, size_average=None, reduce=None, reduction='mean') [source] Function that measures the Binary Cross Entropy between the target and the output. See BCELoss for details. Parameters input – Tensor of arbitrary shape target – Tensor of the same shap...
torch.nn.functional#torch.nn.functional.binary_cross_entropy
torch.nn.functional.binary_cross_entropy_with_logits(input, target, weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None) [source] Function that measures Binary Cross Entropy between target and output logits. See BCEWithLogitsLoss for details. Parameters input – Tensor of arbitrary shape...
torch.nn.functional#torch.nn.functional.binary_cross_entropy_with_logits
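A minimal sketch contrasting the two BCE variants (random values, illustrative only):
>>> import torch
>>> import torch.nn.functional as F
>>> logits = torch.randn(3)
>>> target = torch.empty(3).random_(2)  # 0/1 targets
>>> loss = F.binary_cross_entropy_with_logits(logits, target)  # sigmoid fused in; numerically stabler
>>> same = F.binary_cross_entropy(torch.sigmoid(logits), target)  # mathematically equivalent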
torch.nn.functional.celu(input, alpha=1., inplace=False) → Tensor [source] Applies element-wise, \text{CELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x/\alpha) - 1)). See CELU for more details.
torch.nn.functional#torch.nn.functional.celu
torch.nn.functional.conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor Applies a 1D convolution over an input signal composed of several input planes. This operator supports TensorFloat32. See Conv1d for details and output shape. Note In some circumstances when given tensors on a CU...
torch.nn.functional#torch.nn.functional.conv1d
torch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor Applies a 2D convolution over an input image composed of several input planes. This operator supports TensorFloat32. See Conv2d for details and output shape. Note In some circumstances when given tensors on a CUD...
torch.nn.functional#torch.nn.functional.conv2d
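A minimal sketch of the functional convolution; the weight layout is (out_channels, in_channels/groups, kH, kW):
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(1, 3, 8, 8)
>>> w = torch.randn(6, 3, 3, 3)
>>> F.conv2d(x, w, padding=1).shape  # padding=1 keeps the spatial size for a 3x3 kernel
torch.Size([1, 6, 8, 8])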
torch.nn.functional.conv3d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) → Tensor Applies a 3D convolution over an input image composed of several input planes. This operator supports TensorFloat32. See Conv3d for details and output shape. Note In some circumstances when given tensors on a CUD...
torch.nn.functional#torch.nn.functional.conv3d
torch.nn.functional.conv_transpose1d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called “deconvolution”. This operator supports TensorFloat32. See Conv...
torch.nn.functional#torch.nn.functional.conv_transpose1d
torch.nn.functional.conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor Applies a 2D transposed convolution operator over an input image composed of several input planes, sometimes also called “deconvolution”. This operator supports TensorFloat32. See ConvT...
torch.nn.functional#torch.nn.functional.conv_transpose2d
torch.nn.functional.conv_transpose3d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) → Tensor Applies a 3D transposed convolution operator over an input image composed of several input planes, sometimes also called “deconvolution” This operator supports TensorFloat32. See ConvTr...
torch.nn.functional#torch.nn.functional.conv_transpose3d
torch.nn.functional.cosine_embedding_loss(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') → Tensor [source] See CosineEmbeddingLoss for details.
torch.nn.functional#torch.nn.functional.cosine_embedding_loss
torch.nn.functional.cosine_similarity(x1, x2, dim=1, eps=1e-8) → Tensor Returns cosine similarity between x1 and x2, computed along dim. \text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)} Parameters x1 (Tensor) – First input...
torch.nn.functional#torch.nn.functional.cosine_similarity
torch.nn.functional.cross_entropy(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean') [source] This criterion combines log_softmax and nll_loss in a single function. See CrossEntropyLoss for details. Parameters input (Tensor) – (N, C) where C = number of classes ...
torch.nn.functional#torch.nn.functional.cross_entropy
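A minimal sketch showing that cross_entropy() takes raw logits and class indices, plus its decomposition into log_softmax and nll_loss (values are illustrative):
>>> import torch
>>> import torch.nn.functional as F
>>> logits = torch.randn(4, 10)        # (N, C), no softmax applied beforehand
>>> target = torch.randint(10, (4,))   # class indices in [0, C)
>>> loss = F.cross_entropy(logits, target)
>>> same = F.nll_loss(F.log_softmax(logits, dim=1), target)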
torch.nn.functional.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0, reduction='mean', zero_infinity=False) [source] The Connectionist Temporal Classification loss. See CTCLoss for details. Note In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a ...
torch.nn.functional#torch.nn.functional.ctc_loss
torch.nn.functional.dropout(input, p=0.5, training=True, inplace=False) [source] During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. See Dropout for details. Parameters p – probability of an element to be zeroed. Default: 0.5 t...
torch.nn.functional#torch.nn.functional.dropout
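A minimal sketch highlighting the training flag, which the functional form does not infer from any module state:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.ones(5)
>>> F.dropout(x, p=0.5, training=True)   # random elements zeroed, survivors scaled by 1/(1-p)
>>> F.dropout(x, p=0.5, training=False)  # identity at evaluation time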
torch.nn.functional.dropout2d(input, p=0.5, training=True, inplace=False) [source] Randomly zero out entire channels of the input tensor (a channel is a 2D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 2D tensor \text{input}[i, j]). Each channel will be zeroed out in...
torch.nn.functional#torch.nn.functional.dropout2d
torch.nn.functional.dropout3d(input, p=0.5, training=True, inplace=False) [source] Randomly zero out entire channels of the input tensor (a channel is a 3D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 3D tensor \text{input}[i, j]). Each channel will be zeroed out in...
torch.nn.functional#torch.nn.functional.dropout3d
torch.nn.functional.elu(input, alpha=1.0, inplace=False) [source] Applies element-wise, \text{ELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x) - 1)). See ELU for more details.
torch.nn.functional#torch.nn.functional.elu
torch.nn.functional.elu_(input, alpha=1.) → Tensor In-place version of elu().
torch.nn.functional#torch.nn.functional.elu_
torch.nn.functional.embedding(input, weight, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False) [source] A simple lookup table that looks up embeddings in a fixed dictionary of fixed size. This module is often used to retrieve word embeddings using indices. The input to the module is a...
torch.nn.functional#torch.nn.functional.embedding
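A minimal sketch of the functional lookup (the weight matrix is an illustrative placeholder):
>>> import torch
>>> import torch.nn.functional as F
>>> weight = torch.rand(10, 3)            # 10-entry vocabulary, 3-dim embeddings
>>> input = torch.tensor([[1, 2], [4, 5]])
>>> F.embedding(input, weight).shape      # one 3-dim row per index
torch.Size([2, 2, 3])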
torch.nn.functional.embedding_bag(input, weight, offsets=None, max_norm=None, norm_type=2, scale_grad_by_freq=False, mode='mean', sparse=False, per_sample_weights=None, include_last_offset=False) [source] Computes sums, means or maxes of bags of embeddings, without instantiating the intermediate embeddings. See torch...
torch.nn.functional#torch.nn.functional.embedding_bag
torch.nn.functional.feature_alpha_dropout(input, p=0.5, training=False, inplace=False) [source] Randomly masks out entire channels of the input tensor (a channel is a feature map, e.g. the j-th channel of the i-th sample in the batched input is a tensor \text{input}[i, j]). Instead of setting activation...
torch.nn.functional#torch.nn.functional.feature_alpha_dropout
torch.nn.functional.fold(input, output_size, kernel_size, dilation=1, padding=0, stride=1) [source] Combines an array of sliding local blocks into a large containing tensor. Warning Currently, only 3-D output tensors (unfolded batched image-like tensors) are supported. See torch.nn.Fold for details
torch.nn.functional#torch.nn.functional.fold
torch.nn.functional.gelu(input) → Tensor [source] Applies element-wise the function \text{GELU}(x) = x * \Phi(x) where \Phi(x) is the Cumulative Distribution Function for the Gaussian Distribution. See Gaussian Error Linear Units (GELUs).
torch.nn.functional#torch.nn.functional.gelu
torch.nn.functional.glu(input, dim=-1) → Tensor [source] The gated linear unit. Computes: \text{GLU}(a, b) = a \otimes \sigma(b) where input is split in half along dim to form a and b, \sigma is the sigmoid function and \otimes is the element-wise product between matrices. See Language Modeling ...
torch.nn.functional#torch.nn.functional.glu
torch.nn.functional.grid_sample(input, grid, mode='bilinear', padding_mode='zeros', align_corners=None) [source] Given an input and a flow-field grid, computes the output using input values and pixel locations from grid. Currently, only spatial (4-D) and volumetric (5-D) input are supported. In the spatial (4-D) case...
torch.nn.functional#torch.nn.functional.grid_sample
torch.nn.functional.gumbel_softmax(logits, tau=1, hard=False, eps=1e-10, dim=-1) [source] Samples from the Gumbel-Softmax distribution (Link 1 Link 2) and optionally discretizes. Parameters logits – […, num_features] unnormalized log probabilities tau – non-negative scalar temperature hard – if True, the return...
torch.nn.functional#torch.nn.functional.gumbel_softmax
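A minimal sketch of a straight-through sample (the logits are illustrative):
>>> import torch
>>> import torch.nn.functional as F
>>> logits = torch.randn(2, 5, requires_grad=True)
>>> y = F.gumbel_softmax(logits, tau=1, hard=True)  # one-hot in the forward pass, soft gradients in the backward pass
>>> y.sum(dim=-1)  # each row is a valid one-hot sample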
torch.nn.functional.hardshrink(input, lambd=0.5) → Tensor [source] Applies the hard shrinkage function element-wise. See Hardshrink for more details.
torch.nn.functional#torch.nn.functional.hardshrink
torch.nn.functional.hardsigmoid(input) → Tensor [source] Applies the element-wise function \text{Hardsigmoid}(x) = \begin{cases} 0 & \text{if } x \le -3, \\ 1 & \text{if } x \ge +3, \\ x/6 + 1/2 & \text{otherwise} \end{cases} Parameters inplace – If set to Tr...
torch.nn.functional#torch.nn.functional.hardsigmoid
torch.nn.functional.hardswish(input, inplace=False) [source] Applies the hardswish function, element-wise, as described in the paper: Searching for MobileNetV3. \text{Hardswish}(x) = \begin{cases} 0 & \text{if } x \le -3, \\ x & \text{if } x \ge +3, \\ x \cdot (x + 3...
torch.nn.functional#torch.nn.functional.hardswish
torch.nn.functional.hardtanh(input, min_val=-1., max_val=1., inplace=False) → Tensor [source] Applies the HardTanh function element-wise. See Hardtanh for more details.
torch.nn.functional#torch.nn.functional.hardtanh
torch.nn.functional.hardtanh_(input, min_val=-1., max_val=1.) → Tensor In-place version of hardtanh().
torch.nn.functional#torch.nn.functional.hardtanh_
torch.nn.functional.hinge_embedding_loss(input, target, margin=1.0, size_average=None, reduce=None, reduction='mean') → Tensor [source] See HingeEmbeddingLoss for details.
torch.nn.functional#torch.nn.functional.hinge_embedding_loss
torch.nn.functional.instance_norm(input, running_mean=None, running_var=None, weight=None, bias=None, use_input_stats=True, momentum=0.1, eps=1e-05) [source] Applies Instance Normalization for each channel in each data sample in a batch. See InstanceNorm1d, InstanceNorm2d, InstanceNorm3d for details.
torch.nn.functional#torch.nn.functional.instance_norm
torch.nn.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None) [source] Down/up samples the input to either the given size or the given scale_factor. The algorithm used for interpolation is determined by mode. Currently temporal, spatial and volume...
torch.nn.functional#torch.nn.functional.interpolate
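A minimal sketch of the two ways to specify the target resolution (sizes are illustrative):
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(1, 3, 8, 8)
>>> F.interpolate(x, size=(16, 16), mode='nearest').shape
torch.Size([1, 3, 16, 16])
>>> F.interpolate(x, scale_factor=0.5, mode='bilinear', align_corners=False).shape
torch.Size([1, 3, 4, 4])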
torch.nn.functional.kl_div(input, target, size_average=None, reduce=None, reduction='mean', log_target=False) [source] The Kullback-Leibler divergence Loss See KLDivLoss for details. Parameters input – Tensor of arbitrary shape target – Tensor of the same shape as input size_average (bool, optional) – Deprecate...
torch.nn.functional#torch.nn.functional.kl_div
torch.nn.functional.l1_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor [source] Function that takes the mean element-wise absolute value difference. See L1Loss for details.
torch.nn.functional#torch.nn.functional.l1_loss
torch.nn.functional.layer_norm(input, normalized_shape, weight=None, bias=None, eps=1e-05) [source] Applies Layer Normalization over the last certain number of dimensions. See LayerNorm for details.
torch.nn.functional#torch.nn.functional.layer_norm
torch.nn.functional.leaky_relu(input, negative_slope=0.01, inplace=False) → Tensor [source] Applies element-wise, \text{LeakyReLU}(x) = \max(0, x) + \text{negative\_slope} * \min(0, x). See LeakyReLU for more details.
torch.nn.functional#torch.nn.functional.leaky_relu
torch.nn.functional.leaky_relu_(input, negative_slope=0.01) → Tensor In-place version of leaky_relu().
torch.nn.functional#torch.nn.functional.leaky_relu_
torch.nn.functional.linear(input, weight, bias=None) [source] Applies a linear transformation to the incoming data: y = xA^T + b. This operator supports TensorFloat32. Shape: Input: (N, *, \text{in\_features}) where N is the batch size and * means any number of additional dimensions. Weight: (out_features,i...
torch.nn.functional#torch.nn.functional.linear
torch.nn.functional.local_response_norm(input, size, alpha=0.0001, beta=0.75, k=1.0) [source] Applies local response normalization over an input signal composed of several input planes, where channels occupy the second dimension. Applies normalization across channels. See LocalResponseNorm for details.
torch.nn.functional#torch.nn.functional.local_response_norm
torch.nn.functional.logsigmoid(input) → Tensor Applies element-wise \text{LogSigmoid}(x_i) = \log\left(\frac{1}{1 + \exp(-x_i)}\right). See LogSigmoid for more details.
torch.nn.functional#torch.nn.functional.logsigmoid
torch.nn.functional.log_softmax(input, dim=None, _stacklevel=3, dtype=None) [source] Applies a softmax followed by a logarithm. While mathematically equivalent to log(softmax(x)), doing these two operations separately is slower and numerically unstable. This function uses an alternative formulation to compute the ou...
torch.nn.functional#torch.nn.functional.log_softmax
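A minimal sketch of the numerical-stability point (the large logit is deliberately extreme):
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.tensor([[1000., 0.]])
>>> torch.log(F.softmax(x, dim=1))  # second entry underflows to log(0) = -inf
tensor([[0., -inf]])
>>> F.log_softmax(x, dim=1)         # the fused form stays finite
tensor([[0., -1000.]])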
torch.nn.functional.lp_pool1d(input, norm_type, kernel_size, stride=None, ceil_mode=False) [source] Applies a 1D power-average pooling over an input signal composed of several input planes. If the sum of all inputs to the power of p is zero, the gradient is set to zero as well. See LPPool1d for details.
torch.nn.functional#torch.nn.functional.lp_pool1d
torch.nn.functional.lp_pool2d(input, norm_type, kernel_size, stride=None, ceil_mode=False) [source] Applies a 2D power-average pooling over an input signal composed of several input planes. If the sum of all inputs to the power of p is zero, the gradient is set to zero as well. See LPPool2d for details.
torch.nn.functional#torch.nn.functional.lp_pool2d
torch.nn.functional.margin_ranking_loss(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') → Tensor [source] See MarginRankingLoss for details.
torch.nn.functional#torch.nn.functional.margin_ranking_loss
torch.nn.functional.max_pool1d(*args, **kwargs) Applies a 1D max pooling over an input signal composed of several input planes. See MaxPool1d for details.
torch.nn.functional#torch.nn.functional.max_pool1d
torch.nn.functional.max_pool2d(*args, **kwargs) Applies a 2D max pooling over an input signal composed of several input planes. See MaxPool2d for details.
torch.nn.functional#torch.nn.functional.max_pool2d
torch.nn.functional.max_pool3d(*args, **kwargs) Applies a 3D max pooling over an input signal composed of several input planes. See MaxPool3d for details.
torch.nn.functional#torch.nn.functional.max_pool3d
torch.nn.functional.max_unpool1d(input, indices, kernel_size, stride=None, padding=0, output_size=None) [source] Computes a partial inverse of MaxPool1d. See MaxUnpool1d for details.
torch.nn.functional#torch.nn.functional.max_unpool1d
torch.nn.functional.max_unpool2d(input, indices, kernel_size, stride=None, padding=0, output_size=None) [source] Computes a partial inverse of MaxPool2d. See MaxUnpool2d for details.
torch.nn.functional#torch.nn.functional.max_unpool2d
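A minimal sketch pairing max_pool2d(return_indices=True) with its partial inverse (shapes are illustrative):
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(1, 1, 4, 4)
>>> pooled, indices = F.max_pool2d(x, kernel_size=2, return_indices=True)
>>> F.max_unpool2d(pooled, indices, kernel_size=2).shape  # maxima restored in place, all other entries zero
torch.Size([1, 1, 4, 4])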