classmethod from_float(mod) [source] Create a qat module from a float module or qparams_dict. Args: mod – a float module, either produced by torch.quantization utilities or directly from the user
torch.nn.qat#torch.nn.qat.Linear.from_float
class torch.nn.quantized.BatchNorm2d(num_features, eps=1e-05, momentum=0.1) [source] This is the quantized version of BatchNorm2d.
torch.nn.quantized#torch.nn.quantized.BatchNorm2d
class torch.nn.quantized.BatchNorm3d(num_features, eps=1e-05, momentum=0.1) [source] This is the quantized version of BatchNorm3d.
torch.nn.quantized#torch.nn.quantized.BatchNorm3d
class torch.nn.quantized.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source] Applies a 1D convolution over a quantized input signal composed of several quantized input planes. For details on input arguments, parameters, and implementation...
torch.nn.quantized#torch.nn.quantized.Conv1d
classmethod from_float(mod) [source] Creates a quantized module from a float module or qparams_dict. Parameters mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user
torch.nn.quantized#torch.nn.quantized.Conv1d.from_float
class torch.nn.quantized.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source] Applies a 2D convolution over a quantized input signal composed of several quantized input planes. For details on input arguments, parameters, and implementation...
torch.nn.quantized#torch.nn.quantized.Conv2d
classmethod from_float(mod) [source] Creates a quantized module from a float module or qparams_dict. Parameters mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user
torch.nn.quantized#torch.nn.quantized.Conv2d.from_float
class torch.nn.quantized.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source] Applies a 3D convolution over a quantized input signal composed of several quantized input planes. For details on input arguments, parameters, and implementation...
torch.nn.quantized#torch.nn.quantized.Conv3d
classmethod from_float(mod) [source] Creates a quantized module from a float module or qparams_dict. Parameters mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user
torch.nn.quantized#torch.nn.quantized.Conv3d.from_float
class torch.nn.quantized.DeQuantize [source] Dequantizes an incoming tensor Examples:: >>> input = torch.tensor([[1., -1.], [1., -1.]]) >>> scale, zero_point, dtype = 1.0, 2, torch.qint8 >>> qm = Quantize(scale, zero_point, dtype) >>> quantized_input = qm(input) >>> dqm = DeQuantize() >>> dequantized = dqm(quantize...
torch.nn.quantized#torch.nn.quantized.DeQuantize
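Quantize and DeQuantize are inverses of each other up to rounding error. A pure-Python sketch of the standard affine scheme behind them, using the example scale and zero_point from the entry above (the qint8 clamping range [-128, 127] is an assumption here, not taken from the entry):

```python
# Affine quantization: q = clamp(round(x / scale) + zero_point, qmin, qmax)
# Dequantization:      x_hat = (q - zero_point) * scale

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Map a float to an integer in the quantized domain."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Map a quantized integer back to an (approximate) float."""
    return (q - zero_point) * scale

# Round-trip the example values from the entry above.
scale, zero_point = 1.0, 2
for x in (1.0, -1.0):
    q = quantize(x, scale, zero_point)
    print(x, q, dequantize(q, scale, zero_point))
```

Values representable exactly on the quantization grid round-trip losslessly; everything else incurs at most half a scale step of error.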
torch.nn.quantized.dynamic Linear class torch.nn.quantized.dynamic.Linear(in_features, out_features, bias_=True, dtype=torch.qint8) [source] A dynamic quantized linear module with floating point tensor as inputs and outputs. We adopt the same interface as torch.nn.Linear, please see https://pytorch.org/docs/stable/...
torch.nn.quantized.dynamic
class torch.nn.quantized.dynamic.GRUCell(input_size, hidden_size, bias=True, dtype=torch.qint8) [source] A gated recurrent unit (GRU) cell A dynamic quantized GRUCell module with floating point tensor as inputs and outputs. Weights are quantized to 8 bits. We adopt the same interface as torch.nn.GRUCell, please see h...
torch.nn.quantized.dynamic#torch.nn.quantized.dynamic.GRUCell
class torch.nn.quantized.dynamic.Linear(in_features, out_features, bias_=True, dtype=torch.qint8) [source] A dynamic quantized linear module with floating point tensor as inputs and outputs. We adopt the same interface as torch.nn.Linear, please see https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for document...
torch.nn.quantized.dynamic#torch.nn.quantized.dynamic.Linear
classmethod from_float(mod) [source] Create a dynamic quantized module from a float module or qparams_dict Parameters mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user
torch.nn.quantized.dynamic#torch.nn.quantized.dynamic.Linear.from_float
class torch.nn.quantized.dynamic.LSTM(*args, **kwargs) [source] A dynamic quantized LSTM module with floating point tensor as inputs and outputs. We adopt the same interface as torch.nn.LSTM, please see https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM for documentation. Examples: >>> rnn = nn.LSTM(10, 20, 2) >>>...
torch.nn.quantized.dynamic#torch.nn.quantized.dynamic.LSTM
class torch.nn.quantized.dynamic.LSTMCell(*args, **kwargs) [source] A long short-term memory (LSTM) cell. A dynamic quantized LSTMCell module with floating point tensor as inputs and outputs. Weights are quantized to 8 bits. We adopt the same interface as torch.nn.LSTMCell, please see https://pytorch.org/docs/stable/...
torch.nn.quantized.dynamic#torch.nn.quantized.dynamic.LSTMCell
class torch.nn.quantized.dynamic.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh', dtype=torch.qint8) [source] An Elman RNN cell with tanh or ReLU non-linearity. A dynamic quantized RNNCell module with floating point tensor as inputs and outputs. Weights are quantized to 8 bits. We adopt the same inter...
torch.nn.quantized.dynamic#torch.nn.quantized.dynamic.RNNCell
class torch.nn.quantized.ELU(scale, zero_point, alpha=1.0) [source] This is the quantized equivalent of ELU. Parameters scale – quantization scale of the output tensor zero_point – quantization zero point of the output tensor alpha – the alpha constant
torch.nn.quantized#torch.nn.quantized.ELU
class torch.nn.quantized.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None, dtype=torch.quint8) [source] A quantized Embedding module with quantized packed weights as inputs. We adopt the same interface as torch.nn.Embedding, ...
torch.nn.quantized#torch.nn.quantized.Embedding
classmethod from_float(mod) [source] Create a quantized embedding module from a float module Parameters mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user
torch.nn.quantized#torch.nn.quantized.Embedding.from_float
class torch.nn.quantized.EmbeddingBag(num_embeddings, embedding_dim, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='sum', sparse=False, _weight=None, include_last_offset=False, dtype=torch.quint8) [source] A quantized EmbeddingBag module with quantized packed weights as inputs. We adopt the same interf...
torch.nn.quantized#torch.nn.quantized.EmbeddingBag
classmethod from_float(mod) [source] Create a quantized embedding_bag module from a float module Parameters mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user
torch.nn.quantized#torch.nn.quantized.EmbeddingBag.from_float
class torch.nn.quantized.FloatFunctional [source] State collector class for float operations. The instance of this class can be used instead of the torch. prefix for some operations. See example usage below. Note This class does not provide a forward hook. Instead, you must use one of the underlying functions (e.g. ...
torch.nn.quantized#torch.nn.quantized.FloatFunctional
torch.nn.quantized.functional.adaptive_avg_pool2d(input, output_size) [source] Applies a 2D adaptive average pooling over a quantized input signal composed of several quantized input planes. Note The input quantization parameters propagate to the output. See AdaptiveAvgPool2d for details and output shape. Paramete...
torch.nn.quantized#torch.nn.quantized.functional.adaptive_avg_pool2d
torch.nn.quantized.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) [source] Applies 2D average-pooling operation in kH × kW regions by step size sH × sW steps. The number of output features is equal to the number o...
torch.nn.quantized#torch.nn.quantized.functional.avg_pool2d
torch.nn.quantized.functional.conv1d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8) [source] Applies a 1D convolution over a quantized 1D input composed of several input planes. See Conv1d for details and output shape. Parameters ...
torch.nn.quantized#torch.nn.quantized.functional.conv1d
torch.nn.quantized.functional.conv2d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8) [source] Applies a 2D convolution over a quantized 2D input composed of several input planes. See Conv2d for details and output shape. Parameters ...
torch.nn.quantized#torch.nn.quantized.functional.conv2d
torch.nn.quantized.functional.conv3d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8) [source] Applies a 3D convolution over a quantized 3D input composed of several input planes. See Conv3d for details and output shape. Parameters ...
torch.nn.quantized#torch.nn.quantized.functional.conv3d
torch.nn.quantized.functional.hardswish(input, scale, zero_point) [source] This is the quantized version of hardswish(). Parameters input – quantized input scale – quantization scale of the output tensor zero_point – quantization zero point of the output tensor
torch.nn.quantized#torch.nn.quantized.functional.hardswish
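For intuition, the float formula being quantized here is hardswish(x) = x * min(max(0, x + 3), 6) / 6. A minimal pure-Python sketch of that float reference (the quantized kernel additionally requantizes the result to the given output scale and zero_point, which is omitted here):

```python
def hardswish(x):
    # x * relu6(x + 3) / 6: identity for large x, zero for x <= -3,
    # smooth interpolation in between.
    return x * min(max(0.0, x + 3.0), 6.0) / 6.0
```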
torch.nn.quantized.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None) [source] Down/up samples the input to either the given size or the given scale_factor See torch.nn.functional.interpolate() for implementation details. The input dimensions are interpreted in the form: m...
torch.nn.quantized#torch.nn.quantized.functional.interpolate
torch.nn.quantized.functional.linear(input, weight, bias=None, scale=None, zero_point=None) [source] Applies a linear transformation to the incoming quantized data: y = xA^T + b. See Linear Note The current implementation packs weights on every call, which has a performance penalty. If you want to avoid the ove...
torch.nn.quantized#torch.nn.quantized.functional.linear
torch.nn.quantized.functional.max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False) [source] Applies a 2D max pooling over a quantized input signal composed of several quantized input planes. Note The input quantization parameters are propagated to the output. See...
torch.nn.quantized#torch.nn.quantized.functional.max_pool2d
torch.nn.quantized.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None) [source] Upsamples the input to either the given size or the given scale_factor Warning This function is deprecated in favor of torch.nn.quantized.functional.interpolate(). This is equivalent to nn.quant...
torch.nn.quantized#torch.nn.quantized.functional.upsample
torch.nn.quantized.functional.upsample_bilinear(input, size=None, scale_factor=None) [source] Upsamples the input, using bilinear upsampling. Warning This function is deprecated in favor of torch.nn.quantized.functional.interpolate(). This is equivalent to nn.quantized.functional.interpolate(..., mode='bilinear', ...
torch.nn.quantized#torch.nn.quantized.functional.upsample_bilinear
torch.nn.quantized.functional.upsample_nearest(input, size=None, scale_factor=None) [source] Upsamples the input, using nearest neighbours' pixel values. Warning This function is deprecated in favor of torch.nn.quantized.functional.interpolate(). This is equivalent to nn.quantized.functional.interpolate(..., mode=...
torch.nn.quantized#torch.nn.quantized.functional.upsample_nearest
class torch.nn.quantized.GroupNorm(num_groups, num_channels, weight, bias, scale, zero_point, eps=1e-05, affine=True) [source] This is the quantized version of GroupNorm. Additional args: scale - quantization scale of the output, type: double. zero_point - quantization zero point of the output, type: long.
torch.nn.quantized#torch.nn.quantized.GroupNorm
class torch.nn.quantized.Hardswish(scale, zero_point) [source] This is the quantized version of Hardswish. Parameters scale – quantization scale of the output tensor zero_point – quantization zero point of the output tensor
torch.nn.quantized#torch.nn.quantized.Hardswish
class torch.nn.quantized.InstanceNorm1d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) [source] This is the quantized version of InstanceNorm1d. Additional args: scale - quantization scale of the output, type: double. zero_point - quantization zer...
torch.nn.quantized#torch.nn.quantized.InstanceNorm1d
class torch.nn.quantized.InstanceNorm2d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) [source] This is the quantized version of InstanceNorm2d. Additional args: scale - quantization scale of the output, type: double. zero_point - quantization zer...
torch.nn.quantized#torch.nn.quantized.InstanceNorm2d
class torch.nn.quantized.InstanceNorm3d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) [source] This is the quantized version of InstanceNorm3d. Additional args: scale - quantization scale of the output, type: double. zero_point - quantization zer...
torch.nn.quantized#torch.nn.quantized.InstanceNorm3d
class torch.nn.quantized.LayerNorm(normalized_shape, weight, bias, scale, zero_point, eps=1e-05, elementwise_affine=True) [source] This is the quantized version of LayerNorm. Additional args: scale - quantization scale of the output, type: double. zero_point - quantization zero point of the output, type: long.
torch.nn.quantized#torch.nn.quantized.LayerNorm
class torch.nn.quantized.Linear(in_features, out_features, bias_=True, dtype=torch.qint8) [source] A quantized linear module with quantized tensor as inputs and outputs. We adopt the same interface as torch.nn.Linear, please see https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for documentation. Similar to Lin...
torch.nn.quantized#torch.nn.quantized.Linear
classmethod from_float(mod) [source] Create a quantized module from a float module or qparams_dict Parameters mod (Module) – a float module, either produced by torch.quantization utilities or provided by the user
torch.nn.quantized#torch.nn.quantized.Linear.from_float
class torch.nn.quantized.QFunctional [source] Wrapper class for quantized operations. The instance of this class can be used instead of the torch.ops.quantized prefix. See example usage below. Note This class does not provide a forward hook. Instead, you must use one of the underlying functions (e.g. add). Examples...
torch.nn.quantized#torch.nn.quantized.QFunctional
class torch.nn.quantized.Quantize(scale, zero_point, dtype) [source] Quantizes an incoming tensor Parameters scale – scale of the output Quantized Tensor zero_point – zero_point of output Quantized Tensor dtype – data type of output Quantized Tensor Variables scale, zero_point, dtype Examples:: >>...
torch.nn.quantized#torch.nn.quantized.Quantize
class torch.nn.quantized.ReLU6(inplace=False) [source] Applies the element-wise function: ReLU6(x) = min(max(x_0, x), q(6)), where x_0 is the zero_point, and q(6) is the quantized representation of the number 6. Parameters inplace – can optionally do the operation in-plac...
torch.nn.quantized#torch.nn.quantized.ReLU6
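The formula above can be evaluated entirely in the integer domain, with no dequantize/requantize round trip. A pure-Python sketch, assuming the standard affine mapping q(v) = round(v / scale) + zero_point:

```python
def quantized_relu6(q, scale, zero_point):
    # zero_point is the quantized representation of 0, so clamping the raw
    # integer value between zero_point and q(6) implements ReLU6 directly.
    q6 = round(6.0 / scale) + zero_point  # q(6) under the affine scheme
    return min(max(zero_point, q), q6)
```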
class torch.nn.ReflectionPad1d(padding) [source] Pads the input tensor using the reflection of the input boundary. For N-dimensional padding, use torch.nn.functional.pad(). Parameters padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 2-tuple, uses (padding_left...
torch.generated.torch.nn.reflectionpad1d#torch.nn.ReflectionPad1d
class torch.nn.ReflectionPad2d(padding) [source] Pads the input tensor using the reflection of the input boundary. For N-dimensional padding, use torch.nn.functional.pad(). Parameters padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 4-tuple, uses (padding_left...
torch.generated.torch.nn.reflectionpad2d#torch.nn.ReflectionPad2d
class torch.nn.ReLU(inplace=False) [source] Applies the rectified linear unit function element-wise: ReLU(x) = (x)^+ = max(0, x) Parameters inplace – can optionally do the operation in-place. Default: False Shape: Input: (N, *) where * means any number of additional dimens...
torch.generated.torch.nn.relu#torch.nn.ReLU
class torch.nn.ReLU6(inplace=False) [source] Applies the element-wise function: ReLU6(x) = min(max(0, x), 6) Parameters inplace – can optionally do the operation in-place. Default: False Shape: Input: (N, *) where * means any number of additional dimensions Output: (...
torch.generated.torch.nn.relu6#torch.nn.ReLU6
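The formula is small enough to transcribe directly; a pure-Python scalar sketch:

```python
def relu6(x):
    # min(max(0, x), 6): ReLU clipped at 6.
    return min(max(0.0, x), 6.0)
```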
class torch.nn.ReplicationPad1d(padding) [source] Pads the input tensor using replication of the input boundary. For N-dimensional padding, use torch.nn.functional.pad(). Parameters padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 2-tuple, uses (padding_left...
torch.generated.torch.nn.replicationpad1d#torch.nn.ReplicationPad1d
class torch.nn.ReplicationPad2d(padding) [source] Pads the input tensor using replication of the input boundary. For N-dimensional padding, use torch.nn.functional.pad(). Parameters padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 4-tuple, uses (padding_left...
torch.generated.torch.nn.replicationpad2d#torch.nn.ReplicationPad2d
class torch.nn.ReplicationPad3d(padding) [source] Pads the input tensor using replication of the input boundary. For N-dimensional padding, use torch.nn.functional.pad(). Parameters padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 6-tuple, uses (padding_left...
torch.generated.torch.nn.replicationpad3d#torch.nn.ReplicationPad3d
class torch.nn.RNN(*args, **kwargs) [source] Applies a multi-layer Elman RNN with tanh or ReLU non-linearity to an input sequence. For each element in the input sequence, each layer computes the following function: h_t = tanh(W_ih x_t + b_ih + W_hh h_(t-1) + b_hh)...
torch.generated.torch.nn.rnn#torch.nn.RNN
class torch.nn.RNNBase(mode, input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0.0, bidirectional=False, proj_size=0) [source] flatten_parameters() [source] Resets parameter data pointers so that they can use faster code paths. Right now, this works only if the module is on the GPU and c...
torch.generated.torch.nn.rnnbase#torch.nn.RNNBase
flatten_parameters() [source] Resets parameter data pointers so that they can use faster code paths. Right now, this works only if the module is on the GPU and cuDNN is enabled. Otherwise, it's a no-op.
torch.generated.torch.nn.rnnbase#torch.nn.RNNBase.flatten_parameters
class torch.nn.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh') [source] An Elman RNN cell with tanh or ReLU non-linearity. h' = tanh(W_ih x + b_ih + W_hh h + b_hh) If nonlinearity is 'relu', then ReLU is used in place of tanh. Parameters input_size – The numb...
torch.generated.torch.nn.rnncell#torch.nn.RNNCell
class torch.nn.RReLU(lower=0.125, upper=0.3333333333333333, inplace=False) [source] Applies the randomized leaky rectified linear unit function, element-wise, as described in the paper: Empirical Evaluation of Rectified Activations in Convolutional Network. The function is defined as: RReLU(x) = x if x ≥ 0, ax otherwise ...
torch.generated.torch.nn.rrelu#torch.nn.RReLU
class torch.nn.SELU(inplace=False) [source] Applied element-wise, as: SELU(x) = scale * (max(0, x) + min(0, α * (exp(x) - 1))) with α = 1.6732632423543772848170429916717 and scale = 1.050700987355480493419334...
torch.generated.torch.nn.selu#torch.nn.SELU
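A pure-Python transcription of the SELU formula above, with the two constants rounded to double precision:

```python
import math

ALPHA = 1.6732632423543772   # alpha from the entry above, double precision
SCALE = 1.0507009873554805   # scale from the entry above, double precision

def selu(x):
    # scale * (max(0, x) + min(0, alpha * (exp(x) - 1)))
    return SCALE * (max(0.0, x) + min(0.0, ALPHA * (math.exp(x) - 1.0)))
```

For positive inputs SELU reduces to scale * x; for large negative inputs it saturates at -scale * alpha.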
class torch.nn.Sequential(*args) [source] A sequential container. Modules will be added to it in the order they are passed in the constructor. Alternatively, an ordered dict of modules can also be passed in. To make it easier to understand, here is a small example: # Example of using Sequential model = nn.Sequential(...
torch.generated.torch.nn.sequential#torch.nn.Sequential
class torch.nn.Sigmoid [source] Applies the element-wise function: Sigmoid(x) = σ(x) = 1 / (1 + exp(-x)) Shape: Input: (N, *) where * means any number of additional dimensions Output: (N, *), same shape as the input Examples: >>> m = nn.Sigmoid() >>> ...
torch.generated.torch.nn.sigmoid#torch.nn.Sigmoid
class torch.nn.SiLU(inplace=False) [source] Applies the silu function, element-wise: silu(x) = x * σ(x), where σ(x) is the logistic sigmoid. Note See Gaussian Error Linear Units (GELUs) where the SiLU (Sigmoid Linear Unit) was orig...
torch.generated.torch.nn.silu#torch.nn.SiLU
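A pure-Python sketch of the SiLU formula above, inlining the logistic sigmoid:

```python
import math

def silu(x):
    # x * sigmoid(x) = x / (1 + exp(-x))
    return x / (1.0 + math.exp(-x))
```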
class torch.nn.SmoothL1Loss(size_average=None, reduce=None, reduction='mean', beta=1.0) [source] Creates a criterion that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise. It is less sensitive to outliers than the torch.nn.MSELoss and in some cases prevents exploding gr...
torch.generated.torch.nn.smoothl1loss#torch.nn.SmoothL1Loss
class torch.nn.SoftMarginLoss(size_average=None, reduce=None, reduction='mean') [source] Creates a criterion that optimizes a two-class classification logistic loss between input tensor x and target tensor y (containing 1 or -1). loss(x, y) = Σ_i log(1 + exp(-y[i] * x[i])) / x.nelement()...
torch.generated.torch.nn.softmarginloss#torch.nn.SoftMarginLoss
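With the default mean reduction, the loss above is the per-element average of log(1 + exp(-y[i] * x[i])); a pure-Python sketch for 1-D inputs:

```python
import math

def soft_margin_loss(x, y):
    # y[i] must be 1 or -1; each correctly-signed, confident prediction
    # drives its term toward 0, each wrong one grows roughly linearly.
    return sum(math.log(1.0 + math.exp(-yi * xi))
               for xi, yi in zip(x, y)) / len(x)
```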
class torch.nn.Softmax(dim=None) [source] Applies the Softmax function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1. Softmax is defined as: Softmax(x_i) = exp(x_i) / Σ_j exp(x_j)...
torch.generated.torch.nn.softmax#torch.nn.Softmax
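A pure-Python sketch of the Softmax formula for a 1-D list. The max subtraction is a standard numerical-stability trick, not part of the definition; it changes nothing because softmax is shift-invariant:

```python
import math

def softmax(xs):
    m = max(xs)                              # stabilize: exp never overflows
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]         # entries in [0, 1], sum to 1
```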
class torch.nn.Softmax2d [source] Applies SoftMax over features to each spatial location. When given an image of Channels x Height x Width, it will apply Softmax to each location (Channels, h_i, w_j) Shape: Input: (N, C, H, W) Output: (N, C, H, W) (same shape as input) Ret...
torch.generated.torch.nn.softmax2d#torch.nn.Softmax2d
class torch.nn.Softmin(dim=None) [source] Applies the Softmin function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1. Softmin is defined as: Softmin(x_i) = exp(-x_i) / Σ_j exp(-x_j)...
torch.generated.torch.nn.softmin#torch.nn.Softmin
class torch.nn.Softplus(beta=1, threshold=20) [source] Applies the element-wise function: Softplus(x) = (1/β) * log(1 + exp(β * x)) SoftPlus is a smooth approximation to the ReLU function and can be used to constrain the output of a machine to always be positi...
torch.generated.torch.nn.softplus#torch.nn.Softplus
class torch.nn.Softshrink(lambd=0.5) [source] Applies the soft shrinkage function elementwise: SoftShrinkage(x) = x - λ if x > λ; x + λ if x < -λ; 0 otherwise. Para...
torch.generated.torch.nn.softshrink#torch.nn.Softshrink
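The piecewise definition transcribed in pure Python, with the default lambd:

```python
def softshrink(x, lambd=0.5):
    # Shrink values toward zero by lambd; zero out the band [-lambd, lambd].
    if x > lambd:
        return x - lambd
    if x < -lambd:
        return x + lambd
    return 0.0
```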
class torch.nn.Softsign [source] Applies the element-wise function: SoftSign(x) = x / (1 + |x|) Shape: Input: (N, *) where * means any number of additional dimensions Output: (N, *), same shape as the input Examples: >>> m = nn.Softsign() >>> input = torch.randn(2)...
torch.generated.torch.nn.softsign#torch.nn.Softsign
class torch.nn.SyncBatchNorm(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, process_group=None) [source] Applies Batch Normalization over a N-Dimensional input (a mini-batch of [N-2]D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating D...
torch.generated.torch.nn.syncbatchnorm#torch.nn.SyncBatchNorm
classmethod convert_sync_batchnorm(module, process_group=None) [source] Helper function to convert all BatchNorm*D layers in the model to torch.nn.SyncBatchNorm layers. Parameters module (nn.Module) – module containing one or more attr:BatchNorm*D layers process_group (optional) – process group to scope synchron...
torch.generated.torch.nn.syncbatchnorm#torch.nn.SyncBatchNorm.convert_sync_batchnorm
class torch.nn.Tanh [source] Applies the element-wise function: Tanh(x) = tanh(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x)) Shape: Input: (N, *) where * means any number of additional dimensions Output: (N, *), same shape as the input ...
torch.generated.torch.nn.tanh#torch.nn.Tanh
class torch.nn.Tanhshrink [source] Applies the element-wise function: Tanhshrink(x) = x - tanh(x) Shape: Input: (N, *) where * means any number of additional dimensions Output: (N, *), same shape as the input Examples: >>> m = nn.Tanhshrink() >>> input = torch.ra...
torch.generated.torch.nn.tanhshrink#torch.nn.Tanhshrink
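A one-line pure-Python transcription of the formula above:

```python
import math

def tanhshrink(x):
    # x - tanh(x): near zero for small x, approaches x - sign(x) for large |x|.
    return x - math.tanh(x)
```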
class torch.nn.Threshold(threshold, value, inplace=False) [source] Thresholds each element of the input Tensor. Threshold is defined as: y = x if x > threshold, value otherwise Parameters threshold – The value ...
torch.generated.torch.nn.threshold#torch.nn.Threshold
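The thresholding rule in pure Python, scalar case:

```python
def threshold(x, thr, value):
    # Elements above thr pass through unchanged; everything else is replaced.
    return x if x > thr else value
```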
class torch.nn.Transformer(d_model=512, nhead=8, num_encoder_layers=6, num_decoder_layers=6, dim_feedforward=2048, dropout=0.1, activation='relu', custom_encoder=None, custom_decoder=None) [source] A transformer model. User is able to modify the attributes as needed. The architecture is based on the paper “Attention ...
torch.generated.torch.nn.transformer#torch.nn.Transformer
forward(src, tgt, src_mask=None, tgt_mask=None, memory_mask=None, src_key_padding_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None) [source] Take in and process masked source/target sequences. Parameters src – the sequence to the encoder (required). tgt – the sequence to the decoder (required)....
torch.generated.torch.nn.transformer#torch.nn.Transformer.forward
generate_square_subsequent_mask(sz) [source] Generate a square mask for the sequence. The masked positions are filled with float('-inf'). Unmasked positions are filled with float(0.0).
torch.generated.torch.nn.transformer#torch.nn.Transformer.generate_square_subsequent_mask
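A pure-Python sketch of the mask this method produces, as a list of lists: position (i, j) is 0.0 when j ≤ i (position i may attend to j) and -inf otherwise, giving a lower-triangular pattern of zeros:

```python
def square_subsequent_mask(sz):
    # Additive attention mask: -inf entries are zeroed out by the softmax,
    # so each position can only attend to itself and earlier positions.
    ninf = float("-inf")
    return [[0.0 if j <= i else ninf for j in range(sz)] for i in range(sz)]
```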
class torch.nn.TransformerDecoder(decoder_layer, num_layers, norm=None) [source] TransformerDecoder is a stack of N decoder layers Parameters decoder_layer – an instance of the TransformerDecoderLayer() class (required). num_layers – the number of sub-decoder-layers in the decoder (required). norm – the layer n...
torch.generated.torch.nn.transformerdecoder#torch.nn.TransformerDecoder
forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None) [source] Pass the inputs (and mask) through the decoder layer in turn. Parameters tgt – the sequence to the decoder (required). memory – the sequence from the last layer of the encoder (required). tgt_...
torch.generated.torch.nn.transformerdecoder#torch.nn.TransformerDecoder.forward
class torch.nn.TransformerDecoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation='relu') [source] TransformerDecoderLayer is made up of self-attn, multi-head-attn and feedforward network. This standard decoder layer is based on the paper “Attention Is All You Need”. Ashish Vaswani, Noam Shazeer, N...
torch.generated.torch.nn.transformerdecoderlayer#torch.nn.TransformerDecoderLayer
forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None) [source] Pass the inputs (and mask) through the decoder layer. Parameters tgt – the sequence to the decoder layer (required). memory – the sequence from the last layer of the encoder (required). tgt_ma...
torch.generated.torch.nn.transformerdecoderlayer#torch.nn.TransformerDecoderLayer.forward
class torch.nn.TransformerEncoder(encoder_layer, num_layers, norm=None) [source] TransformerEncoder is a stack of N encoder layers Parameters encoder_layer – an instance of the TransformerEncoderLayer() class (required). num_layers – the number of sub-encoder-layers in the encoder (required). norm – the layer n...
torch.generated.torch.nn.transformerencoder#torch.nn.TransformerEncoder
forward(src, mask=None, src_key_padding_mask=None) [source] Pass the input through the encoder layers in turn. Parameters src – the sequence to the encoder (required). mask – the mask for the src sequence (optional). src_key_padding_mask – the mask for the src keys per batch (optional). Shape: see the docs...
torch.generated.torch.nn.transformerencoder#torch.nn.TransformerEncoder.forward
class torch.nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation='relu') [source] TransformerEncoderLayer is made up of self-attn and feedforward network. This standard encoder layer is based on the paper “Attention Is All You Need”. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob...
torch.generated.torch.nn.transformerencoderlayer#torch.nn.TransformerEncoderLayer
forward(src, src_mask=None, src_key_padding_mask=None) [source] Pass the input through the encoder layer. Parameters src – the sequence to the encoder layer (required). src_mask – the mask for the src sequence (optional). src_key_padding_mask – the mask for the src keys per batch (optional). Shape: see the...
torch.generated.torch.nn.transformerencoderlayer#torch.nn.TransformerEncoderLayer.forward
class torch.nn.TripletMarginLoss(margin=1.0, p=2.0, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean') [source] Creates a criterion that measures the triplet loss given input tensors x1, x2, x3 and a margin with a value greater than 0. This is used for measuring a relative similari...
torch.generated.torch.nn.tripletmarginloss#torch.nn.TripletMarginLoss
class torch.nn.TripletMarginWithDistanceLoss(*, distance_function=None, margin=1.0, swap=False, reduction='mean') [source] Creates a criterion that measures the triplet loss given input tensors a, p, and n (representing anchor, positive, and negative examples, respectively), and a nonnegative, real-valued funct...
torch.generated.torch.nn.tripletmarginwithdistanceloss#torch.nn.TripletMarginWithDistanceLoss
class torch.nn.Unflatten(dim, unflattened_size) [source] Unflattens a tensor dim, expanding it to a desired shape. For use with Sequential. dim specifies the dimension of the input tensor to be unflattened, and it can be either int or str when Tensor or NamedTensor is used, respectively. unflattened_size is the new...
torch.generated.torch.nn.unflatten#torch.nn.Unflatten
add_module(name, module) Adds a child module to the current module. The module can be accessed as an attribute using the given name. Parameters name (string) – name of the child module. The child module can be accessed from this module using the given name module (Module) – child module to be added to the module...
torch.generated.torch.nn.unflatten#torch.nn.Unflatten.add_module
apply(fn) Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init). Parameters fn (Module -> None) – function to be applied to each submodule Returns self Return type Module Example: >>> @torch....
torch.generated.torch.nn.unflatten#torch.nn.Unflatten.apply
bfloat16() Casts all floating point parameters and buffers to bfloat16 datatype. Returns self Return type Module
torch.generated.torch.nn.unflatten#torch.nn.Unflatten.bfloat16
buffers(recurse=True) Returns an iterator over module buffers. Parameters recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields torch.Tensor – module buffer Example: >>> for buf in model.buffers(): >>> p...
torch.generated.torch.nn.unflatten#torch.nn.Unflatten.buffers
children() Returns an iterator over immediate children modules. Yields Module – a child module
torch.generated.torch.nn.unflatten#torch.nn.Unflatten.children
cpu() Moves all model parameters and buffers to the CPU. Returns self Return type Module
torch.generated.torch.nn.unflatten#torch.nn.Unflatten.cpu
cuda(device=None) Moves all model parameters and buffers to the GPU. This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on the GPU while being optimized. Parameters device (int, optional) – if specified, all parameters will b...
torch.generated.torch.nn.unflatten#torch.nn.Unflatten.cuda
double() Casts all floating point parameters and buffers to double datatype. Returns self Return type Module
torch.generated.torch.nn.unflatten#torch.nn.Unflatten.double
eval() Sets the module in evaluation mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. This is equivalent to self.train(False). Returns self Return type Modul...
torch.generated.torch.nn.unflatten#torch.nn.Unflatten.eval
float() Casts all floating point parameters and buffers to float datatype. Returns self Return type Module
torch.generated.torch.nn.unflatten#torch.nn.Unflatten.float
half() Casts all floating point parameters and buffers to half datatype. Returns self Return type Module
torch.generated.torch.nn.unflatten#torch.nn.Unflatten.half