doc_content | doc_id |
|---|---|
torch.nn.functional.max_unpool3d(input, indices, kernel_size, stride=None, padding=0, output_size=None) [source]
Computes a partial inverse of MaxPool3d. See MaxUnpool3d for details. | torch.nn.functional#torch.nn.functional.max_unpool3d |
torch.nn.functional.mse_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor [source]
Measures the element-wise mean squared error. See MSELoss for details. | torch.nn.functional#torch.nn.functional.mse_loss |
torch.nn.functional.multilabel_margin_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor [source]
See MultiLabelMarginLoss for details. | torch.nn.functional#torch.nn.functional.multilabel_margin_loss |
torch.nn.functional.multilabel_soft_margin_loss(input, target, weight=None, size_average=None) → Tensor [source]
See MultiLabelSoftMarginLoss for details. | torch.nn.functional#torch.nn.functional.multilabel_soft_margin_loss |
torch.nn.functional.multi_margin_loss(input, target, p=1, margin=1.0, weight=None, size_average=None, reduce=None, reduction='mean') [source]
See MultiMarginLoss for details. | torch.nn.functional#torch.nn.functional.multi_margin_loss |
torch.nn.functional.nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean') [source]
The negative log likelihood loss. See NLLLoss for details. Parameters
input – (N, C) where C = number of classes or (N, C, H, W) in case of 2D Loss, or (N, C, d_1, d_2, ... | torch.nn.functional#torch.nn.functional.nll_loss |
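A minimal usage sketch (shapes and values are illustrative, not from the entry above); nll_loss expects log-probabilities, typically produced by log_softmax:
>>> import torch
>>> import torch.nn.functional as F
>>> logits = torch.randn(3, 5)                    # batch of 3 samples, 5 classes
>>> log_probs = F.log_softmax(logits, dim=1)      # nll_loss expects log-probabilities
>>> target = torch.tensor([1, 0, 4])              # class index per sample
>>> loss = F.nll_loss(log_probs, target)          # scalar, since reduction='mean'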
torch.nn.functional.normalize(input, p=2, dim=1, eps=1e-12, out=None) [source]
Performs L_p normalization of inputs over specified dimension. For a tensor input of sizes (n_0, ..., n_{dim}, ..., n_k), each n_{dim}-element vector v along dimension dim is transformed as v = \frac{v}{\max(\lVert v \rVert_p, \epsilon)}.... | torch.nn.functional#torch.nn.functional.normalize |
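A quick sketch of the L_p normalization above; the values are chosen so the result is easy to verify by hand (each row has L2 norm 5):
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.tensor([[3.0, 4.0], [0.0, 5.0]])
>>> F.normalize(x, p=2, dim=1)      # each row divided by its L2 norm
tensor([[0.6000, 0.8000],
        [0.0000, 1.0000]])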
torch.nn.functional.one_hot(tensor, num_classes=-1) → LongTensor
Takes LongTensor with index values of shape (*) and returns a tensor of shape (*, num_classes) that has zeros everywhere except where the index of the last dimension matches the corresponding value of the input tensor, in which case it will be 1. See also ... | torch.nn.functional#torch.nn.functional.one_hot |
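A small sketch of the behaviour described above:
>>> import torch
>>> import torch.nn.functional as F
>>> F.one_hot(torch.tensor([0, 2, 1]), num_classes=3)
tensor([[1, 0, 0],
        [0, 0, 1],
        [0, 1, 0]])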
torch.nn.functional.pad(input, pad, mode='constant', value=0)
Pads tensor. Padding size:
The padding size by which to pad some dimensions of input is described starting from the last dimension and moving forward. \left\lfloor\frac{\text{len(pad)}}{2}\right\rfloor dimensions of input will be padded. For... | torch.nn.functional#torch.nn.functional.pad |
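The last-dimension-first ordering is the part that usually trips people up; a small sketch with illustrative shapes:
>>> import torch
>>> import torch.nn.functional as F
>>> t = torch.ones(1, 2, 3)
>>> F.pad(t, (1, 1)).shape            # 2 values pad only the last dim: 3 -> 5
torch.Size([1, 2, 5])
>>> F.pad(t, (1, 1, 2, 0)).shape      # 4 values also pad the second-to-last dim: 2 -> 4
torch.Size([1, 4, 5])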
torch.nn.functional.pairwise_distance(x1, x2, p=2.0, eps=1e-06, keepdim=False) [source]
See torch.nn.PairwiseDistance for details. | torch.nn.functional#torch.nn.functional.pairwise_distance |
torch.nn.functional.pdist(input, p=2) → Tensor
Computes the p-norm distance between every pair of row vectors in the input. This is identical to the upper triangular portion, excluding the diagonal, of torch.norm(input[:, None] - input, dim=2, p=p). This function will be faster if the rows are contiguous. If input ha... | torch.nn.functional#torch.nn.functional.pdist |
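A sketch verifying the stated equivalence with the upper triangle of the full pairwise matrix; the boolean-mask construction is just one of several ways to extract that triangle:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(4, 3)
>>> d = F.pdist(x, p=2)                                   # shape (4*3/2,) == (6,)
>>> full = torch.norm(x[:, None] - x, dim=2, p=2)         # (4, 4) distance matrix
>>> mask = torch.triu(torch.ones(4, 4, dtype=torch.bool), diagonal=1)
>>> torch.allclose(d, full[mask])
True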
torch.nn.functional.pixel_shuffle(input, upscale_factor) → Tensor
Rearranges elements in a tensor of shape (*, C \times r^2, H, W) to a tensor of shape (*, C, H \times r, W \times r), where r is the upscale_factor. See PixelShuffle for details. Parameters
input (Tensor) – the input tens... | torch.nn.functional#torch.nn.functional.pixel_shuffle |
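A shape-only sketch of the (*, C×r², H, W) → (*, C, H×r, W×r) rearrangement, with r = 3 chosen arbitrarily:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(1, 9, 4, 4)            # C * r^2 = 1 * 3^2
>>> F.pixel_shuffle(x, 3).shape
torch.Size([1, 1, 12, 12])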
torch.nn.functional.pixel_unshuffle(input, downscale_factor) → Tensor
Reverses the PixelShuffle operation by rearranging elements in a tensor of shape (*, C, H \times r, W \times r) to a tensor of shape (*, C \times r^2, H, W), where r is the downscale_factor. See PixelUnshuffle for details... | torch.nn.functional#torch.nn.functional.pixel_unshuffle |
torch.nn.functional.poisson_nll_loss(input, target, log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean') [source]
Poisson negative log likelihood loss. See PoissonNLLLoss for details. Parameters
input – expectation of underlying Poisson distribution.
target – random sample tar... | torch.nn.functional#torch.nn.functional.poisson_nll_loss |
torch.nn.functional.prelu(input, weight) → Tensor [source]
Applies element-wise the function \text{PReLU}(x) = \max(0, x) + \text{weight} * \min(0, x) where weight is a learnable parameter. See PReLU for more details. | torch.nn.functional#torch.nn.functional.prelu |
torch.nn.functional.relu(input, inplace=False) → Tensor [source]
Applies the rectified linear unit function element-wise. See ReLU for more details. | torch.nn.functional#torch.nn.functional.relu |
torch.nn.functional.relu6(input, inplace=False) → Tensor [source]
Applies the element-wise function \text{ReLU6}(x) = \min(\max(0, x), 6). See ReLU6 for more details. | torch.nn.functional#torch.nn.functional.relu6 |
torch.nn.functional.relu_(input) → Tensor
In-place version of relu(). | torch.nn.functional#torch.nn.functional.relu_ |
torch.nn.functional.rrelu(input, lower=1./8, upper=1./3, training=False, inplace=False) → Tensor [source]
Randomized leaky ReLU. See RReLU for more details. | torch.nn.functional#torch.nn.functional.rrelu |
torch.nn.functional.rrelu_(input, lower=1./8, upper=1./3, training=False) → Tensor
In-place version of rrelu(). | torch.nn.functional#torch.nn.functional.rrelu_ |
torch.nn.functional.selu(input, inplace=False) → Tensor [source]
Applies element-wise, \text{SELU}(x) = scale * (\max(0, x) + \min(0, \alpha * (\exp(x) - 1))), with \alpha = 1.6732632423543772848170429916717 and scale = 1.05070098735548049... | torch.nn.functional#torch.nn.functional.selu |
torch.nn.functional.sigmoid(input) → Tensor [source]
Applies the element-wise function \text{Sigmoid}(x) = \frac{1}{1 + \exp(-x)}. See Sigmoid for more details. | torch.nn.functional#torch.nn.functional.sigmoid |
torch.nn.functional.silu(input, inplace=False) [source]
Applies the silu function, element-wise. \text{silu}(x) = x * \sigma(x), \text{where } \sigma(x) \text{ is the logistic sigmoid.}
Note See Gaussian Error Linear Units (GELUs) where the SiLU (Sigmoid Linear Un... | torch.nn.functional#torch.nn.functional.silu |
torch.nn.functional.smooth_l1_loss(input, target, size_average=None, reduce=None, reduction='mean', beta=1.0) [source]
Function that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise. See SmoothL1Loss for details. | torch.nn.functional#torch.nn.functional.smooth_l1_loss |
torch.nn.functional.softmax(input, dim=None, _stacklevel=3, dtype=None) [source]
Applies a softmax function. Softmax is defined as: \text{Softmax}(x_i) = \frac{\exp(x_i)}{\sum_j \exp(x_j)} It is applied to all slices along dim, and will re-scale them so that the elements lie in the ra... | torch.nn.functional#torch.nn.functional.softmax |
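A sketch of the dim semantics: softmax rescales each slice along dim into a probability distribution (shapes illustrative):
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(2, 3)
>>> probs = F.softmax(x, dim=1)     # each row now lies in [0, 1] and sums to 1
>>> torch.allclose(probs.sum(dim=1), torch.ones(2))
True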
torch.nn.functional.softmin(input, dim=None, _stacklevel=3, dtype=None) [source]
Applies a softmin function. Note that \text{Softmin}(x) = \text{Softmax}(-x). See softmax definition for mathematical formula. See Softmin for more details. Parameters
input (Tensor) – input
dim (int) – A dime... | torch.nn.functional#torch.nn.functional.softmin |
torch.nn.functional.softplus(input, beta=1, threshold=20) → Tensor
Applies element-wise, the function \text{Softplus}(x) = \frac{1}{\beta} * \log(1 + \exp(\beta * x)). For numerical stability the implementation reverts to the linear function when \text{input} \times \beta > \text{threshold}... | torch.nn.functional#torch.nn.functional.softplus |
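A sketch of the numerical-stability clause: well beyond the threshold (default 20), softplus returns its input essentially unchanged:
>>> import torch
>>> import torch.nn.functional as F
>>> F.softplus(torch.tensor([0.0, 100.0]))   # softplus(0) = log(2); softplus(100) ~ 100 (linear regime)
tensor([  0.6931, 100.0000])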
torch.nn.functional.softshrink(input, lambd=0.5) → Tensor
Applies the soft shrinkage function element-wise. See Softshrink for more details. | torch.nn.functional#torch.nn.functional.softshrink |
torch.nn.functional.softsign(input) → Tensor [source]
Applies element-wise, the function \text{SoftSign}(x) = \frac{x}{1 + |x|}. See Softsign for more details. | torch.nn.functional#torch.nn.functional.softsign |
torch.nn.functional.soft_margin_loss(input, target, size_average=None, reduce=None, reduction='mean') → Tensor [source]
See SoftMarginLoss for details. | torch.nn.functional#torch.nn.functional.soft_margin_loss |
torch.nn.functional.tanh(input) → Tensor [source]
Applies element-wise, \text{Tanh}(x) = \tanh(x) = \frac{\exp(x) - \exp(-x)}{\exp(x) + \exp(-x)}. See Tanh for more details. | torch.nn.functional#torch.nn.functional.tanh |
torch.nn.functional.tanhshrink(input) → Tensor [source]
Applies element-wise, \text{Tanhshrink}(x) = x - \text{Tanh}(x). See Tanhshrink for more details. | torch.nn.functional#torch.nn.functional.tanhshrink |
torch.nn.functional.threshold(input, threshold, value, inplace=False)
Thresholds each element of the input Tensor. See Threshold for more details. | torch.nn.functional#torch.nn.functional.threshold |
torch.nn.functional.threshold_(input, threshold, value) → Tensor
In-place version of threshold(). | torch.nn.functional#torch.nn.functional.threshold_ |
torch.nn.functional.triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean') [source]
See TripletMarginLoss for details. | torch.nn.functional#torch.nn.functional.triplet_margin_loss |
torch.nn.functional.triplet_margin_with_distance_loss(anchor, positive, negative, *, distance_function=None, margin=1.0, swap=False, reduction='mean') [source]
See TripletMarginWithDistanceLoss for details. | torch.nn.functional#torch.nn.functional.triplet_margin_with_distance_loss |
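A sketch with a user-supplied distance_function; the cosine-distance lambda here is a hypothetical choice, not the default:
>>> import torch
>>> import torch.nn.functional as F
>>> anchor, positive, negative = torch.randn(3, 10, 128)   # unpack into three (10, 128) tensors
>>> cos_dist = lambda a, b: 1.0 - F.cosine_similarity(a, b)
>>> loss = F.triplet_margin_with_distance_loss(anchor, positive, negative,
...                                            distance_function=cos_dist, margin=0.5)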
torch.nn.functional.unfold(input, kernel_size, dilation=1, padding=0, stride=1) [source]
Extracts sliding local blocks from a batched input tensor. Warning Currently, only 4-D input tensors (batched image-like tensors) are supported. Warning More than one element of the unfolded tensor may refer to a single memory... | torch.nn.functional#torch.nn.functional.unfold |
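A shape sketch for the 4-D case the warning above restricts to (numbers illustrative):
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(1, 2, 4, 4)             # (N, C, H, W)
>>> F.unfold(x, kernel_size=3).shape        # (N, C*3*3, L) with L = 2*2 sliding positions
torch.Size([1, 18, 4])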
torch.nn.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None) [source]
Upsamples the input to either the given size or the given scale_factor Warning This function is deprecated in favor of torch.nn.functional.interpolate(). This is equivalent with nn.functional.interpolate(..... | torch.nn.functional#torch.nn.functional.upsample |
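Since upsample is deprecated, a sketch of the suggested replacement call:
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(1, 3, 8, 8)
>>> F.interpolate(x, scale_factor=2, mode='nearest').shape
torch.Size([1, 3, 16, 16])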
torch.nn.functional.upsample_bilinear(input, size=None, scale_factor=None) [source]
Upsamples the input, using bilinear upsampling. Warning This function is deprecated in favor of torch.nn.functional.interpolate(). This is equivalent with nn.functional.interpolate(..., mode='bilinear', align_corners=True). Expected... | torch.nn.functional#torch.nn.functional.upsample_bilinear |
torch.nn.functional.upsample_nearest(input, size=None, scale_factor=None) [source]
Upsamples the input, using nearest neighbours’ pixel values. Warning This function is deprecated in favor of torch.nn.functional.interpolate(). This is equivalent with nn.functional.interpolate(..., mode='nearest'). Currently spatial... | torch.nn.functional#torch.nn.functional.upsample_nearest |
class torch.nn.GaussianNLLLoss(*, full=False, eps=1e-06, reduction='mean') [source]
Gaussian negative log likelihood loss. The targets are treated as samples from Gaussian distributions with expectations and variances predicted by the neural network. For a D-dimensional target tensor modelled as having heteroscedasti... | torch.generated.torch.nn.gaussiannllloss#torch.nn.GaussianNLLLoss |
class torch.nn.GELU [source]
Applies the Gaussian Error Linear Units function: \text{GELU}(x) = x * \Phi(x)
where \Phi(x) is the Cumulative Distribution Function for Gaussian Distribution. Shape:
Input: (N, *) where * means, any number of additional dimensions Output: (N, *), same... | torch.generated.torch.nn.gelu#torch.nn.GELU |
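A sketch checking the definition above against an explicit Φ(x) computed from the Gaussian CDF via erf (this matches the default, erf-based GELU):
>>> import math, torch
>>> from torch import nn
>>> x = torch.randn(5)
>>> manual = x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))   # x * Phi(x)
>>> torch.allclose(nn.GELU()(x), manual, atol=1e-6)
True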
class torch.nn.GroupNorm(num_groups, num_channels, eps=1e-05, affine=True) [source]
Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization. y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta
The input channels are sepa... | torch.generated.torch.nn.groupnorm#torch.nn.GroupNorm |
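A minimal sketch (channel and group counts are illustrative); num_channels must be divisible by num_groups:
>>> import torch
>>> from torch import nn
>>> m = nn.GroupNorm(num_groups=3, num_channels=6)   # 3 groups of 2 channels each
>>> out = m(torch.randn(2, 6, 8, 8))                 # normalization is per sample, per group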
class torch.nn.GRU(*args, **kwargs) [source]
Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence. For each element in the input sequence, each layer computes the following function: r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}), \quad z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}), \quad n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)} + b_{hn})), \quad h_t = (1 - z_t) * n_t + z_t * h_{(t-1)}... | torch.generated.torch.nn.gru#torch.nn.GRU |
class torch.nn.GRUCell(input_size, hidden_size, bias=True) [source]
A gated recurrent unit (GRU) cell: \begin{array}{ll} r = \sigma(W_{ir} x + b_{ir} + W_{hr} h + b_{hr}) \\ z = \sigma(W_{iz} x + b_{iz} + W_{hz} h + b_{hz}) \\ n = \tanh(W_{in} x + b_{in} + r * (W_{hn} h + b_{hn})) \\ h' = (1 - z) * n + z * h \end{array}... | torch.generated.torch.nn.grucell#torch.nn.GRUCell |
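A sketch of the usual pattern for a cell: step through a sequence manually, carrying the hidden state (sizes illustrative):
>>> import torch
>>> from torch import nn
>>> cell = nn.GRUCell(input_size=10, hidden_size=20)
>>> x = torch.randn(5, 3, 10)            # (seq_len, batch, input_size)
>>> h = torch.zeros(3, 20)               # initial hidden state
>>> for t in range(x.size(0)):
...     h = cell(x[t], h)                # one time step per call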
class torch.nn.Hardshrink(lambd=0.5) [source]
Applies the hard shrinkage function element-wise: \text{HardShrink}(x) = \begin{cases} x, & \text{ if } x > \lambda \\ x, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases}
Parameters
lambd – the \lambda ... | torch.generated.torch.nn.hardshrink#torch.nn.Hardshrink |
class torch.nn.Hardsigmoid(inplace=False) [source]
Applies the element-wise function: \text{Hardsigmoid}(x) = \begin{cases} 0 & \text{if~} x \le -3, \\ 1 & \text{if~} x \ge +3, \\ x/6 + 1/2 & \text{otherwise} \end{cases}
Parameters
inplace – can optionally do... | torch.generated.torch.nn.hardsigmoid#torch.nn.Hardsigmoid |
class torch.nn.Hardswish(inplace=False) [source]
Applies the hardswish function, element-wise, as described in the paper: Searching for MobileNetV3. \text{Hardswish}(x) = \begin{cases} 0 & \text{if~} x \le -3, \\ x & \text{if~} x \ge +3, \\ x \cdot (x + 3)/6 & \text{otherwise} \end{cases}... | torch.generated.torch.nn.hardswish#torch.nn.Hardswish |
class torch.nn.Hardtanh(min_val=-1.0, max_val=1.0, inplace=False, min_value=None, max_value=None) [source]
Applies the HardTanh function element-wise. HardTanh is defined as: \text{HardTanh}(x) = \begin{cases} 1 & \text{ if } x > 1 \\ -1 & \text{ if } x < -1 \\ x & \text{ otherwise } \end{cases}... | torch.generated.torch.nn.hardtanh#torch.nn.Hardtanh |
class torch.nn.HingeEmbeddingLoss(margin=1.0, size_average=None, reduce=None, reduction='mean') [source]
Measures the loss given an input tensor x and a labels tensor y (containing 1 or -1). This is usually used for measuring whether two inputs are similar or dissimilar, e.g. using the L1 pairwise distance as x ... | torch.generated.torch.nn.hingeembeddingloss#torch.nn.HingeEmbeddingLoss |
class torch.nn.Identity(*args, **kwargs) [source]
A placeholder identity operator that is argument-insensitive. Parameters
args – any argument (unused)
kwargs – any keyword argument (unused) Examples: >>> m = nn.Identity(54, unused_argument1=0.1, unused_argument2=False)
>>> input = torch.randn(128, 20)
>>> ou... | torch.generated.torch.nn.identity#torch.nn.Identity |
torch.nn.init
torch.nn.init.calculate_gain(nonlinearity, param=None) [source]
Return the recommended gain value for the given nonlinearity function. The values are as follows:
nonlinearity gain
Linear / Identity 1
Conv{1,2,3}D 1
Sigmoid 1
Tanh \frac{5}{3}
ReLU \sqrt{2}
Leaky Relu \sqrt{\frac{2}{1 + \text{negative\_slope}^2}}... | torch.nn.init |
torch.nn.init.calculate_gain(nonlinearity, param=None) [source]
Return the recommended gain value for the given nonlinearity function. The values are as follows:
nonlinearity gain
Linear / Identity 1
Conv{1,2,3}D 1
Sigmoid 1
Tanh \frac{5}{3}
ReLU \sqrt{2}
Leaky Relu \sqrt{\frac{2}{1 + \text{negative\_slope}^2}}... | torch.nn.init#torch.nn.init.calculate_gain |
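A sketch of how the gain is typically consumed, feeding it into one of the initializers on this page:
>>> import torch
>>> from torch import nn
>>> gain = nn.init.calculate_gain('relu')     # sqrt(2)
>>> w = torch.empty(3, 5)
>>> nn.init.xavier_uniform_(w, gain=gain)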
torch.nn.init.constant_(tensor, val) [source]
Fills the input Tensor with the value val. Parameters
tensor – an n-dimensional torch.Tensor
val – the value to fill the tensor with Examples >>> w = torch.empty(3, 5)
>>> nn.init.constant_(w, 0.3) | torch.nn.init#torch.nn.init.constant_ |
torch.nn.init.dirac_(tensor, groups=1) [source]
Fills the {3, 4, 5}-dimensional input Tensor with the Dirac delta function. Preserves the identity of the inputs in Convolutional layers, where as many input channels are preserved as possible. In case of groups>1, each group of channels preserves identity. Parameters
... | torch.nn.init#torch.nn.init.dirac_ |
torch.nn.init.eye_(tensor) [source]
Fills the 2-dimensional input Tensor with the identity matrix. Preserves the identity of the inputs in Linear layers, where as many inputs are preserved as possible. Parameters
tensor – a 2-dimensional torch.Tensor Examples >>> w = torch.empty(3, 5)
>>> nn.init.eye_(w) | torch.nn.init#torch.nn.init.eye_ |
torch.nn.init.kaiming_normal_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu') [source]
Fills the input Tensor with values according to the method described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a normal distribution. The res... | torch.nn.init#torch.nn.init.kaiming_normal_ |
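A minimal sketch, mirroring the style of the Examples shown for the other initializers on this page:
>>> import torch
>>> from torch import nn
>>> w = torch.empty(3, 5)
>>> nn.init.kaiming_normal_(w, mode='fan_out', nonlinearity='relu')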
torch.nn.init.kaiming_uniform_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu') [source]
Fills the input Tensor with values according to the method described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a uniform distribution. The r... | torch.nn.init#torch.nn.init.kaiming_uniform_ |
torch.nn.init.normal_(tensor, mean=0.0, std=1.0) [source]
Fills the input Tensor with values drawn from the normal distribution \mathcal{N}(\text{mean}, \text{std}^2). Parameters
tensor – an n-dimensional torch.Tensor
mean – the mean of the normal distribution
std – the standard deviation of the n... | torch.nn.init#torch.nn.init.normal_ |
torch.nn.init.ones_(tensor) [source]
Fills the input Tensor with the scalar value 1. Parameters
tensor – an n-dimensional torch.Tensor Examples >>> w = torch.empty(3, 5)
>>> nn.init.ones_(w) | torch.nn.init#torch.nn.init.ones_ |
torch.nn.init.orthogonal_(tensor, gain=1) [source]
Fills the input Tensor with a (semi) orthogonal matrix, as described in Exact solutions to the nonlinear dynamics of learning in deep linear neural networks - Saxe, A. et al. (2013). The input tensor must have at least 2 dimensions, and for tensors with more than 2 d... | torch.nn.init#torch.nn.init.orthogonal_ |
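A sketch checking the (semi-)orthogonality property: for a tall matrix the columns come out orthonormal, so w^T w is the identity:
>>> import torch
>>> from torch import nn
>>> w = nn.init.orthogonal_(torch.empty(5, 3))
>>> torch.allclose(w.t() @ w, torch.eye(3), atol=1e-5)
True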
torch.nn.init.sparse_(tensor, sparsity, std=0.01) [source]
Fills the 2D input Tensor as a sparse matrix, where the non-zero elements will be drawn from the normal distribution \mathcal{N}(0, 0.01), as described in Deep learning via Hessian-free optimization - Martens, J. (2010). Parameters
tensor – an n... | torch.nn.init#torch.nn.init.sparse_ |
torch.nn.init.uniform_(tensor, a=0.0, b=1.0) [source]
Fills the input Tensor with values drawn from the uniform distribution \mathcal{U}(a, b). Parameters
tensor – an n-dimensional torch.Tensor
a – the lower bound of the uniform distribution
b – the upper bound of the uniform distribution Examples >>... | torch.nn.init#torch.nn.init.uniform_ |
torch.nn.init.xavier_normal_(tensor, gain=1.0) [source]
Fills the input Tensor with values according to the method described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a normal distribution. The resulting tensor will have values sampled from N(... | torch.nn.init#torch.nn.init.xavier_normal_ |
torch.nn.init.xavier_uniform_(tensor, gain=1.0) [source]
Fills the input Tensor with values according to the method described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a uniform distribution. The resulting tensor will have values sampled from ... | torch.nn.init#torch.nn.init.xavier_uniform_ |
torch.nn.init.zeros_(tensor) [source]
Fills the input Tensor with the scalar value 0. Parameters
tensor – an n-dimensional torch.Tensor Examples >>> w = torch.empty(3, 5)
>>> nn.init.zeros_(w) | torch.nn.init#torch.nn.init.zeros_ |
class torch.nn.InstanceNorm1d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) [source]
Applies Instance Normalization over a 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast... | torch.generated.torch.nn.instancenorm1d#torch.nn.InstanceNorm1d |
class torch.nn.InstanceNorm2d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) [source]
Applies Instance Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylizat... | torch.generated.torch.nn.instancenorm2d#torch.nn.InstanceNorm2d |
class torch.nn.InstanceNorm3d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) [source]
Applies Instance Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylizat... | torch.generated.torch.nn.instancenorm3d#torch.nn.InstanceNorm3d |
class torch.nn.intrinsic.ConvBn1d(conv, bn) [source]
This is a sequential container which calls the Conv 1d and Batch Norm 1d modules. During quantization this will be replaced with the corresponding fused module. | torch.nn.intrinsic#torch.nn.intrinsic.ConvBn1d |
class torch.nn.intrinsic.ConvBn2d(conv, bn) [source]
This is a sequential container which calls the Conv 2d and Batch Norm 2d modules. During quantization this will be replaced with the corresponding fused module. | torch.nn.intrinsic#torch.nn.intrinsic.ConvBn2d |
class torch.nn.intrinsic.ConvBnReLU1d(conv, bn, relu) [source]
This is a sequential container which calls the Conv 1d, Batch Norm 1d, and ReLU modules. During quantization this will be replaced with the corresponding fused module. | torch.nn.intrinsic#torch.nn.intrinsic.ConvBnReLU1d |
class torch.nn.intrinsic.ConvBnReLU2d(conv, bn, relu) [source]
This is a sequential container which calls the Conv 2d, Batch Norm 2d, and ReLU modules. During quantization this will be replaced with the corresponding fused module. | torch.nn.intrinsic#torch.nn.intrinsic.ConvBnReLU2d |
class torch.nn.intrinsic.ConvReLU1d(conv, relu) [source]
This is a sequential container which calls the Conv1d and ReLU modules. During quantization this will be replaced with the corresponding fused module. | torch.nn.intrinsic#torch.nn.intrinsic.ConvReLU1d |
class torch.nn.intrinsic.ConvReLU2d(conv, relu) [source]
This is a sequential container which calls the Conv2d and ReLU modules. During quantization this will be replaced with the corresponding fused module. | torch.nn.intrinsic#torch.nn.intrinsic.ConvReLU2d |
class torch.nn.intrinsic.qat.ConvBn2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None) [source]
A ConvBn2d module is a module fused from Conv2d and BatchNorm2d, attached with FakeQuantize modules... | torch.nn.intrinsic.qat#torch.nn.intrinsic.qat.ConvBn2d |
class torch.nn.intrinsic.qat.ConvBnReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None) [source]
A ConvBnReLU2d module is a module fused from Conv2d, BatchNorm2d and ReLU, attached with FakeQu... | torch.nn.intrinsic.qat#torch.nn.intrinsic.qat.ConvBnReLU2d |
class torch.nn.intrinsic.qat.ConvReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None) [source]
A ConvReLU2d module is a fused module of Conv2d and ReLU, attached with FakeQuantize modules for weight for quantization aware training. We... | torch.nn.intrinsic.qat#torch.nn.intrinsic.qat.ConvReLU2d |
class torch.nn.intrinsic.qat.LinearReLU(in_features, out_features, bias=True, qconfig=None) [source]
A LinearReLU module fused from Linear and ReLU modules, attached with FakeQuantize modules for weight, used in quantization aware training. We adopt the same interface as torch.nn.Linear. Similar to torch.nn.intrinsic... | torch.nn.intrinsic.qat#torch.nn.intrinsic.qat.LinearReLU |
class torch.nn.intrinsic.quantized.ConvReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
A ConvReLU2d module is a fused module of Conv2d and ReLU We adopt the same interface as torch.nn.quantized.Conv2d. Variables
as torch.nn.quantize... | torch.nn.intrinsic.quantized#torch.nn.intrinsic.quantized.ConvReLU2d |
class torch.nn.intrinsic.quantized.ConvReLU3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
A ConvReLU3d module is a fused module of Conv3d and ReLU We adopt the same interface as torch.nn.quantized.Conv3d. Attributes: Same as torch.nn.qua... | torch.nn.intrinsic.quantized#torch.nn.intrinsic.quantized.ConvReLU3d |
class torch.nn.intrinsic.quantized.LinearReLU(in_features, out_features, bias=True, dtype=torch.qint8) [source]
A LinearReLU module fused from Linear and ReLU modules We adopt the same interface as torch.nn.quantized.Linear. Variables
as torch.nn.quantized.Linear (Same) – Examples: >>> m = nn.intrinsic.LinearReL... | torch.nn.intrinsic.quantized#torch.nn.intrinsic.quantized.LinearReLU |
class torch.nn.KLDivLoss(size_average=None, reduce=None, reduction='mean', log_target=False) [source]
The Kullback-Leibler divergence loss measure. Kullback-Leibler divergence is a useful distance measure for continuous distributions and is often useful when performing direct regression over the space of (discretely s... | torch.generated.torch.nn.kldivloss#torch.nn.KLDivLoss |
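A sketch of the expected input convention: input must be given in log-space, and target as plain probabilities unless log_target=True; reduction='batchmean' is a commonly used choice:
>>> import torch
>>> import torch.nn.functional as F
>>> from torch import nn
>>> kl = nn.KLDivLoss(reduction='batchmean', log_target=False)
>>> input = F.log_softmax(torch.randn(3, 5), dim=1)   # log-probabilities
>>> target = F.softmax(torch.randn(3, 5), dim=1)      # plain probabilities (log_target=False)
>>> loss = kl(input, target)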
class torch.nn.L1Loss(size_average=None, reduce=None, reduction='mean') [source]
Creates a criterion that measures the mean absolute error (MAE) between each element in the input x and target y. The unreduced (i.e. with reduction set to 'none') loss can be described as: \ell(x, y) = L = \{l_1, \dots, l_N\}^\top, \quad l_n = |x_n - y_n| ... | torch.generated.torch.nn.l1loss#torch.nn.L1Loss |
class torch.nn.LayerNorm(normalized_shape, eps=1e-05, elementwise_affine=True) [source]
Applies Layer Normalization over a mini-batch of inputs as described in the paper Layer Normalization. y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta
The mean and standard-d... | torch.generated.torch.nn.layernorm#torch.nn.LayerNorm |
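A sketch of normalized_shape: it names the trailing dimensions to normalize over (sizes illustrative):
>>> import torch
>>> from torch import nn
>>> m = nn.LayerNorm(normalized_shape=128)     # normalize over the last dimension
>>> out = m(torch.randn(4, 16, 128))           # (batch, seq, features); stats per (batch, seq) position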
class torch.nn.LazyConv1d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
A torch.nn.Conv1d module with lazy initialization of the in_channels argument of the Conv1d that is inferred from the input.size(1). Parameters
out_channels (int) – Number of c... | torch.generated.torch.nn.lazyconv1d#torch.nn.LazyConv1d |
cls_to_become
alias of Conv1d | torch.generated.torch.nn.lazyconv1d#torch.nn.LazyConv1d.cls_to_become |
class torch.nn.LazyConv2d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
A torch.nn.Conv2d module with lazy initialization of the in_channels argument of the Conv2d that is inferred from the input.size(1). Parameters
out_channels (int) – Number of c... | torch.generated.torch.nn.lazyconv2d#torch.nn.LazyConv2d |
cls_to_become
alias of Conv2d | torch.generated.torch.nn.lazyconv2d#torch.nn.LazyConv2d.cls_to_become |
class torch.nn.LazyConv3d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
A torch.nn.Conv3d module with lazy initialization of the in_channels argument of the Conv3d that is inferred from the input.size(1). Parameters
out_channels (int) – Number of c... | torch.generated.torch.nn.lazyconv3d#torch.nn.LazyConv3d |
cls_to_become
alias of Conv3d | torch.generated.torch.nn.lazyconv3d#torch.nn.LazyConv3d.cls_to_become |
class torch.nn.LazyConvTranspose1d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros') [source]
A torch.nn.ConvTranspose1d module with lazy initialization of the in_channels argument of the ConvTranspose1d that is inferred from the input.size(1). P... | torch.generated.torch.nn.lazyconvtranspose1d#torch.nn.LazyConvTranspose1d |
cls_to_become
alias of ConvTranspose1d | torch.generated.torch.nn.lazyconvtranspose1d#torch.nn.LazyConvTranspose1d.cls_to_become |
class torch.nn.LazyConvTranspose2d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros') [source]
A torch.nn.ConvTranspose2d module with lazy initialization of the in_channels argument of the ConvTranspose2d that is inferred from the input.size(1). P... | torch.generated.torch.nn.lazyconvtranspose2d#torch.nn.LazyConvTranspose2d |
cls_to_become
alias of ConvTranspose2d | torch.generated.torch.nn.lazyconvtranspose2d#torch.nn.LazyConvTranspose2d.cls_to_become |
class torch.nn.LazyConvTranspose3d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros') [source]
A torch.nn.ConvTranspose3d module with lazy initialization of the in_channels argument of the ConvTranspose3d that is inferred from the input.size(1). P... | torch.generated.torch.nn.lazyconvtranspose3d#torch.nn.LazyConvTranspose3d |
cls_to_become
alias of ConvTranspose3d | torch.generated.torch.nn.lazyconvtranspose3d#torch.nn.LazyConvTranspose3d.cls_to_become |
class torch.nn.LazyLinear(out_features, bias=True) [source]
A torch.nn.Linear module with lazy initialization. In this module, the weight and bias are of torch.nn.UninitializedParameter class. They will be initialized after the first call to forward is done and the module will become a regular torch.nn.Linear module.... | torch.generated.torch.nn.lazylinear#torch.nn.LazyLinear |
cls_to_become
alias of Linear | torch.generated.torch.nn.lazylinear#torch.nn.LazyLinear.cls_to_become |
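A sketch of the lazy-initialization behaviour described above (sizes illustrative):
>>> import torch
>>> from torch import nn
>>> m = nn.LazyLinear(out_features=8)      # in_features not known yet
>>> out = m(torch.randn(2, 20))            # first forward infers in_features=20
>>> m.weight.shape                         # module is now a regular Linear
torch.Size([8, 20])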
class torch.nn.LeakyReLU(negative_slope=0.01, inplace=False) [source]
Applies the element-wise function: \text{LeakyReLU}(x) = \max(0, x) + \text{negative\_slope} * \min(0, x)
or \text{LeakyReLU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\ \text{negative\_slope} \times x, & \text{ otherwise } \end{cases}... | torch.generated.torch.nn.leakyrelu#torch.nn.LeakyReLU |