class torch.nn.Linear(in_features, out_features, bias=True) [source] Applies a linear transformation to the incoming data: y = xA^T + b. This module supports TensorFloat32. Parameters in_features – size of each input sample out_features – size of each output sample bias – If set to False, the layer will ...
torch.generated.torch.nn.linear#torch.nn.Linear
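A minimal usage sketch (the batch and feature sizes here are illustrative, not part of the API):

import torch
import torch.nn as nn

m = nn.Linear(20, 30)          # maps 20-dim inputs to 30-dim outputs
x = torch.randn(128, 20)       # a batch of 128 samples
y = m(x)                       # computes y = x @ m.weight.T + m.bias
print(y.size())                # torch.Size([128, 30])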
class torch.nn.LocalResponseNorm(size, alpha=0.0001, beta=0.75, k=1.0) [source] Applies local response normalization over an input signal composed of several input planes, where channels occupy the second dimension. Applies normalization across channels. b_c = a_c \left( k + \frac{\alpha}{n} \sum_{c'=\max(0, c-n/2)}^{\min(N-1, c+n/2)} a_{c'}^2 \right)^{-\beta} ...
torch.generated.torch.nn.localresponsenorm#torch.nn.LocalResponseNorm
class torch.nn.LogSigmoid [source] Applies the element-wise function: \text{LogSigmoid}(x) = \log\left(\frac{1}{1 + \exp(-x)}\right) Shape: Input: (N, *) where * means any number of additional dimensions Output: (N, *), same shape as the input Examples: >>> m =...
torch.generated.torch.nn.logsigmoid#torch.nn.LogSigmoid
class torch.nn.LogSoftmax(dim=None) [source] Applies the \log(\text{Softmax}(x)) function to an n-dimensional input Tensor. The LogSoftmax formulation can be simplified as: \text{LogSoftmax}(x_i) = \log\left(\frac{\exp(x_i)}{\sum_j \exp(x_j)}\right) Sha...
torch.generated.torch.nn.logsoftmax#torch.nn.LogSoftmax
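A small sketch showing that the outputs are log-probabilities (sizes illustrative):

import torch
import torch.nn as nn

m = nn.LogSoftmax(dim=1)       # normalize over the class dimension
x = torch.randn(2, 5)
out = m(x)
print(out.exp().sum(dim=1))    # each row of exp(out) sums to ~1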
class torch.nn.LPPool1d(norm_type, kernel_size, stride=None, ceil_mode=False) [source] Applies a 1D power-average pooling over an input signal composed of several input planes. On each window, the function computed is: f(X) = \sqrt[p]{\sum_{x \in X} x^p} At p = \infty, one gets Max Pooling. At p = ...
torch.generated.torch.nn.lppool1d#torch.nn.LPPool1d
class torch.nn.LPPool2d(norm_type, kernel_size, stride=None, ceil_mode=False) [source] Applies a 2D power-average pooling over an input signal composed of several input planes. On each window, the function computed is: f(X) = \sqrt[p]{\sum_{x \in X} x^p} At p = \infty, one gets Max Pooling. At p = ...
torch.generated.torch.nn.lppool2d#torch.nn.LPPool2d
class torch.nn.LSTM(*args, **kwargs) [source] Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence. For each element in the input sequence, each layer computes the following function: i_t = \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{t-1} + b_{hi}), \quad f_t = \sigma(W_{if} x_t + b_{if} + W_{hf} h_{t-1} + b_{hf}), \quad g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{t-1} + b_{hg}), \quad o_t = \sigma(W_{io} x_t + b_{io} + W_{ho} h_{t-1} + b_{ho}) ...
torch.generated.torch.nn.lstm#torch.nn.LSTM
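A usage sketch with illustrative sizes (sequence length 5, batch 3, input size 10, hidden size 20, 2 layers):

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
x = torch.randn(5, 3, 10)          # (seq_len, batch, input_size)
output, (h_n, c_n) = lstm(x)       # initial hidden/cell states default to zeros
print(output.shape)                # torch.Size([5, 3, 20])
print(h_n.shape, c_n.shape)        # torch.Size([2, 3, 20]) each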
class torch.nn.LSTMCell(input_size, hidden_size, bias=True) [source] A long short-term memory (LSTM) cell. i = \sigma(W_{ii} x + b_{ii} + W_{hi} h + b_{hi}), \quad f = \sigma(W_{if} x + b_{if} + W_{hf} h + b_{hf}), \quad g = \tanh(W_{ig} x + b_{ig} + W_{hg} h + b_{hg}), \quad o = \sigma(W_{io} x + b_{io} + W_{ho} h + b_{ho}), \quad c' = f * c + i * g, \quad h' = o * \tanh(c') ...
torch.generated.torch.nn.lstmcell#torch.nn.LSTMCell
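A sketch of unrolling the cell manually over time (sizes illustrative):

import torch
import torch.nn as nn

cell = nn.LSTMCell(input_size=10, hidden_size=20)
x = torch.randn(6, 3, 10)          # (time, batch, feature)
h = torch.zeros(3, 20)             # initial hidden state
c = torch.zeros(3, 20)             # initial cell state
outputs = []
for t in range(x.size(0)):         # one cell step per time step
    h, c = cell(x[t], (h, c))
    outputs.append(h)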
class torch.nn.MarginRankingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean') [source] Creates a criterion that measures the loss given inputs x1, x2, two 1D mini-batch Tensors, and a label 1D mini-batch tensor y (containing 1 or -1). If y = 1 then it is assumed the first input should be ran...
torch.generated.torch.nn.marginrankingloss#torch.nn.MarginRankingLoss
class torch.nn.MaxPool1d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False) [source] Applies a 1D max pooling over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N, C, L) and output (N, C, L_out)...
torch.generated.torch.nn.maxpool1d#torch.nn.MaxPool1d
class torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False) [source] Applies a 2D max pooling over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N, C, H, W), output (N, C, H_out, W_out)...
torch.generated.torch.nn.maxpool2d#torch.nn.MaxPool2d
class torch.nn.MaxPool3d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False) [source] Applies a 3D max pooling over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N, C, D, H, W), output (N, C, D_out, H_out, W_out)...
torch.generated.torch.nn.maxpool3d#torch.nn.MaxPool3d
class torch.nn.MaxUnpool1d(kernel_size, stride=None, padding=0) [source] Computes a partial inverse of MaxPool1d. MaxPool1d is not fully invertible, since the non-maximal values are lost. MaxUnpool1d takes in as input the output of MaxPool1d including the indices of the maximal values and computes a partial inverse i...
torch.generated.torch.nn.maxunpool1d#torch.nn.MaxUnpool1d
class torch.nn.MaxUnpool2d(kernel_size, stride=None, padding=0) [source] Computes a partial inverse of MaxPool2d. MaxPool2d is not fully invertible, since the non-maximal values are lost. MaxUnpool2d takes in as input the output of MaxPool2d including the indices of the maximal values and computes a partial inverse i...
torch.generated.torch.nn.maxunpool2d#torch.nn.MaxUnpool2d
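A sketch pairing MaxPool2d (with return_indices=True) and MaxUnpool2d; non-maximal positions come back as zeros:

import torch
import torch.nn as nn

pool = nn.MaxPool2d(2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(2, stride=2)
x = torch.arange(16.).reshape(1, 1, 4, 4)
out, indices = pool(x)             # keep the locations of the maxima
restored = unpool(out, indices)    # maxima restored, other entries zero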
class torch.nn.MaxUnpool3d(kernel_size, stride=None, padding=0) [source] Computes a partial inverse of MaxPool3d. MaxPool3d is not fully invertible, since the non-maximal values are lost. MaxUnpool3d takes in as input the output of MaxPool3d including the indices of the maximal values and computes a partial inverse i...
torch.generated.torch.nn.maxunpool3d#torch.nn.MaxUnpool3d
class torch.nn.Module [source] Base class for all neural network modules. Your models should also subclass this class. Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes: import torch.nn as nn import torch.nn.functional as F class Mo...
torch.generated.torch.nn.module#torch.nn.Module
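The attribute-assignment pattern from the truncated example above, written out as a runnable sketch:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        # submodules assigned as attributes are registered automatically
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))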
add_module(name, module) [source] Adds a child module to the current module. The module can be accessed as an attribute using the given name. Parameters name (string) – name of the child module. The child module can be accessed from this module using the given name module (Module) – child module to be added to t...
torch.generated.torch.nn.module#torch.nn.Module.add_module
apply(fn) [source] Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init). Parameters fn (Module -> None) – function to be applied to each submodule Returns self Return type Module Example: >>...
torch.generated.torch.nn.module#torch.nn.Module.apply
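A sketch of the typical use named above, initializing parameters across a model (the init_weights helper is illustrative):

import torch.nn as nn

def init_weights(m):
    # runs once for every submodule, including the root
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
net.apply(init_weights)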
bfloat16() [source] Casts all floating point parameters and buffers to bfloat16 datatype. Returns self Return type Module
torch.generated.torch.nn.module#torch.nn.Module.bfloat16
buffers(recurse=True) [source] Returns an iterator over module buffers. Parameters recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields torch.Tensor – module buffer Example: >>> for buf in model.buffers(): ...
torch.generated.torch.nn.module#torch.nn.Module.buffers
children() [source] Returns an iterator over immediate children modules. Yields Module – a child module
torch.generated.torch.nn.module#torch.nn.Module.children
cpu() [source] Moves all model parameters and buffers to the CPU. Returns self Return type Module
torch.generated.torch.nn.module#torch.nn.Module.cpu
cuda(device=None) [source] Moves all model parameters and buffers to the GPU. This also makes associated parameters and buffers different objects, so it should be called before constructing the optimizer if the module will live on the GPU while being optimized. Parameters device (int, optional) – if specified, all paramete...
torch.generated.torch.nn.module#torch.nn.Module.cuda
double() [source] Casts all floating point parameters and buffers to double datatype. Returns self Return type Module
torch.generated.torch.nn.module#torch.nn.Module.double
dump_patches: bool = False This allows better BC support for load_state_dict(). In state_dict(), the version number will be saved in the attribute _metadata of the returned state dict, and thus pickled. _metadata is a dictionary with keys that follow the naming convention of the state dict. See _load_from_state_dict o...
torch.generated.torch.nn.module#torch.nn.Module.dump_patches
eval() [source] Sets the module in evaluation mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. This is equivalent to self.train(False). Returns self Return ty...
torch.generated.torch.nn.module#torch.nn.Module.eval
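A small sketch of the mode switch; Dropout is one of the modules it affects:

import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
model.eval()     # Dropout becomes a no-op in evaluation mode
model.train()    # restore training-time behavior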
extra_repr() [source] Sets the extra representation of the module. To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
torch.generated.torch.nn.module#torch.nn.Module.extra_repr
float() [source] Casts all floating point parameters and buffers to float datatype. Returns self Return type Module
torch.generated.torch.nn.module#torch.nn.Module.float
forward(*input) Defines the computation performed at every call. Should be overridden by all subclasses. Note Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while...
torch.generated.torch.nn.module#torch.nn.Module.forward
half() [source] Casts all floating point parameters and buffers to half datatype. Returns self Return type Module
torch.generated.torch.nn.module#torch.nn.Module.half
load_state_dict(state_dict, strict=True) [source] Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function. Parameters state_dict (dict) – a dict containing paramet...
torch.generated.torch.nn.module#torch.nn.Module.load_state_dict
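A round-trip sketch with state_dict(); the file name is illustrative:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)
torch.save(model.state_dict(), "weights.pt")        # illustrative path

restored = nn.Linear(4, 2)                          # same architecture
restored.load_state_dict(torch.load("weights.pt"))  # strict=True by default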
modules() [source] Returns an iterator over all modules in the network. Yields Module – a module in the network Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2) >>> net = nn.Sequential(l, l) >>> for idx, m in enumerate(net.mo...
torch.generated.torch.nn.module#torch.nn.Module.modules
named_buffers(prefix='', recurse=True) [source] Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself. Parameters prefix (str) – prefix to prepend to all buffer names. recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, ...
torch.generated.torch.nn.module#torch.nn.Module.named_buffers
named_children() [source] Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself. Yields (string, Module) – Tuple containing a name and child module Example: >>> for name, module in model.named_children(): >>> if name in ['conv4', 'conv5']: >>> ...
torch.generated.torch.nn.module#torch.nn.Module.named_children
named_modules(memo=None, prefix='') [source] Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself. Yields (string, Module) – Tuple of name and module Note Duplicate modules are returned only once. In the following example, l will be returned only ...
torch.generated.torch.nn.module#torch.nn.Module.named_modules
named_parameters(prefix='', recurse=True) [source] Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself. Parameters prefix (str) – prefix to prepend to all parameter names. recurse (bool) – if True, then yields parameters of this module and all submo...
torch.generated.torch.nn.module#torch.nn.Module.named_parameters
parameters(recurse=True) [source] Returns an iterator over module parameters. This is typically passed to an optimizer. Parameters recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module. Yields Parameter – module...
torch.generated.torch.nn.module#torch.nn.Module.parameters
register_backward_hook(hook) [source] Registers a backward hook on the module. This function is deprecated in favor of nn.Module.register_full_backward_hook() and the behavior of this function will change in future versions. Returns a handle that can be used to remove the added hook by calling handle.remove() Retu...
torch.generated.torch.nn.module#torch.nn.Module.register_backward_hook
register_buffer(name, tensor, persistent=True) [source] Adds a buffer to the module. This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will...
torch.generated.torch.nn.module#torch.nn.Module.register_buffer
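A sketch of keeping a non-trainable piece of state as a buffer (the module and names are illustrative):

import torch
import torch.nn as nn

class RunningMean(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # saved in state_dict() and moved by .to()/.cuda(), but not a Parameter
        self.register_buffer('mean', torch.zeros(dim))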
register_forward_hook(hook) [source] Registers a forward hook on the module. The hook will be called every time after forward() has computed an output. It should have the following signature: hook(module, input, output) -> None or modified output The input contains only the positional arguments given to the module. ...
torch.generated.torch.nn.module#torch.nn.Module.register_forward_hook
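A debugging sketch using the hook signature given above (the hook body is illustrative):

import torch
import torch.nn as nn

def shape_hook(module, input, output):
    print(type(module).__name__, tuple(output.shape))

m = nn.Linear(3, 5)
handle = m.register_forward_hook(shape_hook)
m(torch.randn(2, 3))     # prints: Linear (2, 5)
handle.remove()          # detach the hook when done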
register_forward_pre_hook(hook) [source] Registers a forward pre-hook on the module. The hook will be called every time before forward() is invoked. It should have the following signature: hook(module, input) -> None or modified input The input contains only the positional arguments given to the module. Keyword argu...
torch.generated.torch.nn.module#torch.nn.Module.register_forward_pre_hook
register_full_backward_hook(hook) [source] Registers a backward hook on the module. The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature: hook(module, grad_input, grad_output) -> tuple(Tensor) or None The grad_input and grad_output ...
torch.generated.torch.nn.module#torch.nn.Module.register_full_backward_hook
register_parameter(name, param) [source] Adds a parameter to the module. The parameter can be accessed as an attribute using given name. Parameters name (string) – name of the parameter. The parameter can be accessed from this module using the given name param (Parameter) – parameter to be added to the module.
torch.generated.torch.nn.module#torch.nn.Module.register_parameter
requires_grad_(requires_grad=True) [source] Changes whether autograd should record operations on parameters in this module. This method sets the parameters’ requires_grad attributes in-place. This method is helpful for freezing part of the module for finetuning or for training parts of a model individually (e.g., GAN training)...
torch.generated.torch.nn.module#torch.nn.Module.requires_grad_
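A sketch of the freezing use case mentioned above (the modules are illustrative):

import torch.nn as nn

backbone = nn.Linear(8, 8)
head = nn.Linear(8, 2)
backbone.requires_grad_(False)   # freeze in-place; head stays trainable
assert all(not p.requires_grad for p in backbone.parameters())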
state_dict(destination=None, prefix='', keep_vars=False) [source] Returns a dictionary containing a whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Returns a dictionary containing a whole state of the module ...
torch.generated.torch.nn.module#torch.nn.Module.state_dict
to(*args, **kwargs) [source] Moves and/or casts the parameters and buffers. This can be called as to(device=None, dtype=None, non_blocking=False), to(dtype, non_blocking=False), to(tensor, non_blocking=False), or to(memory_format=torch.channels_last). Its signature is simi...
torch.generated.torch.nn.module#torch.nn.Module.to
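Two of the call forms, sketched (a device-availability check is added for safety):

import torch
import torch.nn as nn

m = nn.Linear(4, 4)
m.to(torch.double)                # cast parameters/buffers to float64
if torch.cuda.is_available():
    m.to(torch.device("cuda:0"))  # move parameters/buffers to GPU 0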
train(mode=True) [source] Sets the module in training mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. Parameters mode (bool) – whether to set training mode (Tru...
torch.generated.torch.nn.module#torch.nn.Module.train
type(dst_type) [source] Casts all parameters and buffers to dst_type. Parameters dst_type (type or string) – the desired type Returns self Return type Module
torch.generated.torch.nn.module#torch.nn.Module.type
xpu(device=None) [source] Moves all model parameters and buffers to the XPU. This also makes associated parameters and buffers different objects, so it should be called before constructing the optimizer if the module will live on the XPU while being optimized. Parameters device (int, optional) – if specified, all parameter...
torch.generated.torch.nn.module#torch.nn.Module.xpu
zero_grad(set_to_none=False) [source] Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context. Parameters set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details.
torch.generated.torch.nn.module#torch.nn.Module.zero_grad
class torch.nn.ModuleDict(modules=None) [source] Holds submodules in a dictionary. ModuleDict can be indexed like a regular Python dictionary, but modules it contains are properly registered, and will be visible by all Module methods. ModuleDict is an ordered dictionary that respects the order of insertion, and in u...
torch.generated.torch.nn.moduledict#torch.nn.ModuleDict
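A sketch of keyed submodules; because they live in a ModuleDict, their parameters are visible to Module methods:

import torch.nn as nn

class Block(nn.Module):
    def __init__(self):
        super().__init__()
        self.activations = nn.ModuleDict({
            'relu': nn.ReLU(),
            'prelu': nn.PReLU(),
        })

    def forward(self, x, act='relu'):
        return self.activations[act](x)   # pick a registered module by key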
clear() [source] Remove all items from the ModuleDict.
torch.generated.torch.nn.moduledict#torch.nn.ModuleDict.clear
items() [source] Return an iterable of the ModuleDict key/value pairs.
torch.generated.torch.nn.moduledict#torch.nn.ModuleDict.items
keys() [source] Return an iterable of the ModuleDict keys.
torch.generated.torch.nn.moduledict#torch.nn.ModuleDict.keys
pop(key) [source] Remove key from the ModuleDict and return its module. Parameters key (string) – key to pop from the ModuleDict
torch.generated.torch.nn.moduledict#torch.nn.ModuleDict.pop
update(modules) [source] Update the ModuleDict with the key-value pairs from a mapping or an iterable, overwriting existing keys. Note If modules is an OrderedDict, a ModuleDict, or an iterable of key-value pairs, the order of new elements in it is preserved. Parameters modules (iterable) – a mapping (dictionary)...
torch.generated.torch.nn.moduledict#torch.nn.ModuleDict.update
values() [source] Return an iterable of the ModuleDict values.
torch.generated.torch.nn.moduledict#torch.nn.ModuleDict.values
class torch.nn.ModuleList(modules=None) [source] Holds submodules in a list. ModuleList can be indexed like a regular Python list, but modules it contains are properly registered, and will be visible by all Module methods. Parameters modules (iterable, optional) – an iterable of modules to add Example: class MyMo...
torch.generated.torch.nn.modulelist#torch.nn.ModuleList
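The truncated example above, written out: layers held in a ModuleList are registered, unlike layers in a plain Python list:

import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.linears = nn.ModuleList([nn.Linear(10, 10) for _ in range(10)])

    def forward(self, x):
        for layer in self.linears:   # iterate like a regular list
            x = layer(x)
        return x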
append(module) [source] Appends a given module to the end of the list. Parameters module (nn.Module) – module to append
torch.generated.torch.nn.modulelist#torch.nn.ModuleList.append
extend(modules) [source] Appends modules from a Python iterable to the end of the list. Parameters modules (iterable) – iterable of modules to append
torch.generated.torch.nn.modulelist#torch.nn.ModuleList.extend
insert(index, module) [source] Insert a given module before a given index in the list. Parameters index (int) – index to insert. module (nn.Module) – module to insert
torch.generated.torch.nn.modulelist#torch.nn.ModuleList.insert
class torch.nn.modules.lazy.LazyModuleMixin(*args, **kwargs) [source] A mixin for modules that lazily initialize parameters, also known as “lazy modules.” Such modules derive the shapes of their parameters from the first input(s) to their forward method. Until that fir...
torch.generated.torch.nn.modules.lazy.lazymodulemixin#torch.nn.modules.lazy.LazyModuleMixin
has_uninitialized_params() [source] Check if a module has parameters that are not initialized.
torch.generated.torch.nn.modules.lazy.lazymodulemixin#torch.nn.modules.lazy.LazyModuleMixin.has_uninitialized_params
initialize_parameters(*args, **kwargs) [source] Initialize parameters according to the input batch properties. This adds an interface to isolate parameter initialization from the forward pass when doing parameter shape inference.
torch.generated.torch.nn.modules.lazy.lazymodulemixin#torch.nn.modules.lazy.LazyModuleMixin.initialize_parameters
torch.nn.modules.module.register_module_backward_hook(hook) [source] Registers a backward hook common to all the modules. This function is deprecated in favor of nn.module.register_module_full_backward_hook() and the behavior of this function will change in future versions. Returns a handle that can be used to remo...
torch.generated.torch.nn.modules.module.register_module_backward_hook#torch.nn.modules.module.register_module_backward_hook
torch.nn.modules.module.register_module_forward_hook(hook) [source] Registers a global forward hook for all the modules. Warning This adds global state to the nn.module module and it is only intended for debugging/profiling purposes. The hook will be called every time after forward() has computed an output. It shoul...
torch.generated.torch.nn.modules.module.register_module_forward_hook#torch.nn.modules.module.register_module_forward_hook
torch.nn.modules.module.register_module_forward_pre_hook(hook) [source] Registers a forward pre-hook common to all modules. Warning This adds global state to the nn.module module and it is only intended for debugging/profiling purposes. The hook will be called every time before forward() is invoked. It should have ...
torch.generated.torch.nn.modules.module.register_module_forward_pre_hook#torch.nn.modules.module.register_module_forward_pre_hook
class torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean') [source] Creates a criterion that measures the mean squared error (squared L2 norm) between each element in the input x and target y. The unreduced (i.e. with reduction set to 'none') loss can be described as: \ell(x, y) = L = \{l_1, \dots, l_N\}^\top, \quad l_n = (x_n - y_n)^2 ...
torch.generated.torch.nn.mseloss#torch.nn.MSELoss
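A minimal sketch (shapes illustrative); with the default reduction='mean' the result is a scalar:

import torch
import torch.nn as nn

loss = nn.MSELoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)
output = loss(input, target)   # mean of (input - target) ** 2
output.backward()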
class torch.nn.MultiheadAttention(embed_dim, num_heads, dropout=0.0, bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None) [source] Allows the model to jointly attend to information from different representation subspaces. See Attention Is All You Need. \text{MultiHead}(Q, K, V) = \text{Concat}(\text{head}_1, \dots, \text{head}_h) W^O ...
torch.generated.torch.nn.multiheadattention#torch.nn.MultiheadAttention
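A self-attention sketch (sizes illustrative; inputs are (seq, batch, embed) in this version of the module):

import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=16, num_heads=8)
x = torch.randn(10, 2, 16)                 # (seq_len, batch, embed_dim)
attn_output, attn_weights = mha(x, x, x)   # query, key, value all set to x
print(attn_output.shape)                   # torch.Size([10, 2, 16])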
forward(query, key, value, key_padding_mask=None, need_weights=True, attn_mask=None) [source] Parameters query, key, value – map a query and a set of key-value pairs to an output. See “Attention Is All You Need” for more details. key_padding_mask – if provided, specified padding elements in the key will be ign...
torch.generated.torch.nn.multiheadattention#torch.nn.MultiheadAttention.forward
class torch.nn.MultiLabelMarginLoss(size_average=None, reduce=None, reduction='mean') [source] Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 2D Tensor of target class indices). For each sample ...
torch.generated.torch.nn.multilabelmarginloss#torch.nn.MultiLabelMarginLoss
class torch.nn.MultiLabelSoftMarginLoss(weight=None, size_average=None, reduce=None, reduction='mean') [source] Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input x and target y of size (N, C). For each sample in the minibatch: loss(x, y) = -\frac{1}{C} \sum_i y[i] \log...
torch.generated.torch.nn.multilabelsoftmarginloss#torch.nn.MultiLabelSoftMarginLoss
class torch.nn.MultiMarginLoss(p=1, margin=1.0, weight=None, size_average=None, reduce=None, reduction='mean') [source] Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 1D tensor of target class indices...
torch.generated.torch.nn.multimarginloss#torch.nn.MultiMarginLoss
class torch.nn.NLLLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean') [source] The negative log likelihood loss. It is useful to train a classification problem with C classes. If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. Th...
torch.generated.torch.nn.nllloss#torch.nn.NLLLoss
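NLLLoss expects log-probabilities, so it is typically paired with LogSoftmax (or replaced by CrossEntropyLoss, which fuses both). A sketch with illustrative sizes:

import torch
import torch.nn as nn

m = nn.LogSoftmax(dim=1)
loss = nn.NLLLoss()
input = torch.randn(3, 5, requires_grad=True)   # N=3 samples, C=5 classes
target = torch.tensor([1, 0, 4])                # one class index per sample
out = loss(m(input), target)
out.backward()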
class torch.nn.PairwiseDistance(p=2.0, eps=1e-06, keepdim=False) [source] Computes the batchwise pairwise distance between vectors v_1, v_2 using the p-norm: \Vert x \Vert_p = \left( \sum_{i=1}^n \vert x_i \vert^p \right)^{1/p}. Parameters p (real) – the norm degree. Default: 2 ...
torch.generated.torch.nn.pairwisedistance#torch.nn.PairwiseDistance
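A sketch computing one distance per row pair (sizes illustrative):

import torch
import torch.nn as nn

pdist = nn.PairwiseDistance(p=2)
a = torch.randn(100, 128)
b = torch.randn(100, 128)
d = pdist(a, b)          # Euclidean distance between matching rows
print(d.shape)           # torch.Size([100])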
torch.nn.parallel.data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None) [source] Evaluates module(input) in parallel across the GPUs given in device_ids. This is the functional version of the DataParallel module. Parameters module (Module) – the module to evaluate in parall...
torch.nn.functional#torch.nn.parallel.data_parallel
class torch.nn.parallel.DistributedDataParallel(module, device_ids=None, output_device=None, dim=0, broadcast_buffers=True, process_group=None, bucket_cap_mb=25, find_unused_parameters=False, check_reduction=False, gradient_as_bucket_view=False) [source] Implements distributed data parallelism that is based on torch....
torch.generated.torch.nn.parallel.distributeddataparallel#torch.nn.parallel.DistributedDataParallel
join(divide_by_initial_world_size=True, enable=True) [source] A context manager to be used in conjunction with an instance of torch.nn.parallel.DistributedDataParallel to be able to train with uneven inputs across participating processes. This context manager will keep track of already-joined DDP processes, and “shad...
torch.generated.torch.nn.parallel.distributeddataparallel#torch.nn.parallel.DistributedDataParallel.join
no_sync() [source] A context manager to disable gradient synchronizations across DDP processes. Within this context, gradients will be accumulated on module variables, which will later be synchronized in the first forward-backward pass exiting the context. Example: >>> ddp = torch.nn.parallel.DistributedDataParallel(...
torch.generated.torch.nn.parallel.distributeddataparallel#torch.nn.parallel.DistributedDataParallel.no_sync
register_comm_hook(state, hook) [source] Registers a communication hook, which gives users a flexible way to specify how DDP aggregates gradients across multiple workers. This hook would be very useful for researchers to try out new ideas. For example, this hook can be used to...
torch.generated.torch.nn.parallel.distributeddataparallel#torch.nn.parallel.DistributedDataParallel.register_comm_hook
class torch.nn.parameter.Parameter [source] A kind of Tensor that is to be considered a module parameter. Parameters are Tensor subclasses that have a very special property when used with Modules: when they’re assigned as Module attributes they are automatically added to the list of its parameters, and will appear...
torch.generated.torch.nn.parameter.parameter#torch.nn.parameter.Parameter
class torch.nn.parameter.UninitializedParameter [source] A parameter that is not initialized. Uninitialized parameters are a special case of torch.nn.Parameter where the shape of the data is still unknown. Unlike torch.nn.Parameter, uninitialized parameters hold no data, and attempting to access some properties, l...
torch.generated.torch.nn.parameter.uninitializedparameter#torch.nn.parameter.UninitializedParameter
materialize(shape, device=None, dtype=None) [source] Create a Parameter with the same properties as the uninitialized one. Given a shape, it materializes a parameter in the same device and with the same dtype as the current one, or as specified in the arguments. Parameters shape (tuple) – the shape for the ...
torch.generated.torch.nn.parameter.uninitializedparameter#torch.nn.parameter.UninitializedParameter.materialize
class torch.nn.ParameterDict(parameters=None) [source] Holds parameters in a dictionary. ParameterDict can be indexed like a regular Python dictionary, but parameters it contains are properly registered, and will be visible by all Module methods. ParameterDict is an ordered dictionary that respects the order of inse...
torch.generated.torch.nn.parameterdict#torch.nn.ParameterDict
clear() [source] Remove all items from the ParameterDict.
torch.generated.torch.nn.parameterdict#torch.nn.ParameterDict.clear
items() [source] Return an iterable of the ParameterDict key/value pairs.
torch.generated.torch.nn.parameterdict#torch.nn.ParameterDict.items
keys() [source] Return an iterable of the ParameterDict keys.
torch.generated.torch.nn.parameterdict#torch.nn.ParameterDict.keys
pop(key) [source] Remove key from the ParameterDict and return its parameter. Parameters key (string) – key to pop from the ParameterDict
torch.generated.torch.nn.parameterdict#torch.nn.ParameterDict.pop
update(parameters) [source] Update the ParameterDict with the key-value pairs from a mapping or an iterable, overwriting existing keys. Note If parameters is an OrderedDict, a ParameterDict, or an iterable of key-value pairs, the order of new elements in it is preserved. Parameters parameters (iterable) – a mappi...
torch.generated.torch.nn.parameterdict#torch.nn.ParameterDict.update
values() [source] Return an iterable of the ParameterDict values.
torch.generated.torch.nn.parameterdict#torch.nn.ParameterDict.values
class torch.nn.ParameterList(parameters=None) [source] Holds parameters in a list. ParameterList can be indexed like a regular Python list, but parameters it contains are properly registered, and will be visible by all Module methods. Parameters parameters (iterable, optional) – an iterable of Parameter to add Ex...
torch.generated.torch.nn.parameterlist#torch.nn.ParameterList
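The truncated example above, written out: parameters in a ParameterList appear in .parameters(), unlike those in a plain list:

import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.params = nn.ParameterList(
            [nn.Parameter(torch.randn(10, 10)) for _ in range(5)])

    def forward(self, x):
        for p in self.params:
            x = x @ p        # each entry is a trainable weight matrix
        return x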
append(parameter) [source] Appends a given parameter to the end of the list. Parameters parameter (nn.Parameter) – parameter to append
torch.generated.torch.nn.parameterlist#torch.nn.ParameterList.append
extend(parameters) [source] Appends parameters from a Python iterable to the end of the list. Parameters parameters (iterable) – iterable of parameters to append
torch.generated.torch.nn.parameterlist#torch.nn.ParameterList.extend
class torch.nn.PixelShuffle(upscale_factor) [source] Rearranges elements in a tensor of shape (*, C \times r^2, H, W) to a tensor of shape (*, C, H \times r, W \times r), where r is an upscale factor. This is useful for implementing efficient sub-pixel convolution with a stride of 1/r. ...
torch.generated.torch.nn.pixelshuffle#torch.nn.PixelShuffle
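A shape-check sketch with upscale factor r = 3:

import torch
import torch.nn as nn

ps = nn.PixelShuffle(3)
x = torch.randn(1, 9, 4, 4)    # C * r^2 = 1 * 9 input channels
y = ps(x)
print(y.shape)                 # torch.Size([1, 1, 12, 12])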
class torch.nn.PixelUnshuffle(downscale_factor) [source] Reverses the PixelShuffle operation by rearranging elements in a tensor of shape (*, C, H \times r, W \times r) to a tensor of shape (*, C \times r^2, H, W), where r is a downscale factor. See the paper: Real-Time Single Image and Vid...
torch.generated.torch.nn.pixelunshuffle#torch.nn.PixelUnshuffle
class torch.nn.PoissonNLLLoss(log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean') [source] Negative log likelihood loss with Poisson distribution of target. The loss can be described as: \text{target} \sim \mathrm{Poisson}(\text{input}), \quad \text{loss}(\text{input}, \text{target}) = \text{input} - \text{target} * \log(\text{input}) + \log(\text{target}!) ...
torch.generated.torch.nn.poissonnllloss#torch.nn.PoissonNLLLoss
class torch.nn.PReLU(num_parameters=1, init=0.25) [source] Applies the element-wise function: \text{PReLU}(x) = \max(0, x) + a * \min(0, x), or equivalently \text{PReLU}(x) = \begin{cases} x, & \text{if } x \geq 0 \\ ax, & \text{otherwise} \end{cases} Here a i...
torch.generated.torch.nn.prelu#torch.nn.PReLU
class torch.nn.qat.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None) [source] A Conv2d module attached with FakeQuantize modules for weight, used for quantization aware training. We adopt the same interface as torch.nn.Conv2d, plea...
torch.nn.qat#torch.nn.qat.Conv2d
classmethod from_float(mod) [source] Create a qat module from a float module or qparams_dict. Parameters mod – a float module, either produced by torch.quantization utilities or directly from the user
torch.nn.qat#torch.nn.qat.Conv2d.from_float
class torch.nn.qat.Linear(in_features, out_features, bias=True, qconfig=None) [source] A linear module attached with FakeQuantize modules for weight, used for quantization aware training. We adopt the same interface as torch.nn.Linear, please see https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for documentati...
torch.nn.qat#torch.nn.qat.Linear