| doc_content | doc_id |
|---|---|
load_state_dict(state_dict, strict=True)
Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function. Parameters
state_dict (dict) – a dict containing parameters and p... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.load_state_dict |
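A minimal sketch of `load_state_dict` in use, copying weights between two identically-shaped modules (the module shapes here are illustrative):

```python
import torch
import torch.nn as nn

src = nn.Linear(4, 2)
dst = nn.Linear(4, 2)

# With strict=True (the default) the key sets must match exactly.
dst.load_state_dict(src.state_dict())
assert torch.equal(dst.weight, src.weight)
assert torch.equal(dst.bias, src.bias)
```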
modules()
Returns an iterator over all modules in the network. Yields
Module – a module in the network Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.modules |
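The truncated example above can be completed as a short sketch of the duplicate-module behaviour:

```python
import torch.nn as nn

l = nn.Linear(2, 2)
net = nn.Sequential(l, l)

# The shared Linear is yielded only once; modules() also yields net itself.
mods = list(net.modules())
assert len(mods) == 2
assert mods[0] is net and mods[1] is l
```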
named_buffers(prefix='', recurse=True)
Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself. Parameters
prefix (str) – prefix to prepend to all buffer names.
recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields on... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.named_buffers |
named_children()
Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself. Yields
(string, Module) – Tuple containing a name and child module Example: >>> for name, module in model.named_children():
>>> if name in ['conv4', 'conv5']:
>>> pr... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.named_children |
named_modules(memo=None, prefix='')
Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself. Yields
(string, Module) – Tuple of name and module Note Duplicate modules are returned only once. In the following example, l will be returned only once. Ex... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.named_modules |
named_parameters(prefix='', recurse=True)
Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself. Parameters
prefix (str) – prefix to prepend to all parameter names.
recurse (bool) – if True, then yields parameters of this module and all submodules. Ot... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.named_parameters |
parameters(recurse=True)
Returns an iterator over module parameters. This is typically passed to an optimizer. Parameters
recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module. Yields
Parameter – module paramete... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.parameters |
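A minimal sketch of the typical use: handing `parameters()` straight to an optimizer.

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# The iterator yields the weight (3 values) and the bias (1 value).
n_params = sum(p.numel() for p in model.parameters())
assert n_params == 4
```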
register_backward_hook(hook)
Registers a backward hook on the module. This function is deprecated in favor of nn.Module.register_full_backward_hook() and the behavior of this function will change in future versions. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.register_backward_hook |
register_buffer(name, tensor, persistent=True)
Adds a buffer to the module. This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will be saved... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.register_buffer |
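A sketch of `register_buffer`, using a hypothetical module with BatchNorm-style running state:

```python
import torch
import torch.nn as nn

class WithBuffer(nn.Module):  # illustrative module, not from the docs above
    def __init__(self):
        super().__init__()
        # State that should travel with the module but is not trainable:
        self.register_buffer("running_mean", torch.zeros(4))

m = WithBuffer()
assert "running_mean" in m.state_dict()   # persistent buffers are saved
assert len(list(m.parameters())) == 0     # but they are not parameters
```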
register_forward_hook(hook)
Registers a forward hook on the module. The hook will be called every time after forward() has computed an output. It should have the following signature: hook(module, input, output) -> None or modified output
The input contains only the positional arguments given to the module. Keyword a... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.register_forward_hook |
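A minimal sketch of a forward hook with the `hook(module, input, output)` signature described above:

```python
import torch
import torch.nn as nn

seen = []

def hook(module, input, output):
    # Called after forward(); returning None leaves the output unchanged.
    seen.append(output.shape)

m = nn.Linear(3, 2)
handle = m.register_forward_hook(hook)
m(torch.ones(1, 3))
handle.remove()  # the returned handle detaches the hook again

assert seen == [torch.Size([1, 2])]
```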
register_forward_pre_hook(hook)
Registers a forward pre-hook on the module. The hook will be called every time before forward() is invoked. It should have the following signature: hook(module, input) -> None or modified input
The input contains only the positional arguments given to the module. Keyword arguments won... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.register_forward_pre_hook |
register_full_backward_hook(hook)
Registers a backward hook on the module. The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature: hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The grad_input and grad_output are tuple... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.register_full_backward_hook |
register_parameter(name, param)
Adds a parameter to the module. The parameter can be accessed as an attribute using given name. Parameters
name (string) – name of the parameter. The parameter can be accessed from this module using the given name
param (Parameter) – parameter to be added to the module. | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.register_parameter |
requires_grad_(requires_grad=True)
Change if autograd should record operations on parameters in this module. This method sets the parameters’ requires_grad attributes in-place. This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training). Parame... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.requires_grad_ |
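A sketch of the freezing use case mentioned above, assuming a small two-layer model:

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))

# Freeze the first layer for fine-tuning; only the second keeps gradients.
model[0].requires_grad_(False)

assert all(not p.requires_grad for p in model[0].parameters())
assert all(p.requires_grad for p in model[1].parameters())
```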
state_dict(destination=None, prefix='', keep_vars=False)
Returns a dictionary containing a whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Returns
a dictionary containing a whole state of the module Return ty... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.state_dict |
to(*args, **kwargs)
Moves and/or casts the parameters and buffers. This can be called as
to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts fl... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.to |
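A minimal sketch of the dtype-only overload of `to()`:

```python
import torch
import torch.nn as nn

m = nn.Linear(2, 2)
# Casts parameters and buffers in place and returns self.
m.to(torch.float64)
assert m.weight.dtype == torch.float64
assert m.bias.dtype == torch.float64
```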
train(mode=True)
Sets the module in training mode. This has an effect only on certain modules. See documentations of particular modules for details of their behaviors in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. Parameters
mode (bool) – whether to set training mode (True) or eva... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.train |
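A sketch of toggling the training flag on a module that is affected by it:

```python
import torch.nn as nn

m = nn.Dropout(p=0.5)
m.train()            # enable training behaviour (dropout active)
assert m.training
m.train(False)       # equivalent to m.eval()
assert not m.training
```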
type(dst_type)
Casts all parameters and buffers to dst_type. Parameters
dst_type (type or string) – the desired type Returns
self Return type
Module | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.type |
xpu(device=None)
Moves all model parameters and buffers to the XPU. This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on XPU while being optimized. Parameters
device (int, optional) – if specified, all parameters will be... | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.xpu |
zero_grad(set_to_none=False)
Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context. Parameters
set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details. | torch.generated.torch.nn.unflatten#torch.nn.Unflatten.zero_grad |
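A minimal sketch of `zero_grad`, here passing `set_to_none=True` explicitly:

```python
import torch
import torch.nn as nn

m = nn.Linear(2, 1)
m(torch.ones(1, 2)).sum().backward()
assert m.weight.grad is not None

m.zero_grad(set_to_none=True)
assert m.weight.grad is None
```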
class torch.nn.Unfold(kernel_size, dilation=1, padding=0, stride=1) [source]
Extracts sliding local blocks from a batched input tensor. Consider a batched input tensor of shape (N, C, *), where N is the batch dimension, C is the channel dimension, and * represents arbitrary spatial dimensions. This opera... | torch.generated.torch.nn.unfold#torch.nn.Unfold |
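A sketch of `nn.Unfold` on a small input, to make the block-extraction concrete:

```python
import torch
import torch.nn as nn

# Extract 2x2 sliding blocks from a single 1-channel 4x4 "image".
unfold = nn.Unfold(kernel_size=2)
x = torch.arange(16.0).reshape(1, 1, 4, 4)
blocks = unfold(x)

# Output shape is (N, C * prod(kernel_size), L); 3x3 = 9 block positions here.
assert blocks.shape == (1, 4, 9)
```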
class torch.nn.Upsample(size=None, scale_factor=None, mode='nearest', align_corners=None) [source]
Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data. The input data is assumed to be of the form minibatch x channels x [optional depth] x [optional height] x width. Hence, for spatial in... | torch.generated.torch.nn.upsample#torch.nn.Upsample |
class torch.nn.UpsamplingBilinear2d(size=None, scale_factor=None) [source]
Applies a 2D bilinear upsampling to an input signal composed of several input channels. To specify the scale, it takes either the size or the scale_factor as its constructor argument. When size is given, it is the output size of the image (h,... | torch.generated.torch.nn.upsamplingbilinear2d#torch.nn.UpsamplingBilinear2d |
class torch.nn.UpsamplingNearest2d(size=None, scale_factor=None) [source]
Applies a 2D nearest neighbor upsampling to an input signal composed of several input channels. To specify the scale, it takes either the size or the scale_factor as its constructor argument. When size is given, it is the output size of the im... | torch.generated.torch.nn.upsamplingnearest2d#torch.nn.UpsamplingNearest2d |
torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=2.0) [source]
Clips gradient norm of an iterable of parameters. The norm is computed over all gradients together, as if they were concatenated into a single vector. Gradients are modified in-place. Parameters
parameters (Iterable[Tensor] or Tensor) – ... | torch.generated.torch.nn.utils.clip_grad_norm_#torch.nn.utils.clip_grad_norm_ |
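A minimal sketch of `clip_grad_norm_`; the bound check at the end assumes the usual behaviour that the combined post-clip norm does not exceed max_norm:

```python
import torch
import torch.nn as nn
from torch.nn.utils import clip_grad_norm_

m = nn.Linear(10, 10)
m(torch.ones(1, 10)).sum().backward()

total_norm = clip_grad_norm_(m.parameters(), max_norm=1.0)

# Gradients were scaled in place so their combined norm is at most max_norm.
clipped = torch.norm(torch.cat([p.grad.flatten() for p in m.parameters()]))
assert float(clipped) <= 1.0 + 1e-5
```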
torch.nn.utils.clip_grad_value_(parameters, clip_value) [source]
Clips gradient of an iterable of parameters at specified value. Gradients are modified in-place. Parameters
parameters (Iterable[Tensor] or Tensor) – an iterable of Tensors or a single Tensor that will have gradients clipped
clip_value (float or... | torch.generated.torch.nn.utils.clip_grad_value_#torch.nn.utils.clip_grad_value_ |
torch.nn.utils.parameters_to_vector(parameters) [source]
Convert parameters to one vector. Parameters
parameters (Iterable[Tensor]) – an iterator of Tensors that are the parameters of a model. Returns
The parameters represented by a single vector | torch.generated.torch.nn.utils.parameters_to_vector#torch.nn.utils.parameters_to_vector |
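A minimal sketch of `parameters_to_vector`:

```python
import torch.nn as nn
from torch.nn.utils import parameters_to_vector

m = nn.Linear(3, 2)
vec = parameters_to_vector(m.parameters())

# weight (2x3) and bias (2) flattened into a single 1-D tensor.
assert vec.shape == (8,)
```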
class torch.nn.utils.prune.BasePruningMethod [source]
Abstract base class for creation of new pruning techniques. Provides a skeleton for customization requiring the overriding of methods such as compute_mask() and apply().
classmethod apply(module, name, *args, importance_scores=None, **kwargs) [source]
Adds the... | torch.generated.torch.nn.utils.prune.basepruningmethod#torch.nn.utils.prune.BasePruningMethod |
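A sketch of subclassing BasePruningMethod by overriding compute_mask(), as described above. ThresholdPruning and its threshold parameter are hypothetical names for illustration:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

class ThresholdPruning(prune.BasePruningMethod):
    """Hypothetical method: prune entries whose magnitude falls below a threshold."""
    PRUNING_TYPE = "unstructured"

    def __init__(self, threshold):
        self.threshold = threshold

    def compute_mask(self, t, default_mask):
        # Keep only entries the default mask kept AND that exceed the threshold.
        return default_mask * (t.abs() > self.threshold)

m = nn.Linear(3, 3)
# An absurdly large threshold prunes every weight:
ThresholdPruning.apply(m, "weight", threshold=1e9)
assert bool((m.weight == 0).all())
```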
classmethod apply(module, name, *args, importance_scores=None, **kwargs) [source]
Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters
module (nn.Module) – module containing the tensor to prune
name (str) ... | torch.generated.torch.nn.utils.prune.basepruningmethod#torch.nn.utils.prune.BasePruningMethod.apply |
apply_mask(module) [source]
Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters
module (nn.Module) – module containing the tensor to prune Returns
pruned versi... | torch.generated.torch.nn.utils.prune.basepruningmethod#torch.nn.utils.prune.BasePruningMethod.apply_mask |
abstract compute_mask(t, default_mask) [source]
Computes and returns a mask for the input tensor t. Starting from a base default_mask (which should be a mask of ones if the tensor has not been pruned yet), generate a random mask to apply on top of the default_mask according to the specific pruning method recipe. Par... | torch.generated.torch.nn.utils.prune.basepruningmethod#torch.nn.utils.prune.BasePruningMethod.compute_mask |
prune(t, default_mask=None, importance_scores=None) [source]
Computes and returns a pruned version of input tensor t according to the pruning rule specified in compute_mask(). Parameters
t (torch.Tensor) – tensor to prune (of same dimensions as default_mask).
importance_scores (torch.Tensor) – tensor of importan... | torch.generated.torch.nn.utils.prune.basepruningmethod#torch.nn.utils.prune.BasePruningMethod.prune |
remove(module) [source]
Removes the pruning reparameterization from a module. The pruned parameter named name remains permanently pruned, and the parameter named name+'_orig' is removed from the parameter list. Similarly, the buffer named name+'_mask' is removed from the buffers. Note Pruning itself is NOT undone or... | torch.generated.torch.nn.utils.prune.basepruningmethod#torch.nn.utils.prune.BasePruningMethod.remove |
class torch.nn.utils.prune.CustomFromMask(mask) [source]
classmethod apply(module, name, mask) [source]
Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters
module (nn.Module) – module containing the te... | torch.generated.torch.nn.utils.prune.customfrommask#torch.nn.utils.prune.CustomFromMask |
classmethod apply(module, name, mask) [source]
Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters
module (nn.Module) – module containing the tensor to prune
name (str) – parameter name within module on w... | torch.generated.torch.nn.utils.prune.customfrommask#torch.nn.utils.prune.CustomFromMask.apply |
apply_mask(module)
Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters
module (nn.Module) – module containing the tensor to prune Returns
pruned version of the... | torch.generated.torch.nn.utils.prune.customfrommask#torch.nn.utils.prune.CustomFromMask.apply_mask |
prune(t, default_mask=None, importance_scores=None)
Computes and returns a pruned version of input tensor t according to the pruning rule specified in compute_mask(). Parameters
t (torch.Tensor) – tensor to prune (of same dimensions as default_mask).
importance_scores (torch.Tensor) – tensor of importance scores... | torch.generated.torch.nn.utils.prune.customfrommask#torch.nn.utils.prune.CustomFromMask.prune |
remove(module)
Removes the pruning reparameterization from a module. The pruned parameter named name remains permanently pruned, and the parameter named name+'_orig' is removed from the parameter list. Similarly, the buffer named name+'_mask' is removed from the buffers. Note Pruning itself is NOT undone or reversed... | torch.generated.torch.nn.utils.prune.customfrommask#torch.nn.utils.prune.CustomFromMask.remove |
torch.nn.utils.prune.custom_from_mask(module, name, mask) [source]
Prunes tensor corresponding to parameter called name in module by applying the pre-computed mask in mask. Modifies module in place (and also returns the modified module) by: 1) adding a named buffer called name+'_mask' corresponding to the binary mask ... | torch.generated.torch.nn.utils.prune.custom_from_mask#torch.nn.utils.prune.custom_from_mask |
torch.nn.utils.prune.global_unstructured(parameters, pruning_method, importance_scores=None, **kwargs) [source]
Globally prunes tensors corresponding to all parameters in parameters by applying the specified pruning_method. Modifies modules in place by: 1) adding a named buffer called name+'_mask' corresponding to th... | torch.generated.torch.nn.utils.prune.global_unstructured#torch.nn.utils.prune.global_unstructured |
class torch.nn.utils.prune.Identity [source]
Utility pruning method that does not prune any units but generates the pruning parametrization with a mask of ones.
classmethod apply(module, name) [source]
Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the ... | torch.generated.torch.nn.utils.prune.identity#torch.nn.utils.prune.Identity |
classmethod apply(module, name) [source]
Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters
module (nn.Module) – module containing the tensor to prune
name (str) – parameter name within module on which p... | torch.generated.torch.nn.utils.prune.identity#torch.nn.utils.prune.Identity.apply |
apply_mask(module)
Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters
module (nn.Module) – module containing the tensor to prune Returns
pruned version of the... | torch.generated.torch.nn.utils.prune.identity#torch.nn.utils.prune.Identity.apply_mask |
prune(t, default_mask=None, importance_scores=None)
Computes and returns a pruned version of input tensor t according to the pruning rule specified in compute_mask(). Parameters
t (torch.Tensor) – tensor to prune (of same dimensions as default_mask).
importance_scores (torch.Tensor) – tensor of importance scores... | torch.generated.torch.nn.utils.prune.identity#torch.nn.utils.prune.Identity.prune |
remove(module)
Removes the pruning reparameterization from a module. The pruned parameter named name remains permanently pruned, and the parameter named name+'_orig' is removed from the parameter list. Similarly, the buffer named name+'_mask' is removed from the buffers. Note Pruning itself is NOT undone or reversed... | torch.generated.torch.nn.utils.prune.identity#torch.nn.utils.prune.Identity.remove |
torch.nn.utils.prune.is_pruned(module) [source]
Check whether module is pruned by looking for forward_pre_hooks in its modules that inherit from the BasePruningMethod. Parameters
module (nn.Module) – object that is either pruned or unpruned Returns
binary answer to whether module is pruned. Examples >>> m = nn.... | torch.generated.torch.nn.utils.prune.is_pruned#torch.nn.utils.prune.is_pruned |
class torch.nn.utils.prune.L1Unstructured(amount) [source]
Prune (currently unpruned) units in a tensor by zeroing out the ones with the lowest L1-norm. Parameters
amount (int or float) – quantity of parameters to prune. If float, should be between 0.0 and 1.0 and represent the fraction of parameters to prune. If i... | torch.generated.torch.nn.utils.prune.l1unstructured#torch.nn.utils.prune.L1Unstructured |
classmethod apply(module, name, amount, importance_scores=None) [source]
Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters
module (nn.Module) – module containing the tensor to prune
name (str) – paramet... | torch.generated.torch.nn.utils.prune.l1unstructured#torch.nn.utils.prune.L1Unstructured.apply |
apply_mask(module)
Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters
module (nn.Module) – module containing the tensor to prune Returns
pruned version of the... | torch.generated.torch.nn.utils.prune.l1unstructured#torch.nn.utils.prune.L1Unstructured.apply_mask |
prune(t, default_mask=None, importance_scores=None)
Computes and returns a pruned version of input tensor t according to the pruning rule specified in compute_mask(). Parameters
t (torch.Tensor) – tensor to prune (of same dimensions as default_mask).
importance_scores (torch.Tensor) – tensor of importance scores... | torch.generated.torch.nn.utils.prune.l1unstructured#torch.nn.utils.prune.L1Unstructured.prune |
remove(module)
Removes the pruning reparameterization from a module. The pruned parameter named name remains permanently pruned, and the parameter named name+'_orig' is removed from the parameter list. Similarly, the buffer named name+'_mask' is removed from the buffers. Note Pruning itself is NOT undone or reversed... | torch.generated.torch.nn.utils.prune.l1unstructured#torch.nn.utils.prune.L1Unstructured.remove |
torch.nn.utils.prune.l1_unstructured(module, name, amount, importance_scores=None) [source]
Prunes tensor corresponding to parameter called name in module by removing the specified amount of (currently unpruned) units with the lowest L1-norm. Modifies module in place (and also returns the modified module) by: 1) addin... | torch.generated.torch.nn.utils.prune.l1_unstructured#torch.nn.utils.prune.l1_unstructured |
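A minimal sketch of `l1_unstructured` and the reparametrization it installs (the exact zero count assumes no original weight happens to be exactly zero):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

m = nn.Linear(4, 4)
prune.l1_unstructured(m, name="weight", amount=0.5)

# Pruning adds weight_orig (parameter) and weight_mask (buffer);
# m.weight is now their elementwise product.
assert hasattr(m, "weight_orig")
assert "weight_mask" in dict(m.named_buffers())
assert int((m.weight == 0).sum()) == 8  # half of the 16 weights zeroed
```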
class torch.nn.utils.prune.LnStructured(amount, n, dim=-1) [source]
Prune entire (currently unpruned) channels in a tensor based on their Ln-norm. Parameters
amount (int or float) – quantity of channels to prune. If float, should be between 0.0 and 1.0 and represent the fraction of parameters to prune. If int, it... | torch.generated.torch.nn.utils.prune.lnstructured#torch.nn.utils.prune.LnStructured |
classmethod apply(module, name, amount, n, dim, importance_scores=None) [source]
Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters
module (nn.Module) – module containing the tensor to prune
name (str) –... | torch.generated.torch.nn.utils.prune.lnstructured#torch.nn.utils.prune.LnStructured.apply |
apply_mask(module)
Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters
module (nn.Module) – module containing the tensor to prune Returns
pruned version of the... | torch.generated.torch.nn.utils.prune.lnstructured#torch.nn.utils.prune.LnStructured.apply_mask |
compute_mask(t, default_mask) [source]
Computes and returns a mask for the input tensor t. Starting from a base default_mask (which should be a mask of ones if the tensor has not been pruned yet), generate a mask to apply on top of the default_mask by zeroing out the channels along the specified dim with the lowest L... | torch.generated.torch.nn.utils.prune.lnstructured#torch.nn.utils.prune.LnStructured.compute_mask |
prune(t, default_mask=None, importance_scores=None)
Computes and returns a pruned version of input tensor t according to the pruning rule specified in compute_mask(). Parameters
t (torch.Tensor) – tensor to prune (of same dimensions as default_mask).
importance_scores (torch.Tensor) – tensor of importance scores... | torch.generated.torch.nn.utils.prune.lnstructured#torch.nn.utils.prune.LnStructured.prune |
remove(module)
Removes the pruning reparameterization from a module. The pruned parameter named name remains permanently pruned, and the parameter named name+'_orig' is removed from the parameter list. Similarly, the buffer named name+'_mask' is removed from the buffers. Note Pruning itself is NOT undone or reversed... | torch.generated.torch.nn.utils.prune.lnstructured#torch.nn.utils.prune.LnStructured.remove |
torch.nn.utils.prune.ln_structured(module, name, amount, n, dim, importance_scores=None) [source]
Prunes tensor corresponding to parameter called name in module by removing the specified amount of (currently unpruned) channels along the specified dim with the lowest Ln-norm. Modifies module in place (and also ret... | torch.generated.torch.nn.utils.prune.ln_structured#torch.nn.utils.prune.ln_structured |
class torch.nn.utils.prune.PruningContainer(*args) [source]
Container holding a sequence of pruning methods for iterative pruning. Keeps track of the order in which pruning methods are applied and handles combining successive pruning calls. Accepts as argument an instance of a BasePruningMethod or an iterable of them... | torch.generated.torch.nn.utils.prune.pruningcontainer#torch.nn.utils.prune.PruningContainer |
add_pruning_method(method) [source]
Adds a child pruning method to the container. Parameters
method (subclass of BasePruningMethod) – child pruning method to be added to the container. | torch.generated.torch.nn.utils.prune.pruningcontainer#torch.nn.utils.prune.PruningContainer.add_pruning_method |
classmethod apply(module, name, *args, importance_scores=None, **kwargs)
Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters
module (nn.Module) – module containing the tensor to prune
name (str) – paramet... | torch.generated.torch.nn.utils.prune.pruningcontainer#torch.nn.utils.prune.PruningContainer.apply |
apply_mask(module)
Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters
module (nn.Module) – module containing the tensor to prune Returns
pruned version of the... | torch.generated.torch.nn.utils.prune.pruningcontainer#torch.nn.utils.prune.PruningContainer.apply_mask |
compute_mask(t, default_mask) [source]
Applies the latest method by computing the new partial masks and returning its combination with the default_mask. The new partial mask should be computed on the entries or channels that were not zeroed out by the default_mask. Which portions of the tensor t the new mask will be ... | torch.generated.torch.nn.utils.prune.pruningcontainer#torch.nn.utils.prune.PruningContainer.compute_mask |
prune(t, default_mask=None, importance_scores=None)
Computes and returns a pruned version of input tensor t according to the pruning rule specified in compute_mask(). Parameters
t (torch.Tensor) – tensor to prune (of same dimensions as default_mask).
importance_scores (torch.Tensor) – tensor of importance scores... | torch.generated.torch.nn.utils.prune.pruningcontainer#torch.nn.utils.prune.PruningContainer.prune |
remove(module)
Removes the pruning reparameterization from a module. The pruned parameter named name remains permanently pruned, and the parameter named name+'_orig' is removed from the parameter list. Similarly, the buffer named name+'_mask' is removed from the buffers. Note Pruning itself is NOT undone or reversed... | torch.generated.torch.nn.utils.prune.pruningcontainer#torch.nn.utils.prune.PruningContainer.remove |
class torch.nn.utils.prune.RandomStructured(amount, dim=-1) [source]
Prune entire (currently unpruned) channels in a tensor at random. Parameters
amount (int or float) – quantity of parameters to prune. If float, should be between 0.0 and 1.0 and represent the fraction of parameters to prune. If int, it represent... | torch.generated.torch.nn.utils.prune.randomstructured#torch.nn.utils.prune.RandomStructured |
classmethod apply(module, name, amount, dim=-1) [source]
Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters
module (nn.Module) – module containing the tensor to prune
name (str) – parameter name within m... | torch.generated.torch.nn.utils.prune.randomstructured#torch.nn.utils.prune.RandomStructured.apply |
apply_mask(module)
Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters
module (nn.Module) – module containing the tensor to prune Returns
pruned version of the... | torch.generated.torch.nn.utils.prune.randomstructured#torch.nn.utils.prune.RandomStructured.apply_mask |
compute_mask(t, default_mask) [source]
Computes and returns a mask for the input tensor t. Starting from a base default_mask (which should be a mask of ones if the tensor has not been pruned yet), generate a random mask to apply on top of the default_mask by randomly zeroing out channels along the specified dim of th... | torch.generated.torch.nn.utils.prune.randomstructured#torch.nn.utils.prune.RandomStructured.compute_mask |
prune(t, default_mask=None, importance_scores=None)
Computes and returns a pruned version of input tensor t according to the pruning rule specified in compute_mask(). Parameters
t (torch.Tensor) – tensor to prune (of same dimensions as default_mask).
importance_scores (torch.Tensor) – tensor of importance scores... | torch.generated.torch.nn.utils.prune.randomstructured#torch.nn.utils.prune.RandomStructured.prune |
remove(module)
Removes the pruning reparameterization from a module. The pruned parameter named name remains permanently pruned, and the parameter named name+'_orig' is removed from the parameter list. Similarly, the buffer named name+'_mask' is removed from the buffers. Note Pruning itself is NOT undone or reversed... | torch.generated.torch.nn.utils.prune.randomstructured#torch.nn.utils.prune.RandomStructured.remove |
class torch.nn.utils.prune.RandomUnstructured(amount) [source]
Prune (currently unpruned) units in a tensor at random. Parameters
name (str) – parameter name within module on which pruning will act.
amount (int or float) – quantity of parameters to prune. If float, should be between 0.0 and 1.0 and represent the... | torch.generated.torch.nn.utils.prune.randomunstructured#torch.nn.utils.prune.RandomUnstructured |
classmethod apply(module, name, amount) [source]
Adds the forward pre-hook that enables pruning on the fly and the reparametrization of a tensor in terms of the original tensor and the pruning mask. Parameters
module (nn.Module) – module containing the tensor to prune
name (str) – parameter name within module on... | torch.generated.torch.nn.utils.prune.randomunstructured#torch.nn.utils.prune.RandomUnstructured.apply |
apply_mask(module)
Simply handles the multiplication between the parameter being pruned and the generated mask. Fetches the mask and the original tensor from the module and returns the pruned version of the tensor. Parameters
module (nn.Module) – module containing the tensor to prune Returns
pruned version of the... | torch.generated.torch.nn.utils.prune.randomunstructured#torch.nn.utils.prune.RandomUnstructured.apply_mask |
prune(t, default_mask=None, importance_scores=None)
Computes and returns a pruned version of input tensor t according to the pruning rule specified in compute_mask(). Parameters
t (torch.Tensor) – tensor to prune (of same dimensions as default_mask).
importance_scores (torch.Tensor) – tensor of importance scores... | torch.generated.torch.nn.utils.prune.randomunstructured#torch.nn.utils.prune.RandomUnstructured.prune |
remove(module)
Removes the pruning reparameterization from a module. The pruned parameter named name remains permanently pruned, and the parameter named name+'_orig' is removed from the parameter list. Similarly, the buffer named name+'_mask' is removed from the buffers. Note Pruning itself is NOT undone or reversed... | torch.generated.torch.nn.utils.prune.randomunstructured#torch.nn.utils.prune.RandomUnstructured.remove |
torch.nn.utils.prune.random_structured(module, name, amount, dim) [source]
Prunes tensor corresponding to parameter called name in module by removing the specified amount of (currently unpruned) channels along the specified dim selected at random. Modifies module in place (and also returns the modified module) by: 1) ... | torch.generated.torch.nn.utils.prune.random_structured#torch.nn.utils.prune.random_structured |
torch.nn.utils.prune.random_unstructured(module, name, amount) [source]
Prunes tensor corresponding to parameter called name in module by removing the specified amount of (currently unpruned) units selected at random. Modifies module in place (and also returns the modified module) by: 1) adding a named buffer called n... | torch.generated.torch.nn.utils.prune.random_unstructured#torch.nn.utils.prune.random_unstructured |
torch.nn.utils.prune.remove(module, name) [source]
Removes the pruning reparameterization from a module and the pruning method from the forward hook. The pruned parameter named name remains permanently pruned, and the parameter named name+'_orig' is removed from the parameter list. Similarly, the buffer named name+'_... | torch.generated.torch.nn.utils.prune.remove#torch.nn.utils.prune.remove |
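A sketch of making pruning permanent with `prune.remove`, after first pruning a parameter:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

m = nn.Linear(4, 4)
prune.l1_unstructured(m, name="weight", amount=2)
assert prune.is_pruned(m)

# Drop weight_orig / weight_mask and keep the already-pruned tensor
# as the plain weight parameter. Pruning itself is not undone.
prune.remove(m, "weight")
assert not prune.is_pruned(m)
assert not hasattr(m, "weight_orig")
```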
torch.nn.utils.remove_spectral_norm(module, name='weight') [source]
Removes the spectral normalization reparameterization from a module. Parameters
module (Module) – containing module
name (str, optional) – name of weight parameter Example >>> m = spectral_norm(nn.Linear(40, 10))
>>> remove_spectral_norm(m) | torch.generated.torch.nn.utils.remove_spectral_norm#torch.nn.utils.remove_spectral_norm |
torch.nn.utils.remove_weight_norm(module, name='weight') [source]
Removes the weight normalization reparameterization from a module. Parameters
module (Module) – containing module
name (str, optional) – name of weight parameter Example >>> m = weight_norm(nn.Linear(20, 40))
>>> remove_weight_norm(m) | torch.generated.torch.nn.utils.remove_weight_norm#torch.nn.utils.remove_weight_norm |
class torch.nn.utils.rnn.PackedSequence [source]
Holds the data and list of batch_sizes of a packed sequence. All RNN modules accept packed sequences as inputs. Note Instances of this class should never be created manually. They are meant to be instantiated by functions like pack_padded_sequence(). Batch sizes repre... | torch.generated.torch.nn.utils.rnn.packedsequence#torch.nn.utils.rnn.PackedSequence |
property batch_sizes
Alias for field number 1 | torch.generated.torch.nn.utils.rnn.packedsequence#torch.nn.utils.rnn.PackedSequence.batch_sizes |
count()
Return number of occurrences of value. | torch.generated.torch.nn.utils.rnn.packedsequence#torch.nn.utils.rnn.PackedSequence.count |
property data
Alias for field number 0 | torch.generated.torch.nn.utils.rnn.packedsequence#torch.nn.utils.rnn.PackedSequence.data |
index()
Return first index of value. Raises ValueError if the value is not present. | torch.generated.torch.nn.utils.rnn.packedsequence#torch.nn.utils.rnn.PackedSequence.index |
property is_cuda
Returns true if self.data is stored on a GPU | torch.generated.torch.nn.utils.rnn.packedsequence#torch.nn.utils.rnn.PackedSequence.is_cuda
is_pinned() [source]
Returns true if self.data is stored in pinned memory | torch.generated.torch.nn.utils.rnn.packedsequence#torch.nn.utils.rnn.PackedSequence.is_pinned
property sorted_indices
Alias for field number 2 | torch.generated.torch.nn.utils.rnn.packedsequence#torch.nn.utils.rnn.PackedSequence.sorted_indices |
to(*args, **kwargs) [source]
Performs dtype and/or device conversion on self.data. It has similar signature as torch.Tensor.to(), except optional arguments like non_blocking and copy should be passed as kwargs, not args, or they will not apply to the index tensors. Note If the self.data Tensor already has the correc... | torch.generated.torch.nn.utils.rnn.packedsequence#torch.nn.utils.rnn.PackedSequence.to |
property unsorted_indices
Alias for field number 3 | torch.generated.torch.nn.utils.rnn.packedsequence#torch.nn.utils.rnn.PackedSequence.unsorted_indices |
torch.nn.utils.rnn.pack_padded_sequence(input, lengths, batch_first=False, enforce_sorted=True) [source]
Packs a Tensor containing padded sequences of variable length. input can be of size T x B x * where T is the length of the longest sequence (equal to lengths[0]), B is the batch size, and * is any number of dimens... | torch.generated.torch.nn.utils.rnn.pack_padded_sequence#torch.nn.utils.rnn.pack_padded_sequence |
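A small worked example of packing (values chosen for illustration): two sequences of lengths 3 and 2 padded to T=3, with batch_first=True. The packed data interleaves time steps across the batch, and batch_sizes records how many sequences are still active at each step:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

padded = torch.tensor([[1, 2, 3],
                       [4, 5, 0]])  # second sequence padded with 0
packed = pack_padded_sequence(padded, lengths=[3, 2], batch_first=True)
# packed.data:        [1, 4, 2, 5, 3]  (t=0 for both, t=1 for both, t=2 for the first)
# packed.batch_sizes: [2, 2, 1]
```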
torch.nn.utils.rnn.pack_sequence(sequences, enforce_sorted=True) [source]
Packs a list of variable length Tensors sequences should be a list of Tensors of size L x *, where L is the length of a sequence and * is any number of trailing dimensions, including zero. For unsorted sequences, use enforce_sorted = False. If ... | torch.generated.torch.nn.utils.rnn.pack_sequence#torch.nn.utils.rnn.pack_sequence |
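For illustration, the same packing starting from a plain list of unpadded Tensors (already sorted by decreasing length, so the default enforce_sorted=True applies):

```python
import torch
from torch.nn.utils.rnn import pack_sequence

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5])
packed = pack_sequence([a, b])
# Equivalent to padding then pack_padded_sequence:
# packed.data = [1, 4, 2, 5, 3], packed.batch_sizes = [2, 2, 1]
```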
torch.nn.utils.rnn.pad_packed_sequence(sequence, batch_first=False, padding_value=0.0, total_length=None) [source]
Pads a packed batch of variable length sequences. It is an inverse operation to pack_padded_sequence(). The returned Tensor’s data will be of size T x B x *, where T is the length of the longest sequence... | torch.generated.torch.nn.utils.rnn.pad_packed_sequence#torch.nn.utils.rnn.pad_packed_sequence |
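A sketch of the round trip (values chosen for illustration): packing a list of sequences and then unpacking recovers the padded batch plus the original lengths:

```python
import torch
from torch.nn.utils.rnn import pack_sequence, pad_packed_sequence

packed = pack_sequence([torch.tensor([1, 2, 3]), torch.tensor([4, 5])])
padded, lengths = pad_packed_sequence(packed, batch_first=True)
# padded:  [[1, 2, 3], [4, 5, 0]]   (shorter sequence padded with padding_value)
# lengths: [3, 2]
```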
torch.nn.utils.rnn.pad_sequence(sequences, batch_first=False, padding_value=0.0) [source]
Pad a list of variable length Tensors with padding_value pad_sequence stacks a list of Tensors along a new dimension, and pads them to equal length. For example, if the input is list of sequences with size L x * and if batch_fir... | torch.generated.torch.nn.utils.rnn.pad_sequence#torch.nn.utils.rnn.pad_sequence |
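A minimal example (values arbitrary): stacking two sequences of lengths 3 and 2 into one padded batch:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

seqs = [torch.tensor([1, 2, 3]), torch.tensor([4, 5])]
out = pad_sequence(seqs, batch_first=True, padding_value=0)
# out has shape (2, 3); the shorter sequence is padded on the right with 0.
```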
torch.nn.utils.spectral_norm(module, name='weight', n_power_iterations=1, eps=1e-12, dim=None) [source]
Applies spectral normalization to a parameter in the given module. \mathbf{W}_{SN} = \dfrac{\mathbf{W}}{\sigma(\mathbf{W})}, \quad \sigma(\mathbf{W}) = \max_{\mathbf{h}: \mathbf{h} \ne 0} \dfrac{\|\mathbf{W} \mathbf{h}\|_2}{\|\mathbf{h}\|_2}... | torch.generated.torch.nn.utils.spectral_norm#torch.nn.utils.spectral_norm
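A sketch of the effect (layer sizes arbitrary): after wrapping, the effective weight is weight_orig divided by an estimate of its largest singular value, so that estimate's σ is approximately 1. The estimate comes from power iteration and is refined on every forward pass in training mode, so it is only approximate:

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

torch.manual_seed(0)
m = spectral_norm(nn.Linear(20, 40))
x = torch.randn(2, 20)
# Each forward in training mode runs one more power-iteration step,
# tightening the singular-value estimate.
for _ in range(50):
    _ = m(x)
sigma = torch.linalg.svdvals(m.weight.detach())[0].item()
# sigma is close to 1 (approximate, since power iteration is an estimate)
```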
torch.nn.utils.vector_to_parameters(vec, parameters) [source]
Converts one vector to the parameters. Parameters
vec (Tensor) – a single vector representing the parameters of a model.
parameters (Iterable[Tensor]) – an iterator of Tensors that are the parameters of a model. | torch.generated.torch.nn.utils.vector_to_parameters#torch.nn.utils.vector_to_parameters |
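For illustration, a round trip with its counterpart parameters_to_vector: flatten a model's parameters into one vector, then write a modified vector back into the model (layer size arbitrary):

```python
import torch
import torch.nn as nn
from torch.nn.utils import parameters_to_vector, vector_to_parameters

m = nn.Linear(3, 2)  # 2*3 weight entries + 2 bias entries = 8 parameters
vec = parameters_to_vector(m.parameters())
# Overwrite all parameters in place from a vector of the same length.
vector_to_parameters(torch.zeros_like(vec), m.parameters())
```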
torch.nn.utils.weight_norm(module, name='weight', dim=0) [source]
Applies weight normalization to a parameter in the given module. \mathbf{w} = g \dfrac{\mathbf{v}}{\|\mathbf{v}\|}
Weight normalization is a reparameterization that decouples the magnitude of a weight tensor from its direction. This replaces ... | torch.generated.torch.nn.utils.weight_norm#torch.nn.utils.weight_norm |
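A small sketch of the reparameterization (layer sizes arbitrary): weight is replaced by a magnitude parameter weight_g and a direction parameter weight_v, and along dim=0 each row of the recomputed weight has norm equal to weight_g:

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

m = weight_norm(nn.Linear(20, 40))
# weight_g holds per-output-row magnitudes, weight_v the unnormalized directions.
g_shape = tuple(m.weight_g.shape)  # (40, 1)
v_shape = tuple(m.weight_v.shape)  # (40, 20)
row_norms = m.weight.detach().norm(dim=1, keepdim=True)
```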
class torch.nn.ZeroPad2d(padding) [source]
Pads the input tensor boundaries with zero. For N-dimensional padding, use torch.nn.functional.pad(). Parameters
padding (int, tuple) – the size of the padding. If int, uses the same padding in all boundaries. If a 4-tuple, uses (padding_left, paddi... | torch.generated.torch.nn.zeropad2d#torch.nn.ZeroPad2d
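A minimal example of the 4-tuple form (sizes arbitrary): padding a 3x3 input by (left=1, right=1, top=2, bottom=0) yields a 5x5 output with zeros in the padded region:

```python
import torch
import torch.nn as nn

pad = nn.ZeroPad2d((1, 1, 2, 0))  # (left, right, top, bottom)
x = torch.ones(1, 1, 3, 3)
y = pad(x)
# Height: 3 + 2 + 0 = 5; width: 3 + 1 + 1 = 5.
```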