doc_content | doc_id
|---|---|
torch.isclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) → Tensor
Returns a new tensor with boolean elements representing if each element of input is “close” to the corresponding element of other. Closeness is defined as: |input − other| ≤ atol + rtol × |other|... | torch.generated.torch.isclose#torch.isclose
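The closeness rule above can be sketched in plain Python for the scalar case (`isclose_ref` is a hypothetical helper for illustration, not the tensor implementation):

```python
import math

def isclose_ref(a, b, rtol=1e-5, atol=1e-8, equal_nan=False):
    """Scalar sketch of the documented rule: |a - b| <= atol + rtol * |b|."""
    if math.isnan(a) or math.isnan(b):
        # NaNs compare close only when equal_nan=True and both values are NaN
        return equal_nan and math.isnan(a) and math.isnan(b)
    return abs(a - b) <= atol + rtol * abs(b)

print(isclose_ref(1.0, 1.0 + 1e-9))  # well within tolerance
print(isclose_ref(1.0, 1.1))         # far outside tolerance
```

Note the asymmetry: the tolerance scales with |other|, so argument order matters near the threshold.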
torch.isfinite(input) → Tensor
Returns a new tensor with boolean elements representing if each element is finite or not. Real values are finite when they are not NaN, negative infinity, or infinity. Complex values are finite when both their real and imaginary parts are finite. Args:
input (Tensor): the input tensor... | torch.generated.torch.isfinite#torch.isfinite |
torch.isinf(input) → Tensor
Tests if each element of input is infinite (positive or negative infinity) or not. Note Complex values are infinite when their real or imaginary part is infinite. Args:
{input} Returns:
A boolean tensor that is True where input is infinite and False elsewhere Example: >>> torch.isin... | torch.generated.torch.isinf#torch.isinf |
torch.isnan(input) → Tensor
Returns a new tensor with boolean elements representing if each element of input is NaN or not. Complex values are considered NaN when their real and/or imaginary part is NaN. Parameters
input (Tensor) – the input tensor. Returns
A boolean tensor that is True where input is NaN ... | torch.generated.torch.isnan#torch.isnan |
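The complex-NaN rule stated above can be checked per scalar with the stdlib (`complex_isnan` is a hypothetical name used only for this sketch):

```python
import math

def complex_isnan(z: complex) -> bool:
    # Per the rule above: a complex value is NaN if its real
    # and/or imaginary part is NaN.
    return math.isnan(z.real) or math.isnan(z.imag)

print(complex_isnan(complex(float('nan'), 0.0)))  # True
print(complex_isnan(1 + 2j))                      # False
```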
torch.isneginf(input, *, out=None) → Tensor
Tests if each element of input is negative infinity or not. Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example::
>>> a = torch.tensor([-float('inf'), float('inf'), 1.2])
>>> torch.isneginf(a)
tensor([ ... | torch.generated.torch.isneginf#torch.isneginf |
torch.isposinf(input, *, out=None) → Tensor
Tests if each element of input is positive infinity or not. Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example::
>>> a = torch.tensor([-float('inf'), float('inf'), 1.2])
>>> torch.isposinf(a)
tensor([F... | torch.generated.torch.isposinf#torch.isposinf |
torch.isreal(input) → Tensor
Returns a new tensor with boolean elements representing if each element of input is real-valued or not. All real-valued types are considered real. Complex values are considered real when their imaginary part is 0. Parameters
input (Tensor) – the input tensor. Returns
A boolean tensor ... | torch.generated.torch.isreal#torch.isreal |
torch.istft(input, n_fft, hop_length=None, win_length=None, window=None, center=True, normalized=False, onesided=None, length=None, return_complex=False) [source]
Inverse short time Fourier Transform. This is expected to be the inverse of stft(). It has the same parameters (+ additional optional parameter of length) ... | torch.generated.torch.istft#torch.istft |
torch.is_complex(input) -> (bool)
Returns True if the data type of input is a complex data type, i.e., one of torch.complex64 and torch.complex128. Parameters
input (Tensor) – the input tensor. | torch.generated.torch.is_complex#torch.is_complex |
torch.is_floating_point(input) -> (bool)
Returns True if the data type of input is a floating point data type i.e., one of torch.float64, torch.float32, torch.float16, and torch.bfloat16. Parameters
input (Tensor) – the input tensor. | torch.generated.torch.is_floating_point#torch.is_floating_point |
torch.is_nonzero(input) -> (bool)
Returns True if the input is a single-element tensor that is not equal to zero after type conversions, i.e., not equal to torch.tensor([0.]), torch.tensor([0]), or torch.tensor([False]). Throws a RuntimeError if torch.numel() != 1 (even in the case of sparse tensors). Parameters
input... | torch.generated.torch.is_nonzero#torch.is_nonzero |
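The contract above (exactly one element, then truthiness after conversion) can be sketched in plain Python, with a list standing in for a tensor (`is_nonzero_ref` is a hypothetical helper, and the error message is illustrative, not PyTorch's exact wording):

```python
def is_nonzero_ref(values):
    # Sketch of the documented contract: exactly one element is required,
    # and the result is that element's truthiness after conversion.
    if len(values) != 1:
        raise RuntimeError("is_nonzero expects exactly one element")
    return bool(values[0])

print(is_nonzero_ref([0.0]))  # False
print(is_nonzero_ref([3]))    # True
```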
torch.is_storage(obj) [source]
Returns True if obj is a PyTorch storage object. Parameters
obj (Object) – Object to test | torch.generated.torch.is_storage#torch.is_storage |
torch.is_tensor(obj) [source]
Returns True if obj is a PyTorch tensor. Note that this function is simply doing isinstance(obj, Tensor). Using that isinstance check is better for typechecking with mypy, and more explicit - so it’s recommended to use that instead of is_tensor. Parameters
obj (Object) – Object to test | torch.generated.torch.is_tensor#torch.is_tensor |
torch.jit.export(fn) [source]
This decorator indicates that a method on an nn.Module is used as an entry point into a ScriptModule and should be compiled. forward implicitly is assumed to be an entry point, so it does not need this decorator. Functions and methods called from forward are compiled as they are seen by ... | torch.jit#torch.jit.export |
torch.jit.fork(func, *args, **kwargs) [source]
Creates an asynchronous task executing func and a reference to the value of the result of this execution. fork will return immediately, so the return value of func may not have been computed yet. To force completion of the task and access the return value invoke torch.ji... | torch.generated.torch.jit.fork#torch.jit.fork |
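The fork-then-wait pattern described above mirrors the stdlib `concurrent.futures` API; as a rough analogy only (plain Python threads, not TorchScript, and without fork's compilation semantics):

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(x):
    return x * x

# fork-like: submit returns immediately with a future handle...
with ThreadPoolExecutor() as pool:
    fut = pool.submit(slow_square, 7)
    # ...wait-like: .result() blocks until the value has been computed
    print(fut.result())  # 49
```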
torch.jit.freeze(mod, preserved_attrs=None, optimize_numerics=True) [source]
Freezing a ScriptModule will clone it and attempt to inline the cloned module’s submodules, parameters, and attributes as constants in the TorchScript IR Graph. By default, forward will be preserved, as well as attributes & methods specified... | torch.generated.torch.jit.freeze#torch.jit.freeze |
torch.jit.ignore(drop=False, **kwargs) [source]
This decorator indicates to the compiler that a function or method should be ignored and left as a Python function. This allows you to leave code in your model that is not yet TorchScript compatible. If called from TorchScript, ignored functions will dispatch the call t... | torch.generated.torch.jit.ignore#torch.jit.ignore |
torch.jit.isinstance(obj, target_type) [source]
This function provides for container type refinement in TorchScript. It can refine parameterized containers of the List, Dict, Tuple, and Optional types. E.g. List[str], Dict[str, List[torch.Tensor]], Optional[Tuple[int,str,int]]. It can also refine basic types such as ... | torch.generated.torch.jit.isinstance#torch.jit.isinstance
torch.jit.is_scripting() [source]
Function that returns True when in compilation and False otherwise. This is especially useful with the @unused decorator to leave code in your model that is not yet TorchScript compatible. Example: import torch
@torch.jit.unused
def unsupported_linear_op(x):
return x
def li... | torch.jit_language_reference#torch.jit.is_scripting |
torch.jit.load(f, map_location=None, _extra_files=None) [source]
Load a ScriptModule or ScriptFunction previously saved with torch.jit.save All previously saved modules, no matter their device, are first loaded onto CPU, and then are moved to the devices they were saved from. If this fails (e.g. because the run time ... | torch.generated.torch.jit.load#torch.jit.load |
torch.jit.save(m, f, _extra_files=None) [source]
Save an offline version of this module for use in a separate process. The saved module serializes all of the methods, submodules, parameters, and attributes of this module. It can be loaded into the C++ API using torch::jit::load(filename) or into the Python API with t... | torch.generated.torch.jit.save#torch.jit.save |
torch.jit.script(obj, optimize=None, _frames_up=0, _rcb=None) [source]
Scripting a function or nn.Module will inspect the source code, compile it as TorchScript code using the TorchScript compiler, and return a ScriptModule or ScriptFunction. TorchScript itself is a subset of the Python language, so not all features ... | torch.generated.torch.jit.script#torch.jit.script |
class torch.jit.ScriptFunction
Functionally equivalent to a ScriptModule, but represents a single function and does not have any attributes or Parameters.
get_debug_state(self: torch._C.ScriptFunction) → torch._C.GraphExecutorState
save(self: torch._C.ScriptFunction, filename: str, _extra_files: Dict[str, str] ... | torch.generated.torch.jit.scriptfunction#torch.jit.ScriptFunction |
get_debug_state(self: torch._C.ScriptFunction) → torch._C.GraphExecutorState | torch.generated.torch.jit.scriptfunction#torch.jit.ScriptFunction.get_debug_state |
save(self: torch._C.ScriptFunction, filename: str, _extra_files: Dict[str, str] = {}) → None | torch.generated.torch.jit.scriptfunction#torch.jit.ScriptFunction.save |
save_to_buffer(self: torch._C.ScriptFunction, _extra_files: Dict[str, str] = {}) → bytes | torch.generated.torch.jit.scriptfunction#torch.jit.ScriptFunction.save_to_buffer |
class torch.jit.ScriptModule [source]
A wrapper around C++ torch::jit::Module. ScriptModules contain methods, attributes, parameters, and constants. These can be accessed the same as on a normal nn.Module.
add_module(name, module)
Adds a child module to the current module. The module can be accessed as an attribu... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule |
add_module(name, module)
Adds a child module to the current module. The module can be accessed as an attribute using the given name. Parameters
name (string) – name of the child module. The child module can be accessed from this module using the given name
module (Module) – child module to be added to the module... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.add_module |
apply(fn)
Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init). Parameters
fn (Module -> None) – function to be applied to each submodule Returns
self Return type
Module Example: >>> @torch.... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.apply |
bfloat16()
Casts all floating point parameters and buffers to bfloat16 datatype. Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.bfloat16 |
buffers(recurse=True)
Returns an iterator over module buffers. Parameters
recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Yields
torch.Tensor – module buffer Example: >>> for buf in model.buffers():
>>> p... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.buffers |
children()
Returns an iterator over immediate children modules. Yields
Module – a child module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.children |
property code
Returns a pretty-printed representation (as valid Python syntax) of the internal graph for the forward method. See Inspecting Code for details. | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.code |
property code_with_constants
Returns a tuple of: [0] a pretty-printed representation (as valid Python syntax) of the internal graph for the forward method. See code. [1] a ConstMap following the CONSTANT.cN format of the output in [0]. The indices in the [0] output are keys to the underlying constant’s values. See In... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.code_with_constants |
cpu()
Moves all model parameters and buffers to the CPU. Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.cpu |
cuda(device=None)
Moves all model parameters and buffers to the GPU. This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized. Parameters
device (int, optional) – if specified, all parameters will b... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.cuda |
double()
Casts all floating point parameters and buffers to double datatype. Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.double |
eval()
Sets the module in evaluation mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. This is equivalent to self.train(False). Returns
self Return type
Modul... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.eval |
extra_repr()
Set the extra representation of the module To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable. | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.extra_repr |
float()
Casts all floating point parameters and buffers to float datatype. Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.float |
property graph
Returns a string representation of the internal graph for the forward method. See Interpreting Graphs for details. | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.graph |
half()
Casts all floating point parameters and buffers to half datatype. Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.half |
property inlined_graph
Returns a string representation of the internal graph for the forward method. This graph will be preprocessed to inline all function and method calls. See Interpreting Graphs for details. | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.inlined_graph |
load_state_dict(state_dict, strict=True)
Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function. Parameters
state_dict (dict) – a dict containing parameters and p... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.load_state_dict |
modules()
Returns an iterator over all modules in the network. Yields
Module – a module in the network Note Duplicate modules are returned only once. In the following example, l will be returned only once. Example: >>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.modules |
named_buffers(prefix='', recurse=True)
Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself. Parameters
prefix (str) – prefix to prepend to all buffer names.
recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields on... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.named_buffers |
named_children()
Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself. Yields
(string, Module) – Tuple containing a name and child module Example: >>> for name, module in model.named_children():
>>> if name in ['conv4', 'conv5']:
>>> pr... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.named_children |
named_modules(memo=None, prefix='')
Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself. Yields
(string, Module) – Tuple of name and module Note Duplicate modules are returned only once. In the following example, l will be returned only once. Ex... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.named_modules |
named_parameters(prefix='', recurse=True)
Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself. Parameters
prefix (str) – prefix to prepend to all parameter names.
recurse (bool) – if True, then yields parameters of this module and all submodules. Ot... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.named_parameters |
parameters(recurse=True)
Returns an iterator over module parameters. This is typically passed to an optimizer. Parameters
recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module. Yields
Parameter – module paramete... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.parameters |
register_backward_hook(hook)
Registers a backward hook on the module. This function is deprecated in favor of nn.Module.register_full_backward_hook() and the behavior of this function will change in future versions. Returns
a handle that can be used to remove the added hook by calling handle.remove() Return type
... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.register_backward_hook |
register_buffer(name, tensor, persistent=True)
Adds a buffer to the module. This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will be saved... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.register_buffer
register_forward_hook(hook)
Registers a forward hook on the module. The hook will be called every time after forward() has computed an output. It should have the following signature: hook(module, input, output) -> None or modified output
The input contains only the positional arguments given to the module. Keyword a... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.register_forward_hook |
register_forward_pre_hook(hook)
Registers a forward pre-hook on the module. The hook will be called every time before forward() is invoked. It should have the following signature: hook(module, input) -> None or modified input
The input contains only the positional arguments given to the module. Keyword arguments won... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.register_forward_pre_hook |
register_full_backward_hook(hook)
Registers a backward hook on the module. The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature: hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The grad_input and grad_output are tuple... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.register_full_backward_hook |
register_parameter(name, param)
Adds a parameter to the module. The parameter can be accessed as an attribute using given name. Parameters
name (string) – name of the parameter. The parameter can be accessed from this module using the given name
param (Parameter) – parameter to be added to the module. | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.register_parameter |
requires_grad_(requires_grad=True)
Change if autograd should record operations on parameters in this module. This method sets the parameters’ requires_grad attributes in-place. This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training). Parame... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.requires_grad_ |
save(f, _extra_files={})
See torch.jit.save for details. | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.save |
state_dict(destination=None, prefix='', keep_vars=False)
Returns a dictionary containing a whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Returns
a dictionary containing a whole state of the module Return ty... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.state_dict |
to(*args, **kwargs)
Moves and/or casts the parameters and buffers. This can be called as
to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts fl... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.to |
train(mode=True)
Sets the module in training mode. This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc. Parameters
mode (bool) – whether to set training mode (True) or eva... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.train |
type(dst_type)
Casts all parameters and buffers to dst_type. Parameters
dst_type (type or string) – the desired type Returns
self Return type
Module | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.type |
xpu(device=None)
Moves all model parameters and buffers to the XPU. This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on XPU while being optimized. Parameters
device (int, optional) – if specified, all parameters will be... | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.xpu |
zero_grad(set_to_none=False)
Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context. Parameters
set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details. | torch.generated.torch.jit.scriptmodule#torch.jit.ScriptModule.zero_grad |
torch.jit.script_if_tracing(fn) [source]
Compiles fn when it is first called during tracing. torch.jit.script has a non-negligible start up time when it is first called due to lazy-initializations of many compiler builtins. Therefore you should not use it in library code. However, you may want to have parts of your l... | torch.generated.torch.jit.script_if_tracing#torch.jit.script_if_tracing |
torch.jit.trace(func, example_inputs, optimize=None, check_trace=True, check_inputs=None, check_tolerance=1e-05, strict=True, _force_outplace=False, _module_class=None, _compilation_unit=<torch.jit.CompilationUnit object>) [source]
Trace a function and return an executable or ScriptFunction that will be optimized usi... | torch.generated.torch.jit.trace#torch.jit.trace |
torch.jit.trace_module(mod, inputs, optimize=None, check_trace=True, check_inputs=None, check_tolerance=1e-05, strict=True, _force_outplace=False, _module_class=None, _compilation_unit=<torch.jit.CompilationUnit object>) [source]
Trace a module and return an executable ScriptModule that will be optimized using just-i... | torch.generated.torch.jit.trace_module#torch.jit.trace_module |
torch.jit.unused(fn) [source]
This decorator indicates to the compiler that a function or method should be ignored and replaced with the raising of an exception. This allows you to leave code in your model that is not yet TorchScript compatible and still export your model. Example (using @torch.jit.unused on a method... | torch.generated.torch.jit.unused#torch.jit.unused |
torch.jit.wait(future) [source]
Forces completion of a torch.jit.Future[T] asynchronous task, returning the result of the task. See fork() for docs and examples. Parameters
future (torch.jit.Future[T]) – an asynchronous task reference, created through torch.jit.fork Returns
the return value of the completed ... | torch.generated.torch.jit.wait#torch.jit.wait
torch.kaiser_window(window_length, periodic=True, beta=12.0, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Computes the Kaiser window with window length window_length and shape parameter beta. Let I_0 be the zeroth order modified Bessel function of the first kind (see torch.i0()) and... | torch.generated.torch.kaiser_window#torch.kaiser_window |
torch.kron(input, other, *, out=None) → Tensor
Computes the Kronecker product, denoted by ⊗, of input and other. If input is a (a_0 × a_1 × ⋯ × a_n) tensor and other is a (b_0 × b_1 × ⋯ × b_n) tensor, the result will be a (a_0*b_0 × a_1*b_1 × ⋯ × a_n*b_n)... | torch.generated.torch.kron#torch.kron
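The 2-D case of the Kronecker product can be sketched in plain Python with nested lists standing in for tensors (`kron2d` is a hypothetical helper for illustration):

```python
def kron2d(A, B):
    # Plain-Python sketch of the 2-D Kronecker product: entry (i, j)
    # of the result is A[i // p][j // q] * B[i % p][j % q], so an
    # (m x n) input paired with a (p x q) input yields an (m*p) x (n*q) result.
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)]
            for i in range(len(A) * p)]

print(kron2d([[1, 2]], [[0, 1]]))  # [[0, 1, 0, 2]]
```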
torch.kthvalue(input, k, dim=None, keepdim=False, *, out=None) -> (Tensor, LongTensor)
Returns a namedtuple (values, indices) where values is the k th smallest element of each row of the input tensor in the given dimension dim. And indices is the index location of each element found. If dim is not given, the last dim... | torch.generated.torch.kthvalue#torch.kthvalue |
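The per-row semantics can be sketched in plain Python with a list standing in for one row of the input (`kthvalue_ref` is a hypothetical helper; for rows with duplicate values, the index PyTorch returns is not specified, so this sketch simply takes the first match):

```python
def kthvalue_ref(row, k):
    # Sketch of the documented semantics for a single row: the k-th
    # smallest value (1-based k) and its index in the original row.
    value = sorted(row)[k - 1]
    return value, row.index(value)

print(kthvalue_ref([3.0, 1.0, 2.0], 2))  # (2.0, 2)
```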
torch.lcm(input, other, *, out=None) → Tensor
Computes the element-wise least common multiple (LCM) of input and other. Both input and other must have integer types. Note This defines lcm(0, 0) = 0 and lcm(0, a) = 0. Parameters
input (Tensor) – the input tensor.
other (Tensor) – the secon... | torch.generated.torch.lcm#torch.lcm |
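The element-wise rule, including the zero conventions noted above, can be sketched per scalar with the stdlib (`lcm_ref` is a hypothetical name):

```python
import math

def lcm_ref(a, b):
    # Sketch of the element-wise rule, with the documented conventions
    # lcm(0, 0) = 0 and lcm(0, a) = 0 handled explicitly
    # (gcd(0, 0) == 0 would otherwise divide by zero).
    if a == 0 or b == 0:
        return 0
    return abs(a * b) // math.gcd(a, b)

print(lcm_ref(4, 6))  # 12
print(lcm_ref(0, 5))  # 0
```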
torch.ldexp(input, other, *, out=None) → Tensor
Multiplies input by two raised to the power of other, element-wise: out_i = input_i * 2^{other_i}
Typically this function is used to construct floating point numbers by multiplying mantissas in input with integral powers of two created from the exponents ... | torch.generated.torch.ldexp#torch.ldexp |
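The stdlib exposes the same scalar primitive, so the element-wise rule can be checked per value with `math.ldexp(m, e) = m * 2**e`:

```python
import math

# Scalar counterpart of the element-wise rule: out = input * 2**other.
print(math.ldexp(0.5, 3))   # 0.5 * 2**3 = 4.0
print(math.ldexp(1.0, -1))  # 0.5
```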
torch.le(input, other, *, out=None) → Tensor
Computes input ≤ other element-wise. The second argument can be a number or a tensor whose shape is broadcastable with the first argument. Parameters
input (Tensor) – the tensor to compare
other (Tensor or Scalar) – the tensor or value to ... | torch.generated.torch.le#torch.le |
torch.lerp(input, end, weight, *, out=None)
Does a linear interpolation of two tensors start (given by input) and end based on a scalar or tensor weight and returns the resulting out tensor. out_i = start_i + weight_i × (end_i − start_i)
The ... | torch.generated.torch.lerp#torch.lerp |
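The interpolation formula can be sketched per scalar in plain Python (`lerp_ref` is a hypothetical name):

```python
def lerp_ref(start, end, weight):
    # Scalar sketch of the documented formula:
    # out = start + weight * (end - start)
    return start + weight * (end - start)

print(lerp_ref(0.0, 10.0, 0.25))  # 2.5
print(lerp_ref(1.0, 3.0, 0.5))    # 2.0
```

A weight of 0 returns start and a weight of 1 returns end; weights outside [0, 1] extrapolate along the same line.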
torch.less(input, other, *, out=None) → Tensor
Alias for torch.lt(). | torch.generated.torch.less#torch.less |
torch.less_equal(input, other, *, out=None) → Tensor
Alias for torch.le(). | torch.generated.torch.less_equal#torch.less_equal |
torch.lgamma(input, *, out=None) → Tensor
Computes the logarithm of the gamma function on input. out_i = log Γ(input_i)
Parameters
input (Tensor) – the input tensor.
out (Tensor, optional) – the output tensor. Example: >>> a = torch.arange(0.5, 2, 0.5)
>>> torch.lg... | torch.generated.torch.lgamma#torch.lgamma |
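The stdlib provides the scalar counterpart of this element-wise rule, `math.lgamma`, which makes the formula easy to verify by hand (recall Γ(n) = (n−1)! for positive integers):

```python
import math

# Scalar counterpart of out = log Γ(input).
print(math.lgamma(1.0))  # log Γ(1) = log 0! = 0.0
print(math.lgamma(5.0))  # log Γ(5) = log 4! = log 24
```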
torch.linalg Common linear algebra operations. This module is in BETA. New functions are still being added, and some functions may change in future PyTorch releases. See the documentation of each function for details. Functions
torch.linalg.cholesky(input, *, out=None) → Tensor
Computes the Cholesky decomposition o... | torch.linalg |
torch.linalg.cholesky(input, *, out=None) → Tensor
Computes the Cholesky decomposition of a Hermitian (or symmetric for real-valued matrices) positive-definite matrix or the Cholesky decompositions for a batch of such matrices. Each decomposition has the form: input = LL^H
where L is a lower-trian... | torch.linalg#torch.linalg.cholesky
torch.linalg.cond(input, p=None, *, out=None) → Tensor
Computes the condition number of a matrix input, or of each matrix in a batched input, using the matrix norm defined by p. For norms {‘fro’, ‘nuc’, inf, -inf, 1, -1} this is defined as the matrix norm of input times the matrix norm of the inverse of input compute... | torch.linalg#torch.linalg.cond |
torch.linalg.det(input) → Tensor
Computes the determinant of a square matrix input, or of each square matrix in a batched input. This function supports float, double, cfloat and cdouble dtypes. Note When given inputs on a CUDA device, this function synchronizes that device with the CPU. Note The determinant is com... | torch.linalg#torch.linalg.det |
torch.linalg.eigh(input, UPLO='L', *, out=None) -> (Tensor, Tensor)
Computes the eigenvalues and eigenvectors of a complex Hermitian (or real symmetric) matrix input, or of each such matrix in a batched input. For a single matrix input, the tensor of eigenvalues w and the tensor of eigenvectors V decompose the input ... | torch.linalg#torch.linalg.eigh |
torch.linalg.eigvalsh(input, UPLO='L', *, out=None) → Tensor
Computes the eigenvalues of a complex Hermitian (or real symmetric) matrix input, or of each such matrix in a batched input. The eigenvalues are returned in ascending order. Since the matrix or matrices in input are assumed to be Hermitian, the imaginary pa... | torch.linalg#torch.linalg.eigvalsh |
torch.linalg.inv(input, *, out=None) → Tensor
Computes the multiplicative inverse matrix of a square matrix input, or of each square matrix in a batched input. The result satisfies the relation: matmul(inv(input),input) = matmul(input,inv(input)) = eye(input.shape[0]).expand_as(input). Supports input of float, double... | torch.linalg#torch.linalg.inv |
torch.linalg.matrix_rank(input, tol=None, hermitian=False, *, out=None) → Tensor
Computes the numerical rank of a matrix input, or of each matrix in a batched input. The matrix rank is computed as the number of singular values (or absolute eigenvalues when hermitian is True) that are greater than the specified tol th... | torch.linalg#torch.linalg.matrix_rank |
torch.linalg.norm(input, ord=None, dim=None, keepdim=False, *, out=None, dtype=None) → Tensor
Returns the matrix norm or vector norm of a given tensor. This function can calculate one of eight different types of matrix norms, or one of an infinite number of vector norms, depending on both the number of reduction dime... | torch.linalg#torch.linalg.norm |
torch.linalg.pinv(input, rcond=1e-15, hermitian=False, *, out=None) → Tensor
Computes the pseudo-inverse (also known as the Moore-Penrose inverse) of a matrix input, or of each matrix in a batched input. The singular values (or the absolute values of the eigenvalues when hermitian is True) that are below the specifie... | torch.linalg#torch.linalg.pinv |
torch.linalg.qr(input, mode='reduced', *, out=None) -> (Tensor, Tensor)
Computes the QR decomposition of a matrix or a batch of matrices input, and returns a namedtuple (Q, R) of tensors such that input=QR\text{input} = Q R with QQ being an orthogonal matrix or batch of orthogonal matrices and RR being an upper tr... | torch.linalg#torch.linalg.qr |
torch.linalg.slogdet(input, *, out=None) -> (Tensor, Tensor)
Calculates the sign and natural logarithm of the absolute value of a square matrix’s determinant, or of the absolute values of the determinants of a batch of square matrices input. The determinant can be computed with sign * exp(logabsdet). Supports input o... | torch.linalg#torch.linalg.slogdet |
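The relation det = sign * exp(logabsdet) can be sketched in plain Python for a single 2×2 matrix (`slogdet_2x2` is a hypothetical helper; the singular-case convention of returning sign 0 and logabsdet −inf is an assumption of this sketch):

```python
import math

def slogdet_2x2(a, b, c, d):
    # Sketch for one 2x2 matrix [[a, b], [c, d]]: return (sign, logabsdet)
    # so that det = sign * exp(logabsdet).
    det = a * d - b * c
    if det == 0:
        return 0.0, float('-inf')
    sign = 1.0 if det > 0 else -1.0
    return sign, math.log(abs(det))

sign, logabsdet = slogdet_2x2(1.0, 2.0, 3.0, 4.0)
print(sign * math.exp(logabsdet))  # ~ -2.0, the determinant
```

Working in log space this way avoids overflow or underflow when the determinant's magnitude is extreme.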
torch.linalg.solve(input, other, *, out=None) → Tensor
Computes the solution x to the matrix equation matmul(input, x) = other with a square matrix, or batches of such matrices, input and one or more right-hand side vectors other. If input is batched and other is not, then other is broadcast to have the same batch di... | torch.linalg#torch.linalg.solve |
torch.linalg.svd(input, full_matrices=True, compute_uv=True, *, out=None) -> (Tensor, Tensor, Tensor)
Computes the singular value decomposition of either a matrix or batch of matrices input. The singular value decomposition is represented as a namedtuple (U, S, Vh), such that input = U @ diag(S) @ ... | torch.linalg#torch.linalg.svd
torch.linalg.tensorinv(input, ind=2, *, out=None) → Tensor
Computes a tensor input_inv such that tensordot(input_inv, input, ind) == I_n (inverse tensor equation), where I_n is the n-dimensional identity tensor and n is equal to input.ndim. The resulting tensor input_inv has shape equal to input.shape[ind:] + input.s... | torch.linalg#torch.linalg.tensorinv |
torch.linalg.tensorsolve(input, other, dims=None, *, out=None) → Tensor
Computes a tensor x such that tensordot(input, x, dims=x.ndim) = other. The resulting tensor x has the same shape as input[other.ndim:]. Supports real-valued and complex-valued inputs. Note If input does not satisfy the requirement prod(input.sh... | torch.linalg#torch.linalg.tensorsolve |
torch.linspace(start, end, steps, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Creates a one-dimensional tensor of size steps whose values are evenly spaced from start to end, inclusive. That is, the values are: (start, start + (end − start)/(steps − 1), …, start + (steps − 2) × (end − start)/(steps − 1), end)... | torch.generated.torch.linspace#torch.linspace
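The spacing rule can be sketched in plain Python with a list standing in for the output tensor (`linspace_ref` is a hypothetical helper; the steps == 1 branch is an assumption to avoid dividing by zero):

```python
def linspace_ref(start, end, steps):
    # Sketch of the documented spacing: steps values from start to end
    # inclusive, each a distance (end - start) / (steps - 1) apart.
    if steps == 1:
        return [start]
    step = (end - start) / (steps - 1)
    return [start + i * step for i in range(steps)]

print(linspace_ref(0.0, 1.0, 5))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```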
torch.load(f, map_location=None, pickle_module=pickle, **pickle_load_args) [source]
Loads an object saved with torch.save() from a file. torch.load() uses Python’s unpickling facilities but treats storages, which underlie tensors, specially. They are fi... | torch.generated.torch.load#torch.load |
torch.lobpcg(A, k=None, B=None, X=None, n=None, iK=None, niter=None, tol=None, largest=None, method=None, tracker=None, ortho_iparams=None, ortho_fparams=None, ortho_bparams=None) [source]
Find the k largest (or smallest) eigenvalues and the corresponding eigenvectors of a symmetric positive definite generalized eigen... | torch.generated.torch.lobpcg#torch.lobpcg
torch.log(input, *, out=None) → Tensor
Returns a new tensor with the natural logarithm of the elements of input. y_i = log_e(x_i)
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(5)
>>> a
tensor([-0.71... | torch.generated.torch.log#torch.log |
torch.log10(input, *, out=None) → Tensor
Returns a new tensor with the logarithm to the base 10 of the elements of input. y_i = log_10(x_i)
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.rand(5)
>>> a
ten... | torch.generated.torch.log10#torch.log10 |