doc_content | doc_id |
|---|---|
mode(dim=None, keepdim=False) -> (Tensor, LongTensor)
See torch.mode() | torch.tensors#torch.Tensor.mode |
moveaxis(source, destination) → Tensor
See torch.moveaxis() | torch.tensors#torch.Tensor.moveaxis |
movedim(source, destination) → Tensor
See torch.movedim() | torch.tensors#torch.Tensor.movedim |
msort() → Tensor
See torch.msort() | torch.tensors#torch.Tensor.msort |
mul(value) → Tensor
See torch.mul(). | torch.tensors#torch.Tensor.mul |
multinomial(num_samples, replacement=False, *, generator=None) → Tensor
See torch.multinomial() | torch.tensors#torch.Tensor.multinomial |
multiply(value) → Tensor
See torch.multiply(). | torch.tensors#torch.Tensor.multiply |
multiply_(value) → Tensor
In-place version of multiply(). | torch.tensors#torch.Tensor.multiply_ |
mul_(value) → Tensor
In-place version of mul(). | torch.tensors#torch.Tensor.mul_ |
mv(vec) → Tensor
See torch.mv() | torch.tensors#torch.Tensor.mv |
mvlgamma(p) → Tensor
See torch.mvlgamma() | torch.tensors#torch.Tensor.mvlgamma |
mvlgamma_(p) → Tensor
In-place version of mvlgamma() | torch.tensors#torch.Tensor.mvlgamma_ |
names
Stores names for each of this tensor's dimensions. names[idx] corresponds to the name of tensor dimension idx. Names are either a string if the dimension is named or None if the dimension is unnamed. Dimension names may contain characters or underscore. Furthermore, a dimension name must be a valid Python varia... | torch.named_tensor#torch.Tensor.names |
nanmedian(dim=None, keepdim=False) -> (Tensor, LongTensor)
See torch.nanmedian() | torch.tensors#torch.Tensor.nanmedian |
nanquantile(q, dim=None, keepdim=False) → Tensor
See torch.nanquantile() | torch.tensors#torch.Tensor.nanquantile |
nansum(dim=None, keepdim=False, dtype=None) → Tensor
See torch.nansum() | torch.tensors#torch.Tensor.nansum |
nan_to_num(nan=0.0, posinf=None, neginf=None) → Tensor
See torch.nan_to_num(). | torch.tensors#torch.Tensor.nan_to_num |
nan_to_num_(nan=0.0, posinf=None, neginf=None) → Tensor
In-place version of nan_to_num(). | torch.tensors#torch.Tensor.nan_to_num_ |
narrow(dimension, start, length) → Tensor
See torch.narrow() Example: >>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> x.narrow(0, 0, 2)
tensor([[ 1, 2, 3],
[ 4, 5, 6]])
>>> x.narrow(1, 1, 2)
tensor([[ 2, 3],
[ 5, 6],
[ 8, 9]]) | torch.tensors#torch.Tensor.narrow |
narrow_copy(dimension, start, length) → Tensor
Same as Tensor.narrow() except returning a copy rather than shared storage. This is primarily for sparse tensors, which do not have a shared-storage narrow method. Calling `narrow_copy()` with `dimension > self.sparse_dim()` will return a copy with the relevant dense dimens... | torch.tensors#torch.Tensor.narrow_copy |
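The view-versus-copy distinction can be illustrated with a plain-Python analogy (torch is not involved): a memoryview slice shares the underlying buffer the way narrow() does, while an ordinary slice is an independent copy like narrow_copy().

```python
import array

# An array.array stands in for a tensor's underlying storage
# (illustrative analogy only, not torch).
storage = array.array("i", [1, 2, 3, 4, 5, 6])

view = memoryview(storage)[0:4]  # shares the buffer, like narrow()
copy = storage[0:4]              # independent copy, like narrow_copy()

storage[0] = 99
print(view[0], copy[0])  # 99 1 -- the view sees the change, the copy does not
```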
ndim
Alias for dim() | torch.tensors#torch.Tensor.ndim |
ndimension() → int
Alias for dim() | torch.tensors#torch.Tensor.ndimension |
ne(other) → Tensor
See torch.ne(). | torch.tensors#torch.Tensor.ne |
neg() → Tensor
See torch.neg() | torch.tensors#torch.Tensor.neg |
negative() → Tensor
See torch.negative() | torch.tensors#torch.Tensor.negative |
negative_() → Tensor
In-place version of negative() | torch.tensors#torch.Tensor.negative_ |
neg_() → Tensor
In-place version of neg() | torch.tensors#torch.Tensor.neg_ |
nelement() → int
Alias for numel() | torch.tensors#torch.Tensor.nelement |
new_empty(size, dtype=None, device=None, requires_grad=False) → Tensor
Returns a Tensor of size size filled with uninitialized data. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Parameters
dtype (torch.dtype, optional) – the desired type of returned tensor. Default: if... | torch.tensors#torch.Tensor.new_empty |
new_full(size, fill_value, dtype=None, device=None, requires_grad=False) → Tensor
Returns a Tensor of size size filled with fill_value. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Parameters
fill_value (scalar) – the number to fill the output tensor with.
dtype (torc... | torch.tensors#torch.Tensor.new_full |
new_ones(size, dtype=None, device=None, requires_grad=False) → Tensor
Returns a Tensor of size size filled with 1. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Parameters
size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor. ... | torch.tensors#torch.Tensor.new_ones |
new_tensor(data, dtype=None, device=None, requires_grad=False) → Tensor
Returns a new Tensor with data as the tensor data. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Warning new_tensor() always copies data. If you have a Tensor data and want to avoid a copy, use torch.T... | torch.tensors#torch.Tensor.new_tensor |
new_zeros(size, dtype=None, device=None, requires_grad=False) → Tensor
Returns a Tensor of size size filled with 0. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Parameters
size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor.... | torch.tensors#torch.Tensor.new_zeros |
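The pattern shared by all the new_* factory methods above (dtype and device default to those of self unless explicitly overridden) can be sketched in plain Python; the MiniTensor class below is purely illustrative and not part of torch.

```python
class MiniTensor:
    """Hypothetical stand-in illustrating the new_* factory pattern."""

    def __init__(self, data, dtype="float32", device="cpu"):
        self.data = data
        self.dtype = dtype
        self.device = device

    def new_full(self, size, fill_value, dtype=None, device=None):
        # Defaults fall back to *this* tensor's dtype and device,
        # just as documented for Tensor.new_full.
        return MiniTensor(
            [fill_value] * size,
            dtype=dtype if dtype is not None else self.dtype,
            device=device if device is not None else self.device,
        )

t = MiniTensor([1.0, 2.0], dtype="float64", device="cuda:0")
u = t.new_full(3, 7.0)
print(u.data, u.dtype, u.device)  # [7.0, 7.0, 7.0] float64 cuda:0
```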
nextafter(other) → Tensor
See torch.nextafter() | torch.tensors#torch.Tensor.nextafter |
nextafter_(other) → Tensor
In-place version of nextafter() | torch.tensors#torch.Tensor.nextafter_ |
ne_(other) → Tensor
In-place version of ne(). | torch.tensors#torch.Tensor.ne_ |
nonzero() → LongTensor
See torch.nonzero() | torch.tensors#torch.Tensor.nonzero |
norm(p='fro', dim=None, keepdim=False, dtype=None) [source]
See torch.norm() | torch.tensors#torch.Tensor.norm |
normal_(mean=0, std=1, *, generator=None) → Tensor
Fills self tensor with elements sampled from the normal distribution parameterized by mean and std. | torch.tensors#torch.Tensor.normal_ |
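The fill-in-place-with-Gaussian-samples idea behind normal_() looks like this in plain Python (using the stdlib random module rather than torch's generator machinery):

```python
import random

def normal_fill(buf, mean=0.0, std=1.0, rng=random):
    # Overwrite every element in place, mimicking Tensor.normal_()
    # (illustrative helper, not part of torch).
    for i in range(len(buf)):
        buf[i] = rng.gauss(mean, std)
    return buf

random.seed(0)
x = [0.0] * 5
normal_fill(x, mean=10.0, std=0.1)
print(x)  # five samples clustered around 10.0
```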
not_equal(other) → Tensor
See torch.not_equal(). | torch.tensors#torch.Tensor.not_equal |
not_equal_(other) → Tensor
In-place version of not_equal(). | torch.tensors#torch.Tensor.not_equal_ |
numel() → int
See torch.numel() | torch.tensors#torch.Tensor.numel |
numpy() → numpy.ndarray
Returns self tensor as a NumPy ndarray. This tensor and the returned ndarray share the same underlying storage. Changes to self tensor will be reflected in the ndarray and vice versa. | torch.tensors#torch.Tensor.numpy |
orgqr(input2) → Tensor
See torch.orgqr() | torch.tensors#torch.Tensor.orgqr |
ormqr(input2, input3, left=True, transpose=False) → Tensor
See torch.ormqr() | torch.tensors#torch.Tensor.ormqr |
outer(vec2) → Tensor
See torch.outer(). | torch.tensors#torch.Tensor.outer |
permute(*dims) → Tensor
Returns a view of the original tensor with its dimensions permuted. Parameters
*dims (int...) – The desired ordering of dimensions Example >>> x = torch.randn(2, 3, 5)
>>> x.size()
torch.Size([2, 3, 5])
>>> x.permute(2, 0, 1).size()
torch.Size([5, 2, 3]) | torch.tensors#torch.Tensor.permute |
pinverse() → Tensor
See torch.pinverse() | torch.tensors#torch.Tensor.pinverse |
pin_memory() → Tensor
Copies the tensor to pinned memory, if it's not already pinned. | torch.tensors#torch.Tensor.pin_memory |
polygamma(n) → Tensor
See torch.polygamma() | torch.tensors#torch.Tensor.polygamma |
polygamma_(n) → Tensor
In-place version of polygamma() | torch.tensors#torch.Tensor.polygamma_ |
pow(exponent) → Tensor
See torch.pow() | torch.tensors#torch.Tensor.pow |
pow_(exponent) → Tensor
In-place version of pow() | torch.tensors#torch.Tensor.pow_ |
prod(dim=None, keepdim=False, dtype=None) → Tensor
See torch.prod() | torch.tensors#torch.Tensor.prod |
put_(indices, tensor, accumulate=False) → Tensor
Copies the elements from tensor into the positions specified by indices. For the purpose of indexing, the self tensor is treated as if it were a 1-D tensor. If accumulate is True, the elements in tensor are added to self. If accumulate is False, the behavior is undefin... | torch.tensors#torch.Tensor.put_ |
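put_()'s treat-self-as-1-D indexing rule can be sketched in plain Python (the helper below is illustrative, not part of torch):

```python
def put_(matrix, indices, values, accumulate=False):
    """Write `values` into `matrix` at flat positions `indices`,
    treating the nested list as if it were 1-D (like Tensor.put_)."""
    ncols = len(matrix[0])
    for flat, val in zip(indices, values):
        r, c = divmod(flat, ncols)  # flat index -> (row, col)
        if accumulate:
            matrix[r][c] += val
        else:
            matrix[r][c] = val
    return matrix

m = [[1, 2, 3],
     [4, 5, 6]]
put_(m, [1, 4], [10, 20])
print(m)  # [[1, 10, 3], [4, 20, 6]]
```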
qr(some=True) -> (Tensor, Tensor)
See torch.qr() | torch.tensors#torch.Tensor.qr |
qscheme() → torch.qscheme
Returns the quantization scheme of a given QTensor. | torch.tensors#torch.Tensor.qscheme |
quantile(q, dim=None, keepdim=False) → Tensor
See torch.quantile() | torch.tensors#torch.Tensor.quantile |
q_per_channel_axis() → int
Given a Tensor quantized by linear (affine) per-channel quantization, returns the index of dimension on which per-channel quantization is applied. | torch.tensors#torch.Tensor.q_per_channel_axis |
q_per_channel_scales() → Tensor
Given a Tensor quantized by linear (affine) per-channel quantization, returns a Tensor of scales of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q_per_channel_axis) of the tensor. | torch.tensors#torch.Tensor.q_per_channel_scales |
q_per_channel_zero_points() → Tensor
Given a Tensor quantized by linear (affine) per-channel quantization, returns a tensor of zero_points of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q_per_channel_axis) of the tensor. | torch.tensors#torch.Tensor.q_per_channel_zero_points |
q_scale() → float
Given a Tensor quantized by linear (affine) quantization, returns the scale of the underlying quantizer(). | torch.tensors#torch.Tensor.q_scale |
q_zero_point() → int
Given a Tensor quantized by linear (affine) quantization, returns the zero_point of the underlying quantizer(). | torch.tensors#torch.Tensor.q_zero_point |
rad2deg() → Tensor
See torch.rad2deg() | torch.tensors#torch.Tensor.rad2deg |
random_(from=0, to=None, *, generator=None) → Tensor
Fills self tensor with numbers sampled from the discrete uniform distribution over [from, to - 1]. If not specified, the values are usually only bounded by self tensor's data type. However, for floating point types, if unspecified, range will be [0, 2^mantissa] to ... | torch.tensors#torch.Tensor.random_ |
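For integer tensors, the documented range [from, to - 1] matches the stdlib's random.randrange(from, to); a plain-Python sketch (the helper name is hypothetical):

```python
import random

def random_fill(buf, from_=0, to=None, rng=random):
    # Mimics Tensor.random_ for integers: discrete uniform over [from_, to - 1].
    for i in range(len(buf)):
        buf[i] = rng.randrange(from_, to)
    return buf

random.seed(0)
x = [0] * 100
random_fill(x, from_=2, to=5)
print(min(x), max(x))  # every sample stays within [2, 4]
```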
ravel(input) → Tensor
See torch.ravel() | torch.tensors#torch.Tensor.ravel |
real
Returns a new tensor containing real values of the self tensor. The returned tensor and self share the same underlying storage. Warning real() is only supported for tensors with complex dtypes. Example:
>>> x = torch.randn(4, dtype=torch.cfloat)
>>> x
tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.06... | torch.tensors#torch.Tensor.real |
reciprocal() → Tensor
See torch.reciprocal() | torch.tensors#torch.Tensor.reciprocal |
reciprocal_() → Tensor
In-place version of reciprocal() | torch.tensors#torch.Tensor.reciprocal_ |
record_stream(stream)
Ensures that the tensor memory is not reused for another tensor until all current work queued on stream is complete. Note The caching allocator is aware of only the stream where a tensor was allocated. Due to this awareness, it already correctly manages the life cycle of tensors on only one str... | torch.tensors#torch.Tensor.record_stream |
refine_names(*names) [source]
Refines the dimension names of self according to names. Refining is a special case of renaming that "lifts" unnamed dimensions. A None dim can be refined to have any name; a named dim can only be refined to have the same name. Because named tensors can coexist with unnamed tensors, refin... | torch.named_tensor#torch.Tensor.refine_names |
register_hook(hook) [source]
Registers a backward hook. The hook will be called every time a gradient with respect to the Tensor is computed. The hook should have the following signature: hook(grad) -> Tensor or None
The hook should not modify its argument, but it can optionally return a new gradient which will be u... | torch.autograd#torch.Tensor.register_hook |
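The hook contract above (hook(grad) -> Tensor or None, where a non-None return value replaces the gradient for later consumers) can be sketched without autograd; the apply_hooks helper below is illustrative, not a torch API:

```python
def apply_hooks(grad, hooks):
    """Run hooks in order; a hook returning non-None replaces the gradient,
    mirroring the contract of Tensor.register_hook."""
    for hook in hooks:
        out = hook(grad)
        if out is not None:
            grad = out
    return grad

hooks = [
    lambda g: None,                # observes only; gradient unchanged
    lambda g: [2 * v for v in g],  # returns a new, scaled gradient
]
print(apply_hooks([1.0, -3.0], hooks))  # [2.0, -6.0]
```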
remainder(divisor) → Tensor
See torch.remainder() | torch.tensors#torch.Tensor.remainder |
remainder_(divisor) → Tensor
In-place version of remainder() | torch.tensors#torch.Tensor.remainder_ |
rename(*names, **rename_map) [source]
Renames dimension names of self. There are two main usages: self.rename(**rename_map) returns a view on tensor that has dims renamed as specified in the mapping rename_map. self.rename(*names) returns a view on tensor, renaming all dimensions positionally using names. Use self.re... | torch.named_tensor#torch.Tensor.rename |
rename_(*names, **rename_map) [source]
In-place version of rename(). | torch.named_tensor#torch.Tensor.rename_ |
renorm(p, dim, maxnorm) → Tensor
See torch.renorm() | torch.tensors#torch.Tensor.renorm |
renorm_(p, dim, maxnorm) → Tensor
In-place version of renorm() | torch.tensors#torch.Tensor.renorm_ |
repeat(*sizes) → Tensor
Repeats this tensor along the specified dimensions. Unlike expand(), this function copies the tensor's data. Warning repeat() behaves differently from numpy.repeat, but is more similar to numpy.tile. For the operator similar to numpy.repeat, see torch.repeat_interleave(). Parameters
sizes ... | torch.tensors#torch.Tensor.repeat |
repeat_interleave(repeats, dim=None) → Tensor
See torch.repeat_interleave(). | torch.tensors#torch.Tensor.repeat_interleave |
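The tile-versus-interleave distinction drawn in the repeat() entry can be shown on plain lists (illustrative helpers, not torch):

```python
def tile(seq, n):
    # Like Tensor.repeat on a 1-D tensor (numpy.tile-style):
    # the whole sequence is repeated n times.
    return seq * n

def interleave(seq, n):
    # Like Tensor.repeat_interleave (numpy.repeat-style):
    # each element is repeated n times in place.
    return [x for x in seq for _ in range(n)]

print(tile([1, 2, 3], 2))        # [1, 2, 3, 1, 2, 3]
print(interleave([1, 2, 3], 2))  # [1, 1, 2, 2, 3, 3]
```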
requires_grad
Is True if gradients need to be computed for this Tensor, False otherwise. Note The fact that gradients need to be computed for a Tensor does not mean that the grad attribute will be populated; see is_leaf for more details. | torch.autograd#torch.Tensor.requires_grad |
requires_grad_(requires_grad=True) → Tensor
Change if autograd should record operations on this tensor: sets this tensor's requires_grad attribute in-place. Returns this tensor. requires_grad_()'s main use case is to tell autograd to begin recording operations on a Tensor tensor. If tensor has requires_grad=False (be... | torch.tensors#torch.Tensor.requires_grad_ |
reshape(*shape) → Tensor
Returns a tensor with the same data and number of elements as self but with the specified shape. This method returns a view if shape is compatible with the current shape. See torch.Tensor.view() on when it is possible to return a view. See torch.reshape() Parameters
shape (tuple of python:i... | torch.tensors#torch.Tensor.reshape |
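The count-preserving invariant behind reshape() (the new shape must account for exactly the same number of elements) can be sketched for the 2-D case in plain Python (hypothetical helper, not torch):

```python
def reshape2d(flat, rows, cols):
    # A reshape is only legal when the element count is unchanged,
    # as with Tensor.reshape.
    assert rows * cols == len(flat), "shape incompatible with number of elements"
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]

print(reshape2d([1, 2, 3, 4, 5, 6], 2, 3))  # [[1, 2, 3], [4, 5, 6]]
print(reshape2d([1, 2, 3, 4, 5, 6], 3, 2))  # [[1, 2], [3, 4], [5, 6]]
```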
reshape_as(other) → Tensor
Returns this tensor as the same shape as other. self.reshape_as(other) is equivalent to self.reshape(other.sizes()). This method returns a view if other.sizes() is compatible with the current shape. See torch.Tensor.view() on when it is possible to return a view. Please see reshape() for mo... | torch.tensors#torch.Tensor.reshape_as |
resize_(*sizes, memory_format=torch.contiguous_format) → Tensor
Resizes self tensor to the specified size. If the number of elements is larger than the current storage size, then the underlying storage is resized to fit the new number of elements. If the number of elements is smaller, the underlying storage is not ch... | torch.tensors#torch.Tensor.resize_ |
resize_as_(tensor, memory_format=torch.contiguous_format) → Tensor
Resizes the self tensor to be the same size as the specified tensor. This is equivalent to self.resize_(tensor.size()). Parameters
memory_format (torch.memory_format, optional) – the desired memory format of Tensor. Default: torch.contiguous_format.... | torch.tensors#torch.Tensor.resize_as_ |
retain_grad() [source]
Enables .grad attribute for non-leaf Tensors. | torch.autograd#torch.Tensor.retain_grad |
roll(shifts, dims) → Tensor
See torch.roll() | torch.tensors#torch.Tensor.roll |
rot90(k, dims) → Tensor
See torch.rot90() | torch.tensors#torch.Tensor.rot90 |
round() → Tensor
See torch.round() | torch.tensors#torch.Tensor.round |
round_() → Tensor
In-place version of round() | torch.tensors#torch.Tensor.round_ |
rsqrt() → Tensor
See torch.rsqrt() | torch.tensors#torch.Tensor.rsqrt |
rsqrt_() → Tensor
In-place version of rsqrt() | torch.tensors#torch.Tensor.rsqrt_ |
scatter(dim, index, src) → Tensor
Out-of-place version of torch.Tensor.scatter_() | torch.tensors#torch.Tensor.scatter |
scatter_(dim, index, src, reduce=None) → Tensor
Writes all values from the tensor src into self at the indices specified in the index tensor. For each value in src, its output index is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim. For a 3-D tensor, sel... | torch.tensors#torch.Tensor.scatter_ |
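For a 2-D tensor scattered along dim=0, the rule above reduces to self[index[i][j]][j] = src[i][j]; a plain-Python sketch of that case (illustrative helper, not torch):

```python
def scatter_dim0(dest, index, src):
    # For dim=0: dest[index[i][j]][j] = src[i][j], as in Tensor.scatter_.
    for i in range(len(src)):
        for j in range(len(src[0])):
            dest[index[i][j]][j] = src[i][j]
    return dest

dest = [[0, 0, 0],
        [0, 0, 0]]
index = [[0, 1, 0]]
src = [[9, 8, 7]]
scatter_dim0(dest, index, src)
print(dest)  # [[9, 0, 7], [0, 8, 0]]
```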
scatter_add(dim, index, src) → Tensor
Out-of-place version of torch.Tensor.scatter_add_() | torch.tensors#torch.Tensor.scatter_add |
scatter_add_(dim, index, src) → Tensor
Adds all values from the tensor src into self at the indices specified in the index tensor in a similar fashion as scatter_(). For each value in src, it is added to an index in self which is specified by its index in src for dimension != dim and by the corresponding value in i... | torch.tensors#torch.Tensor.scatter_add_ |
select(dim, index) → Tensor
Slices the self tensor along the selected dimension at the given index. This function returns a view of the original tensor with the given dimension removed. Parameters
dim (int) – the dimension to slice
index (int) – the index to select with Note select() is equivalent to slicing... | torch.tensors#torch.Tensor.select |
set_(source=None, storage_offset=0, size=None, stride=None) → Tensor
Sets the underlying storage, size, and strides. If source is a tensor, self tensor will share the same storage and have the same size and strides as source. Changes to elements in one tensor will be reflected in the other. If source is a Storage, th... | torch.tensors#torch.Tensor.set_ |
sgn() → Tensor
See torch.sgn() | torch.tensors#torch.Tensor.sgn |