| doc_content | doc_id |
|---|---|
torch.Tensor A torch.Tensor is a multi-dimensional matrix containing elements of a single data type. Torch defines 10 tensor types with CPU and GPU variants which are as follows:
Data type dtype CPU tensor GPU tensor
32-bit floating point torch.float32 or torch.float torch.FloatTensor torch.cuda.FloatTensor
64-... | torch.tensors |
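The dtype column above maps directly onto constructor arguments; a minimal sketch of selecting among the listed types (the CPU vs. CUDA tensor class follows from the `device`, not from a separate constructor):

```python
import torch

# dtype selects among the tensor types listed above; on CPU these correspond
# to torch.FloatTensor and torch.DoubleTensor respectively.
f = torch.zeros(2, 3, dtype=torch.float32)
d = torch.zeros(2, 3, dtype=torch.float64)
print(f.dtype, d.dtype)  # torch.float32 torch.float64
```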
torch.tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False) → Tensor
Constructs a tensor with data. Warning torch.tensor() always copies data. If you have a Tensor data and want to avoid a copy, use torch.Tensor.requires_grad_() or torch.Tensor.detach(). If you have a NumPy ndarray and want... | torch.generated.torch.tensor#torch.tensor |
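A small sketch of the copy semantics the warning describes, assuming a CPU tensor and a NumPy array (note that calling `torch.tensor()` on an existing Tensor works but emits a `UserWarning`):

```python
import numpy as np
import torch

t = torch.ones(3)
copied = torch.tensor(t)   # always copies (warns when given a Tensor)
view = t.detach()          # shares storage with t; no copy
t[0] = 5.0
print(copied[0].item(), view[0].item())  # 1.0 5.0

arr = np.zeros(3)
shared = torch.from_numpy(arr)  # shares memory with the ndarray
arr[0] = 2.0
print(shared[0].item())  # 2.0
```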
abs() → Tensor
See torch.abs() | torch.tensors#torch.Tensor.abs |
absolute() → Tensor
Alias for abs() | torch.tensors#torch.Tensor.absolute |
absolute_() → Tensor
In-place version of absolute() Alias for abs_() | torch.tensors#torch.Tensor.absolute_ |
abs_() → Tensor
In-place version of abs() | torch.tensors#torch.Tensor.abs_ |
acos() → Tensor
See torch.acos() | torch.tensors#torch.Tensor.acos |
acosh() → Tensor
See torch.acosh() | torch.tensors#torch.Tensor.acosh |
acosh_() → Tensor
In-place version of acosh() | torch.tensors#torch.Tensor.acosh_ |
acos_() → Tensor
In-place version of acos() | torch.tensors#torch.Tensor.acos_ |
add(other, *, alpha=1) → Tensor
Add a scalar or tensor to self tensor. If both alpha and other are specified, each element of other is scaled by alpha before being used. When other is a tensor, the shape of other must be broadcastable with the shape of the underlying tensor. See torch.add() | torch.tensors#torch.Tensor.add |
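The alpha scaling described above can be sketched in a couple of lines:

```python
import torch

a = torch.tensor([1.0, 2.0])
b = torch.tensor([10.0, 20.0])
out = a.add(b, alpha=0.5)  # computes a + 0.5 * b elementwise
print(out)  # tensor([ 6., 12.])
```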
addbmm(batch1, batch2, *, beta=1, alpha=1) → Tensor
See torch.addbmm() | torch.tensors#torch.Tensor.addbmm |
addbmm_(batch1, batch2, *, beta=1, alpha=1) → Tensor
In-place version of addbmm() | torch.tensors#torch.Tensor.addbmm_ |
addcdiv(tensor1, tensor2, *, value=1) → Tensor
See torch.addcdiv() | torch.tensors#torch.Tensor.addcdiv |
addcdiv_(tensor1, tensor2, *, value=1) → Tensor
In-place version of addcdiv() | torch.tensors#torch.Tensor.addcdiv_ |
addcmul(tensor1, tensor2, *, value=1) → Tensor
See torch.addcmul() | torch.tensors#torch.Tensor.addcmul |
addcmul_(tensor1, tensor2, *, value=1) → Tensor
In-place version of addcmul() | torch.tensors#torch.Tensor.addcmul_ |
addmm(mat1, mat2, *, beta=1, alpha=1) → Tensor
See torch.addmm() | torch.tensors#torch.Tensor.addmm |
addmm_(mat1, mat2, *, beta=1, alpha=1) → Tensor
In-place version of addmm() | torch.tensors#torch.Tensor.addmm_ |
addmv(mat, vec, *, beta=1, alpha=1) → Tensor
See torch.addmv() | torch.tensors#torch.Tensor.addmv |
addmv_(mat, vec, *, beta=1, alpha=1) → Tensor
In-place version of addmv() | torch.tensors#torch.Tensor.addmv_ |
addr(vec1, vec2, *, beta=1, alpha=1) → Tensor
See torch.addr() | torch.tensors#torch.Tensor.addr |
addr_(vec1, vec2, *, beta=1, alpha=1) → Tensor
In-place version of addr() | torch.tensors#torch.Tensor.addr_ |
add_(other, *, alpha=1) → Tensor
In-place version of add() | torch.tensors#torch.Tensor.add_ |
align_as(other) → Tensor
Permutes the dimensions of the self tensor to match the dimension order in the other tensor, adding size-one dims for any new names. This operation is useful for explicit broadcasting by names (see examples). All of the dims of self must be named in order to use this method. The resulting ten... | torch.named_tensor#torch.Tensor.align_as |
align_to(*names) [source]
Permutes the dimensions of the self tensor to match the order specified in names, adding size-one dims for any new names. All of the dims of self must be named in order to use this method. The resulting tensor is a view on the original tensor. All dimension names of self must be present in n... | torch.named_tensor#torch.Tensor.align_to |
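A minimal sketch of align_to, assuming the prototype named-tensor feature is available (tensor creation with names emits a UserWarning); the dimension names are illustrative:

```python
import torch

x = torch.randn(2, 3, names=('C', 'N'))
y = x.align_to('N', 'C', 'H')  # permutes to (N, C) order, adds a size-one 'H' dim
print(y.shape)  # torch.Size([3, 2, 1])
```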
all(dim=None, keepdim=False) → Tensor
See torch.all() | torch.tensors#torch.Tensor.all |
allclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) → Tensor
See torch.allclose() | torch.tensors#torch.Tensor.allclose |
amax(dim=None, keepdim=False) → Tensor
See torch.amax() | torch.tensors#torch.Tensor.amax |
amin(dim=None, keepdim=False) → Tensor
See torch.amin() | torch.tensors#torch.Tensor.amin |
angle() → Tensor
See torch.angle() | torch.tensors#torch.Tensor.angle |
any(dim=None, keepdim=False) → Tensor
See torch.any() | torch.tensors#torch.Tensor.any |
apply_(callable) → Tensor
Applies the function callable to each element in the tensor, replacing each element with the value returned by callable. Note This function only works with CPU tensors and should not be used in code sections that require high performance. | torch.tensors#torch.Tensor.apply_ |
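The element-by-element replacement described above can be sketched as follows; note the performance caveat, since the callable runs in Python once per element:

```python
import torch

t = torch.tensor([1.0, 2.0, 3.0])
t.apply_(lambda v: v * v)  # in place; CPU-only, one Python call per element
print(t)  # tensor([1., 4., 9.])
```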
arccos() → Tensor
See torch.arccos() | torch.tensors#torch.Tensor.arccos |
arccosh()
acosh() -> Tensor See torch.arccosh() | torch.tensors#torch.Tensor.arccosh |
arccosh_()
acosh_() -> Tensor In-place version of arccosh() | torch.tensors#torch.Tensor.arccosh_ |
arccos_() → Tensor
In-place version of arccos() | torch.tensors#torch.Tensor.arccos_ |
arcsin() → Tensor
See torch.arcsin() | torch.tensors#torch.Tensor.arcsin |
arcsinh() → Tensor
See torch.arcsinh() | torch.tensors#torch.Tensor.arcsinh |
arcsinh_() → Tensor
In-place version of arcsinh() | torch.tensors#torch.Tensor.arcsinh_ |
arcsin_() → Tensor
In-place version of arcsin() | torch.tensors#torch.Tensor.arcsin_ |
arctan() → Tensor
See torch.arctan() | torch.tensors#torch.Tensor.arctan |
arctanh() → Tensor
See torch.arctanh() | torch.tensors#torch.Tensor.arctanh |
arctanh_() → Tensor
In-place version of arctanh() | torch.tensors#torch.Tensor.arctanh_ |
arctan_() → Tensor
In-place version of arctan() | torch.tensors#torch.Tensor.arctan_ |
argmax(dim=None, keepdim=False) → LongTensor
See torch.argmax() | torch.tensors#torch.Tensor.argmax |
argmin(dim=None, keepdim=False) → LongTensor
See torch.argmin() | torch.tensors#torch.Tensor.argmin |
argsort(dim=-1, descending=False) → LongTensor
See torch.argsort() | torch.tensors#torch.Tensor.argsort |
asin() → Tensor
See torch.asin() | torch.tensors#torch.Tensor.asin |
asinh() → Tensor
See torch.asinh() | torch.tensors#torch.Tensor.asinh |
asinh_() → Tensor
In-place version of asinh() | torch.tensors#torch.Tensor.asinh_ |
asin_() → Tensor
In-place version of asin() | torch.tensors#torch.Tensor.asin_ |
as_strided(size, stride, storage_offset=0) → Tensor
See torch.as_strided() | torch.tensors#torch.Tensor.as_strided |
as_subclass(cls) → Tensor
Makes a cls instance with the same data pointer as self. Changes in the output mirror changes in self, and the output stays attached to the autograd graph. cls must be a subclass of Tensor. | torch.tensors#torch.Tensor.as_subclass |
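A minimal sketch of the shared-data behavior; `TaggedTensor` is a hypothetical subclass introduced only for illustration:

```python
import torch

class TaggedTensor(torch.Tensor):  # hypothetical Tensor subclass
    pass

base = torch.ones(2)
sub = base.as_subclass(TaggedTensor)  # same data pointer, new Python class
base[0] = 7.0                         # mutation is mirrored in sub
print(type(sub).__name__, sub[0].item())  # TaggedTensor 7.0
```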
atan() → Tensor
See torch.atan() | torch.tensors#torch.Tensor.atan |
atan2(other) → Tensor
See torch.atan2() | torch.tensors#torch.Tensor.atan2 |
atan2_(other) → Tensor
In-place version of atan2() | torch.tensors#torch.Tensor.atan2_ |
atanh() → Tensor
See torch.atanh() | torch.tensors#torch.Tensor.atanh |
atanh_() → Tensor
In-place version of atanh() | torch.tensors#torch.Tensor.atanh_ |
atan_() → Tensor
In-place version of atan() | torch.tensors#torch.Tensor.atan_ |
backward(gradient=None, retain_graph=None, create_graph=False, inputs=None) [source]
Computes the gradient of current tensor w.r.t. graph leaves. The graph is differentiated using the chain rule. If the tensor is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally req... | torch.autograd#torch.Tensor.backward |
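For the common scalar-output case no gradient argument is needed; a minimal sketch:

```python
import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x * x).sum()  # scalar output, so backward() needs no gradient argument
y.backward()       # populates x.grad via the chain rule
print(x.grad)      # tensor([4., 6.]), i.e. dy/dx = 2x
```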
baddbmm(batch1, batch2, *, beta=1, alpha=1) → Tensor
See torch.baddbmm() | torch.tensors#torch.Tensor.baddbmm |
baddbmm_(batch1, batch2, *, beta=1, alpha=1) → Tensor
In-place version of baddbmm() | torch.tensors#torch.Tensor.baddbmm_ |
bernoulli(*, generator=None) → Tensor
Returns a result tensor where each result[i] is independently sampled from Bernoulli(self[i]). self must have floating point dtype, and the result will have the same dtype. See torch.bernoulli() | torch.tensors#torch.Tensor.bernoulli |
bernoulli_()
bernoulli_(p=0.5, *, generator=None) → Tensor
Fills each location of self with an independent sample from Bernoulli(p). self can have integral dtype.
bernoulli_(p_tensor, *, generator=None) → Tensor
p_tensor should be a tensor containing probabilities to be used for... | torch.tensors#torch.Tensor.bernoulli_ |
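Both overloads can be sketched briefly; the scalar-p form works on integral dtypes, and with p fixed at 0 or 1 the tensor-p form is deterministic:

```python
import torch

g = torch.Generator().manual_seed(0)
t = torch.empty(4, dtype=torch.int64)
t.bernoulli_(0.5, generator=g)      # scalar-p form; integral dtype is allowed
print(sorted(set(t.tolist())))      # only 0s and/or 1s

p = torch.tensor([0.0, 1.0])
u = torch.empty(2).bernoulli_(p)    # tensor-p form, one probability per element
print(u)  # tensor([0., 1.])
```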
bfloat16(memory_format=torch.preserve_format) → Tensor
self.bfloat16() is equivalent to self.to(torch.bfloat16). See to(). Parameters
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format. | torch.tensors#torch.Tensor.bfloat16 |
bincount(weights=None, minlength=0) → Tensor
See torch.bincount() | torch.tensors#torch.Tensor.bincount |
bitwise_and() → Tensor
See torch.bitwise_and() | torch.tensors#torch.Tensor.bitwise_and |
bitwise_and_() → Tensor
In-place version of bitwise_and() | torch.tensors#torch.Tensor.bitwise_and_ |
bitwise_not() → Tensor
See torch.bitwise_not() | torch.tensors#torch.Tensor.bitwise_not |
bitwise_not_() → Tensor
In-place version of bitwise_not() | torch.tensors#torch.Tensor.bitwise_not_ |
bitwise_or() → Tensor
See torch.bitwise_or() | torch.tensors#torch.Tensor.bitwise_or |
bitwise_or_() → Tensor
In-place version of bitwise_or() | torch.tensors#torch.Tensor.bitwise_or_ |
bitwise_xor() → Tensor
See torch.bitwise_xor() | torch.tensors#torch.Tensor.bitwise_xor |
bitwise_xor_() → Tensor
In-place version of bitwise_xor() | torch.tensors#torch.Tensor.bitwise_xor_ |
bmm(batch2) → Tensor
See torch.bmm() | torch.tensors#torch.Tensor.bmm |
bool(memory_format=torch.preserve_format) → Tensor
self.bool() is equivalent to self.to(torch.bool). See to(). Parameters
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format. | torch.tensors#torch.Tensor.bool |
broadcast_to(shape) → Tensor
See torch.broadcast_to(). | torch.tensors#torch.Tensor.broadcast_to |
byte(memory_format=torch.preserve_format) → Tensor
self.byte() is equivalent to self.to(torch.uint8). See to(). Parameters
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format. | torch.tensors#torch.Tensor.byte |
cauchy_(median=0, sigma=1, *, generator=None) → Tensor
Fills the tensor with numbers drawn from the Cauchy distribution: f(x) = \dfrac{1}{\pi} \dfrac{\sigma}{(x - \text{median})^2 + \sigma^2} | torch.tensors#torch.Tensor.cauchy_ |
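A minimal sketch of sampling from this density; the Cauchy distribution is heavy-tailed with no finite mean, so the sample median (not the sample mean) is the natural estimate of the median parameter:

```python
import torch

g = torch.Generator().manual_seed(0)
t = torch.empty(1000).cauchy_(median=0.0, sigma=1.0, generator=g)
# the sample median should land close to the median parameter
print(t.median().item())  # close to 0
```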
ceil() → Tensor
See torch.ceil() | torch.tensors#torch.Tensor.ceil |
ceil_() → Tensor
In-place version of ceil() | torch.tensors#torch.Tensor.ceil_ |
char(memory_format=torch.preserve_format) → Tensor
self.char() is equivalent to self.to(torch.int8). See to(). Parameters
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format. | torch.tensors#torch.Tensor.char |
cholesky(upper=False) → Tensor
See torch.cholesky() | torch.tensors#torch.Tensor.cholesky |
cholesky_inverse(upper=False) → Tensor
See torch.cholesky_inverse() | torch.tensors#torch.Tensor.cholesky_inverse |
cholesky_solve(input2, upper=False) → Tensor
See torch.cholesky_solve() | torch.tensors#torch.Tensor.cholesky_solve |
chunk(chunks, dim=0) → List of Tensors
See torch.chunk() | torch.tensors#torch.Tensor.chunk |
clamp(min, max) → Tensor
See torch.clamp() | torch.tensors#torch.Tensor.clamp |
clamp_(min, max) → Tensor
In-place version of clamp() | torch.tensors#torch.Tensor.clamp_ |
clip(min, max) → Tensor
Alias for clamp(). | torch.tensors#torch.Tensor.clip |
clip_(min, max) → Tensor
Alias for clamp_(). | torch.tensors#torch.Tensor.clip_ |
clone(*, memory_format=torch.preserve_format) → Tensor
See torch.clone() | torch.tensors#torch.Tensor.clone |
coalesce() → Tensor
Returns a coalesced copy of self if self is an uncoalesced tensor. Returns self if self is a coalesced tensor. Warning Throws an error if self is not a sparse COO tensor. | torch.sparse#torch.Tensor.coalesce |
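A minimal sketch of coalescing a sparse COO tensor with a duplicated index; duplicate entries are summed:

```python
import torch

i = torch.tensor([[0, 0, 1]])            # index 0 appears twice (uncoalesced)
v = torch.tensor([1.0, 2.0, 3.0])
s = torch.sparse_coo_tensor(i, v, (2,))
c = s.coalesce()                          # duplicates summed: 1+2 at index 0
print(c.values())  # tensor([3., 3.])
```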
conj() → Tensor
See torch.conj() | torch.tensors#torch.Tensor.conj |
contiguous(memory_format=torch.contiguous_format) → Tensor
Returns a contiguous in memory tensor containing the same data as self tensor. If self tensor is already in the specified memory format, this function returns the self tensor. Parameters
memory_format (torch.memory_format, optional) – the desired memory for... | torch.tensors#torch.Tensor.contiguous |
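The behavior above is easiest to see with a transposed view, which shares storage but is no longer laid out contiguously:

```python
import torch

x = torch.arange(6).reshape(2, 3)
t = x.t()                 # transpose is a view; memory order is no longer row-major
print(t.is_contiguous())  # False
c = t.contiguous()        # copies into contiguous memory, same values
print(c.is_contiguous())  # True
```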
copysign(other) → Tensor
See torch.copysign() | torch.tensors#torch.Tensor.copysign |
copysign_(other) → Tensor
In-place version of copysign() | torch.tensors#torch.Tensor.copysign_ |
copy_(src, non_blocking=False) → Tensor
Copies the elements from src into self tensor and returns self. The src tensor must be broadcastable with the self tensor. It may be of a different data type or reside on a different device. Parameters
src (Tensor) – the source tensor to copy from
non_blocking (bool) – if ... | torch.tensors#torch.Tensor.copy_ |
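A minimal sketch of the broadcasting and dtype-conversion behavior described above:

```python
import torch

dst = torch.zeros(2, 3)
src = torch.tensor([1, 2, 3])  # integer dtype; broadcast across dim 0
dst.copy_(src)                 # values are cast to dst's float dtype
print(dst)  # both rows become [1., 2., 3.]
```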
cos() → Tensor
See torch.cos() | torch.tensors#torch.Tensor.cos |
cosh() → Tensor
See torch.cosh() | torch.tensors#torch.Tensor.cosh |