doc_content | doc_id |
|---|---|
property variance | torch.distributions#torch.distributions.poisson.Poisson.variance |
class torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli(temperature, probs=None, logits=None, validate_args=None) [source]
Bases: torch.distributions.distribution.Distribution Creates a LogitRelaxedBernoulli distribution parameterized by probs or logits (but not both), which is the logit of a RelaxedBernoul... | torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli |
arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)} | torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.arg_constraints |
expand(batch_shape, _instance=None) [source] | torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.expand |
logits [source] | torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.logits |
log_prob(value) [source] | torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.log_prob |
property param_shape | torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.param_shape |
probs [source] | torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.probs |
rsample(sample_shape=torch.Size([])) [source] | torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.rsample |
support = Real() | torch.distributions#torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli.support |
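A minimal usage sketch for the entry above (the temperature and probs values are made up for illustration): a LogitRelaxedBernoulli sample lives in logit space, so its support is all of R, and passing it through a sigmoid yields a RelaxedBernoulli sample in (0, 1).

```python
import torch
from torch.distributions.relaxed_bernoulli import LogitRelaxedBernoulli

temperature = torch.tensor([0.5])   # illustrative values
probs = torch.tensor([0.3, 0.7])
d = LogitRelaxedBernoulli(temperature, probs=probs)
x = d.rsample()                     # unconstrained logit-space sample
print(d.log_prob(x))                # support = Real(), so any real x scores
print(torch.sigmoid(x))             # a RelaxedBernoulli sample in (0, 1)
```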
class torch.distributions.relaxed_bernoulli.RelaxedBernoulli(temperature, probs=None, logits=None, validate_args=None) [source]
Bases: torch.distributions.transformed_distribution.TransformedDistribution Creates a RelaxedBernoulli distribution, parametrized by temperature, and either probs or logits (but not both). T... | torch.distributions#torch.distributions.relaxed_bernoulli.RelaxedBernoulli |
arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)} | torch.distributions#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.arg_constraints |
expand(batch_shape, _instance=None) [source] | torch.distributions#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.expand |
has_rsample = True | torch.distributions#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.has_rsample |
property logits | torch.distributions#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.logits |
property probs | torch.distributions#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.probs |
support = Interval(lower_bound=0.0, upper_bound=1.0) | torch.distributions#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.support |
property temperature | torch.distributions#torch.distributions.relaxed_bernoulli.RelaxedBernoulli.temperature |
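Because has_rsample = True above, relaxed samples are differentiable in the distribution parameters. A sketch (with hypothetical logits) of gradient flow through rsample():

```python
import torch
from torch.distributions.relaxed_bernoulli import RelaxedBernoulli

logits = torch.zeros(4, requires_grad=True)       # hypothetical parameters
d = RelaxedBernoulli(torch.tensor(0.3), logits=logits)
y = d.rsample()          # values in (0, 1), matching the support above
y.sum().backward()       # gradients flow back to logits
print(logits.grad)
```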
class torch.distributions.relaxed_categorical.RelaxedOneHotCategorical(temperature, probs=None, logits=None, validate_args=None) [source]
Bases: torch.distributions.transformed_distribution.TransformedDistribution Creates a RelaxedOneHotCategorical distribution parametrized by temperature, and either probs or logits.... | torch.distributions#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical |
arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()} | torch.distributions#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.arg_constraints |
expand(batch_shape, _instance=None) [source] | torch.distributions#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.expand |
has_rsample = True | torch.distributions#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.has_rsample |
property logits | torch.distributions#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.logits |
property probs | torch.distributions#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.probs |
support = Simplex() | torch.distributions#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.support |
property temperature | torch.distributions#torch.distributions.relaxed_categorical.RelaxedOneHotCategorical.temperature |
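A sketch of the relaxed categorical entry above (illustrative temperature and probs): samples are differentiable "soft" one-hot vectors lying on the simplex, matching support = Simplex().

```python
import torch
from torch.distributions.relaxed_categorical import RelaxedOneHotCategorical

d = RelaxedOneHotCategorical(torch.tensor(1.0),
                             probs=torch.tensor([0.1, 0.2, 0.7]))
y = d.rsample()          # differentiable "soft" one-hot vector
print(y, y.sum())        # entries in (0, 1), summing to 1
```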
class torch.distributions.studentT.StudentT(df, loc=0.0, scale=1.0, validate_args=None) [source]
Bases: torch.distributions.distribution.Distribution Creates a Student’s t-distribution parameterized by degrees of freedom df, mean loc, and scale scale. Example: >>> m = StudentT(torch.tensor([2.0]))
>>> m.sample() # Stu... | torch.distributions#torch.distributions.studentT.StudentT |
arg_constraints = {'df': GreaterThan(lower_bound=0.0), 'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)} | torch.distributions#torch.distributions.studentT.StudentT.arg_constraints |
entropy() [source] | torch.distributions#torch.distributions.studentT.StudentT.entropy |
expand(batch_shape, _instance=None) [source] | torch.distributions#torch.distributions.studentT.StudentT.expand |
has_rsample = True | torch.distributions#torch.distributions.studentT.StudentT.has_rsample |
log_prob(value) [source] | torch.distributions#torch.distributions.studentT.StudentT.log_prob |
property mean | torch.distributions#torch.distributions.studentT.StudentT.mean |
rsample(sample_shape=torch.Size([])) [source] | torch.distributions#torch.distributions.studentT.StudentT.rsample |
support = Real() | torch.distributions#torch.distributions.studentT.StudentT.support |
property variance | torch.distributions#torch.distributions.studentT.StudentT.variance |
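Extending the doc's own example (df = 2), a sketch exercising rsample() and the heavy-tail caveats on the mean and variance properties:

```python
import torch
from torch.distributions.studentT import StudentT

m = StudentT(torch.tensor([2.0]))   # df=2, loc=0, scale=1
x = m.rsample((5,))                 # reparameterized samples
print(m.log_prob(x))
print(m.mean)                       # defined only for df > 1
print(m.variance)                   # infinite for 1 < df <= 2
```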
class torch.distributions.transformed_distribution.TransformedDistribution(base_distribution, transforms, validate_args=None) [source]
Bases: torch.distributions.distribution.Distribution Extension of the Distribution class, which applies a sequence of Transforms to a base distribution. Let f be the composition of tr... | torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution |
arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {} | torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.arg_constraints |
cdf(value) [source]
Computes the cumulative distribution function by inverting the transform(s) and computing the score of the base distribution. | torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.cdf |
expand(batch_shape, _instance=None) [source] | torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.expand |
property has_rsample | torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.has_rsample |
icdf(value) [source]
Computes the inverse cumulative distribution function using transform(s) and computing the score of the base distribution. | torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.icdf |
log_prob(value) [source]
Scores the sample by inverting the transform(s) and computing the score using the score of the base distribution and the log abs det jacobian. | torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.log_prob |
rsample(sample_shape=torch.Size([])) [source]
Generates a sample_shape shaped reparameterized sample or sample_shape shaped batch of reparameterized samples if the distribution parameters are batched. Samples first from base distribution and applies transform() for every transform in the list. | torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.rsample |
sample(sample_shape=torch.Size([])) [source]
Generates a sample_shape shaped sample or sample_shape shaped batch of samples if the distribution parameters are batched. Samples first from base distribution and applies transform() for every transform in the list. | torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.sample |
property support | torch.distributions#torch.distributions.transformed_distribution.TransformedDistribution.support |
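A minimal sketch of the pattern the methods above describe: a log-normal built by pushing a Normal base through ExpTransform. log_prob inverts the transform and subtracts the log abs det jacobian; cdf inverts the transform and defers to the base cdf.

```python
import torch
from torch.distributions import Normal, TransformedDistribution
from torch.distributions.transforms import ExpTransform

base = Normal(torch.tensor(0.0), torch.tensor(1.0))
log_normal = TransformedDistribution(base, [ExpTransform()])
x = log_normal.rsample((3,))     # sample from base, then apply exp
print(log_normal.log_prob(x))    # base score minus log|dy/dx|
print(log_normal.cdf(x))         # invert the transform, then base.cdf
```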
class torch.distributions.transforms.AbsTransform(cache_size=0) [source]
Transform via the mapping y = |x|. | torch.distributions#torch.distributions.transforms.AbsTransform |
class torch.distributions.transforms.AffineTransform(loc, scale, event_dim=0, cache_size=0) [source]
Transform via the pointwise affine mapping y = \text{loc} + \text{scale} \times x. Parameters
loc (Tensor or float) – Location parameter.
scale (Tensor or float) – Scale parameter.
event_dim (int) ... | torch.distributions#torch.distributions.transforms.AffineTransform |
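A sketch of the affine mapping (illustrative loc and scale), checking the inverse and the per-element log abs det jacobian, which is log|scale|:

```python
import torch
from torch.distributions.transforms import AffineTransform

t = AffineTransform(loc=1.0, scale=2.0)   # illustrative values
x = torch.randn(3)
y = t(x)                                  # y = loc + scale * x
assert torch.allclose(t.inv(y), x)
print(t.log_abs_det_jacobian(x, y))       # log|scale| = log 2 per element
```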
class torch.distributions.transforms.ComposeTransform(parts, cache_size=0) [source]
Composes multiple transforms in a chain. The transforms being composed are responsible for caching. Parameters
parts (list of Transform) – A list of transforms to compose.
cache_size (int) – Size of cache. If zero, no caching is ... | torch.distributions#torch.distributions.transforms.ComposeTransform |
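A short sketch of chaining: the parts are applied left to right, and the composite still exposes a single inverse.

```python
import torch
from torch.distributions.transforms import (AffineTransform, ComposeTransform,
                                            ExpTransform)

t = ComposeTransform([AffineTransform(1.0, 2.0), ExpTransform()])
x = torch.randn(4)
y = t(x)                                  # exp(2 * x + 1)
assert torch.allclose(t.inv(y), x, atol=1e-6)
```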
class torch.distributions.transforms.CorrCholeskyTransform(cache_size=0) [source]
Transforms an unconstrained real vector x with length D*(D-1)/2 into the Cholesky factor of a D-dimensional correlation matrix. This Cholesky factor is a lower triangular matrix with positive diagonals and unit Euclidean norm f... | torch.distributions#torch.distributions.transforms.CorrCholeskyTransform |
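A sketch with D = 3, so the unconstrained input has D*(D-1)/2 = 3 entries; L @ L.T is then a valid correlation matrix with unit diagonal:

```python
import torch
from torch.distributions.transforms import CorrCholeskyTransform

t = CorrCholeskyTransform()
x = torch.randn(3)            # D*(D-1)/2 entries for D = 3
L = t(x)                      # lower triangular, unit-norm rows
corr = L @ L.T
print(torch.diagonal(corr))   # ~tensor([1., 1., 1.])
```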
class torch.distributions.transforms.ExpTransform(cache_size=0) [source]
Transform via the mapping y = \exp(x). | torch.distributions#torch.distributions.transforms.ExpTransform |
class torch.distributions.transforms.IndependentTransform(base_transform, reinterpreted_batch_ndims, cache_size=0) [source]
Wrapper around another transform to treat reinterpreted_batch_ndims-many extra of the rightmost dimensions as dependent. This has no effect on the forward or backward transforms, but does sum o... | torch.distributions#torch.distributions.transforms.IndependentTransform |
class torch.distributions.transforms.LowerCholeskyTransform(cache_size=0) [source]
Transform from unconstrained matrices to lower-triangular matrices with nonnegative diagonal entries. This is useful for parameterizing positive definite matrices in terms of their Cholesky factorization. | torch.distributions#torch.distributions.transforms.LowerCholeskyTransform |
class torch.distributions.transforms.PowerTransform(exponent, cache_size=0) [source]
Transform via the mapping y = x^{\text{exponent}}. | torch.distributions#torch.distributions.transforms.PowerTransform |
class torch.distributions.transforms.ReshapeTransform(in_shape, out_shape, cache_size=0) [source]
Unit Jacobian transform to reshape the rightmost part of a tensor. Note that in_shape and out_shape must have the same number of elements, just as for torch.Tensor.reshape(). Parameters
in_shape (torch.Size) – The in... | torch.distributions#torch.distributions.transforms.ReshapeTransform |
class torch.distributions.transforms.SigmoidTransform(cache_size=0) [source]
Transform via the mapping y = \frac{1}{1 + \exp(-x)} and x = \text{logit}(y). | torch.distributions#torch.distributions.transforms.SigmoidTransform |
class torch.distributions.transforms.SoftmaxTransform(cache_size=0) [source]
Transform from unconstrained space to the simplex via y = \exp(x) then normalizing. This is not bijective and cannot be used for HMC. However this acts mostly coordinate-wise (except for the final normalization), and thus is approp... | torch.distributions#torch.distributions.transforms.SoftmaxTransform |
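A sketch demonstrating both claims above: outputs land on the simplex, and the map is not bijective, since shifting x by a constant yields the same y.

```python
import torch
from torch.distributions.transforms import SoftmaxTransform

t = SoftmaxTransform()
x = torch.randn(3)
y = t(x)                                  # exp(x), then normalize
print(y.sum())                            # 1.0: on the simplex
print(torch.allclose(t(x + 1.0), y))      # True: not bijective
```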
class torch.distributions.transforms.StackTransform(tseq, dim=0, cache_size=0) [source]
Transform functor that applies a sequence of transforms tseq component-wise to each submatrix at dim in a way compatible with torch.stack(). Example:
x = torch.stack([torch.range(1, 10), torch.range(1, 10)], dim=1) t = StackTra... | torch.distributions#torch.distributions.transforms.StackTransform |
class torch.distributions.transforms.StickBreakingTransform(cache_size=0) [source]
Transform from unconstrained space to the simplex of one additional dimension via a stick-breaking process. This transform arises as an iterated sigmoid transform in a stick-breaking construction of the Dirichlet distribution: the firs... | torch.distributions#torch.distributions.transforms.StickBreakingTransform |
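A sketch of the dimension bookkeeping in the entry above: a length-K unconstrained vector maps bijectively to a point on the simplex with K+1 entries.

```python
import torch
from torch.distributions.transforms import StickBreakingTransform

t = StickBreakingTransform()
x = torch.randn(4)                 # K = 4 unconstrained coordinates
y = t(x)                           # 5 entries on the simplex
print(y.shape, y.sum())            # torch.Size([5]), 1.0
print(torch.allclose(t.inv(y), x, atol=1e-5))   # bijective: recovers x
```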
class torch.distributions.transforms.TanhTransform(cache_size=0) [source]
Transform via the mapping y = \tanh(x). It is equivalent to `ComposeTransform([AffineTransform(0., 2.), SigmoidTransform(), AffineTransform(-1., 2.)])`. However this might not be numerically stable; thus it is recommended to use Tan... | torch.distributions#torch.distributions.transforms.TanhTransform |
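A sketch of the caching recommendation above: with cache_size=1 the inverse of the most recent forward call is served from the cache rather than recomputed through the numerically delicate atanh.

```python
import torch
from torch.distributions.transforms import TanhTransform

t = TanhTransform(cache_size=1)
x = torch.randn(5)
y = t(x)                             # values in (-1, 1)
print(torch.allclose(t.inv(y), x))   # True: exact, served from the cache
```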
class torch.distributions.transforms.Transform(cache_size=0) [source]
Abstract class for invertible transformations with computable log det jacobians. They are primarily used in torch.distributions.TransformedDistribution. Caching is useful for transforms whose inverses are either expensive or numerically unstable. N... | torch.distributions#torch.distributions.transforms.Transform |
forward_shape(shape) [source]
Infers the shape of the forward computation, given the input shape. Defaults to preserving shape. | torch.distributions#torch.distributions.transforms.Transform.forward_shape |
property inv
Returns the inverse Transform of this transform. This should satisfy t.inv.inv is t. | torch.distributions#torch.distributions.transforms.Transform.inv |
inverse_shape(shape) [source]
Infers the shapes of the inverse computation, given the output shape. Defaults to preserving shape. | torch.distributions#torch.distributions.transforms.Transform.inverse_shape |
log_abs_det_jacobian(x, y) [source]
Computes the log det jacobian log |dy/dx| given input and output. | torch.distributions#torch.distributions.transforms.Transform.log_abs_det_jacobian |
property sign
Returns the sign of the determinant of the Jacobian, if applicable. In general this only makes sense for bijective transforms. | torch.distributions#torch.distributions.transforms.Transform.sign |
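A minimal subclass sketch (a hypothetical cube transform, not part of the library) showing which pieces the abstract class expects: _call, _inverse, and log_abs_det_jacobian, plus the domain/codomain/bijective/sign attributes.

```python
import torch
from torch.distributions import constraints
from torch.distributions.transforms import Transform

class CubeTransform(Transform):
    """Hypothetical transform y = x**3, for illustration only."""
    domain = constraints.real
    codomain = constraints.real
    bijective = True
    sign = +1                        # monotonically increasing

    def _call(self, x):
        return x.pow(3)

    def _inverse(self, y):
        return y.sign() * y.abs().pow(1.0 / 3.0)

    def log_abs_det_jacobian(self, x, y):
        return torch.log(3 * x.pow(2))   # log|dy/dx| = log(3x^2)

t = CubeTransform()
x = torch.tensor([2.0])
print(t(x), t.inv(t(x)))             # tensor([8.]), tensor([2.])
```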
class torch.distributions.uniform.Uniform(low, high, validate_args=None) [source]
Bases: torch.distributions.distribution.Distribution Generates uniformly distributed random samples from the half-open interval [low, high). Example: >>> m = Uniform(torch.tensor([0.0]), torch.tensor([5.0]))
>>> m.sample() # uniformly ... | torch.distributions#torch.distributions.uniform.Uniform |
arg_constraints = {'high': Dependent(), 'low': Dependent()} | torch.distributions#torch.distributions.uniform.Uniform.arg_constraints |
cdf(value) [source] | torch.distributions#torch.distributions.uniform.Uniform.cdf |
entropy() [source] | torch.distributions#torch.distributions.uniform.Uniform.entropy |
expand(batch_shape, _instance=None) [source] | torch.distributions#torch.distributions.uniform.Uniform.expand |
has_rsample = True | torch.distributions#torch.distributions.uniform.Uniform.has_rsample |
icdf(value) [source] | torch.distributions#torch.distributions.uniform.Uniform.icdf |
log_prob(value) [source] | torch.distributions#torch.distributions.uniform.Uniform.log_prob |
property mean | torch.distributions#torch.distributions.uniform.Uniform.mean |
rsample(sample_shape=torch.Size([])) [source] | torch.distributions#torch.distributions.uniform.Uniform.rsample |
property stddev | torch.distributions#torch.distributions.uniform.Uniform.stddev |
property support | torch.distributions#torch.distributions.uniform.Uniform.support |
property variance | torch.distributions#torch.distributions.uniform.Uniform.variance |
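Continuing the doc's own example, a sketch exercising the methods tabulated above; note log_prob is -log(high - low) everywhere inside the support.

```python
import torch
from torch.distributions.uniform import Uniform

m = Uniform(torch.tensor([0.0]), torch.tensor([5.0]))
x = m.rsample((3,))        # reparameterized: low + (high - low) * U(0,1)
print(m.log_prob(x))       # -log(5.0) = -1.6094...
print(m.icdf(m.cdf(x)))    # round-trips back to x
print(m.mean, m.stddev)    # 2.5, (high - low) / sqrt(12)
```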
class torch.distributions.von_mises.VonMises(loc, concentration, validate_args=None) [source]
Bases: torch.distributions.distribution.Distribution A circular von Mises distribution. This implementation uses polar coordinates. The loc and value args can be any real number (to facilitate unconstrained optimization), bu... | torch.distributions#torch.distributions.von_mises.VonMises |
arg_constraints = {'concentration': GreaterThan(lower_bound=0.0), 'loc': Real()} | torch.distributions#torch.distributions.von_mises.VonMises.arg_constraints |
expand(batch_shape) [source] | torch.distributions#torch.distributions.von_mises.VonMises.expand |
has_rsample = False | torch.distributions#torch.distributions.von_mises.VonMises.has_rsample |
log_prob(value) [source] | torch.distributions#torch.distributions.von_mises.VonMises.log_prob |
property mean
The provided mean is the circular one. | torch.distributions#torch.distributions.von_mises.VonMises.mean |
sample(sample_shape=torch.Size([])) [source]
The sampling algorithm for the von Mises distribution is based on the following paper: Best, D. J., and Nicholas I. Fisher. “Efficient simulation of the von Mises distribution.” Applied Statistics (1979): 152-157. | torch.distributions#torch.distributions.von_mises.VonMises.sample |
support = Real() | torch.distributions#torch.distributions.von_mises.VonMises.support |
variance [source]
The provided variance is the circular one. | torch.distributions#torch.distributions.von_mises.VonMises.variance |
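A sketch of the entry above (illustrative loc and concentration): since has_rsample = False, only sample() — the Best and Fisher (1979) rejection sampler — is available, and the mean and variance properties are the circular versions.

```python
import torch
from torch.distributions.von_mises import VonMises

m = VonMises(torch.tensor([0.0]), torch.tensor([4.0]))  # loc, concentration
x = m.sample((1000,))        # rejection sampling; not reparameterized
print(m.mean, m.variance)    # circular mean and circular variance
print(m.log_prob(x).shape)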
class torch.distributions.weibull.Weibull(scale, concentration, validate_args=None) [source]
Bases: torch.distributions.transformed_distribution.TransformedDistribution Samples from a two-parameter Weibull distribution. Example >>> m = Weibull(torch.tensor([1.0]), torch.tensor([1.0]))
>>> m.sample() # sample from a ... | torch.distributions#torch.distributions.weibull.Weibull |
arg_constraints: Dict[str, torch.distributions.constraints.Constraint] = {'concentration': GreaterThan(lower_bound=0.0), 'scale': GreaterThan(lower_bound=0.0)} | torch.distributions#torch.distributions.weibull.Weibull.arg_constraints |
entropy() [source] | torch.distributions#torch.distributions.weibull.Weibull.entropy |
expand(batch_shape, _instance=None) [source] | torch.distributions#torch.distributions.weibull.Weibull.expand |
property mean | torch.distributions#torch.distributions.weibull.Weibull.mean |
support = GreaterThan(lower_bound=0.0) | torch.distributions#torch.distributions.weibull.Weibull.support |
property variance | torch.distributions#torch.distributions.weibull.Weibull.variance |
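Continuing the doc's example, a sketch; the mean is scale * Γ(1 + 1/concentration), so with both parameters 1 it equals 1.

```python
import torch
from torch.distributions.weibull import Weibull

m = Weibull(torch.tensor([1.0]), torch.tensor([1.0]))  # scale, concentration
x = m.sample()
print(x)          # positive, matching support = GreaterThan(lower_bound=0.0)
print(m.mean)     # scale * gamma(1 + 1/concentration) = 1.0
```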
torch.div(input, other, *, rounding_mode=None, out=None) → Tensor
Divides each element of the input input by the corresponding element of other. \text{out}_i = \frac{\text{input}_i}{\text{other}_i}
Note By default, this performs a “true” division like Python 3. See the rounding_mode argument for ... | torch.generated.torch.div#torch.div |
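A sketch of the rounding_mode keyword: None gives true division, 'trunc' rounds the quotient toward zero, and 'floor' rounds toward negative infinity.

```python
import torch

a = torch.tensor([7.0, -7.0])
b = torch.tensor([2.0, 2.0])
print(torch.div(a, b))                         # tensor([ 3.5000, -3.5000])
print(torch.div(a, b, rounding_mode='trunc'))  # tensor([ 3., -3.])
print(torch.div(a, b, rounding_mode='floor'))  # tensor([ 3., -4.])
```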
torch.divide(input, other, *, rounding_mode=None, out=None) → Tensor
Alias for torch.div(). | torch.generated.torch.divide#torch.divide |
torch.dot(input, other, *, out=None) → Tensor
Computes the dot product of two 1D tensors. Note Unlike NumPy’s dot, torch.dot intentionally only supports computing the dot product of two 1D tensors with the same number of elements. Parameters
input (Tensor) – first tensor in the dot product, must be 1D.
other (... | torch.generated.torch.dot#torch.dot |
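A one-line example of the 1D-only contract described above:

```python
import torch

print(torch.dot(torch.tensor([2.0, 3.0]), torch.tensor([2.0, 1.0])))  # tensor(7.)
# torch.dot(torch.ones(2, 2), torch.ones(2, 2)) would raise: 1D tensors only
```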
torch.dstack(tensors, *, out=None) → Tensor
Stack tensors in sequence depthwise (along third axis). This is equivalent to concatenation along the third axis after 1-D and 2-D tensors have been reshaped by torch.atleast_3d(). Parameters
tensors (sequence of Tensors) – sequence of tensors to concatenate Keyword Argu... | torch.generated.torch.dstack#torch.dstack |
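A shape sketch: 1-D inputs are first promoted by torch.atleast_3d(), then concatenated along the third axis.

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
print(torch.dstack((a, b)))         # tensor([[[1, 4], [2, 5], [3, 6]]])
print(torch.dstack((a, b)).shape)   # torch.Size([1, 3, 2])
```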
torch.eig(input, eigenvectors=False, *, out=None) -> (Tensor, Tensor)
Computes the eigenvalues and eigenvectors of a real square matrix. Note Since eigenvalues and eigenvectors might be complex, backward pass is supported only if eigenvalues and eigenvectors are all real valued. When input is on CUDA, torch.eig() ca... | torch.generated.torch.eig#torch.eig |
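A sketch against the torch.eig signature documented above; eigenvalues come back as an (n, 2) tensor of real and imaginary parts. Note that newer PyTorch releases have removed torch.eig in favor of torch.linalg.eig, so this only runs on versions that still ship it.

```python
import torch

A = torch.tensor([[2.0, 0.0],
                  [0.0, 3.0]])
e, v = torch.eig(A, eigenvectors=True)  # removed in newer releases; see torch.linalg.eig
print(e)   # rows (2., 0.) and (3., 0.): real eigenvalues, zero imaginary parts
```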