| doc_content | doc_id |
|---|---|
property rpc_timeout
A float indicating the timeout to use for all RPCs. If an RPC does not complete in this timeframe, it will complete with an exception indicating that it has timed out. | torch.rpc#torch.distributed.rpc.RpcBackendOptions.rpc_timeout |
torch.distributed.rpc.rpc_async(to, func, args=None, kwargs=None, timeout=-1.0) [source]
Make a non-blocking RPC call to run function func on worker to. RPC messages are sent and received in parallel to execution of Python code. This method is thread-safe. This method will immediately return a Future that can be awai... | torch.rpc#torch.distributed.rpc.rpc_async |
torch.distributed.rpc.rpc_sync(to, func, args=None, kwargs=None, timeout=-1.0) [source]
Make a blocking RPC call to run function func on worker to. RPC messages are sent and received in parallel to execution of Python code. This method is thread-safe. Parameters
to (str or WorkerInfo or int) – name/rank/WorkerInf... | torch.rpc#torch.distributed.rpc.rpc_sync |
class torch.distributed.rpc.RRef [source]
backward(self: torch._C._distributed_rpc.PyRRef, dist_autograd_ctx_id: int = -1, retain_graph: bool = False) → None Runs the backward pass using the RRef as the root of the backward pass. If dist_autograd_ctx_id is provided, we perform a distributed backward pass using th... | torch.rpc#torch.distributed.rpc.RRef |
backward(self: torch._C._distributed_rpc.PyRRef, dist_autograd_ctx_id: int = -1, retain_graph: bool = False) → None Runs the backward pass using the RRef as the root of the backward pass. If dist_autograd_ctx_id is provided, we perform a distributed backward pass using the provided ctx_id starting from the owner of t... | torch.rpc#torch.distributed.rpc.RRef.backward |
confirmed_by_owner(self: torch._C._distributed_rpc.PyRRef) → bool
Returns whether this RRef has been confirmed by the owner. OwnerRRef always returns true, while UserRRef only returns true when the owner knows about this UserRRef. | torch.rpc#torch.distributed.rpc.RRef.confirmed_by_owner |
is_owner(self: torch._C._distributed_rpc.PyRRef) → bool
Returns whether or not the current node is the owner of this RRef. | torch.rpc#torch.distributed.rpc.RRef.is_owner |
local_value(self: torch._C._distributed_rpc.PyRRef) → object
If the current node is the owner, returns a reference to the local value. Otherwise, throws an exception. | torch.rpc#torch.distributed.rpc.RRef.local_value |
owner(self: torch._C._distributed_rpc.PyRRef) → torch._C._distributed_rpc.WorkerInfo
Returns worker information of the node that owns this RRef. | torch.rpc#torch.distributed.rpc.RRef.owner |
owner_name(self: torch._C._distributed_rpc.PyRRef) → str
Returns worker name of the node that owns this RRef. | torch.rpc#torch.distributed.rpc.RRef.owner_name |
remote(self: torch._C._distributed_rpc.PyRRef, timeout: float = -1.0) → object
Create a helper proxy to easily launch a remote using the owner of the RRef as the destination to run functions on the object referenced by this RRef. More specifically, rref.remote().func_name(*args, **kwargs) is the same as the following... | torch.rpc#torch.distributed.rpc.RRef.remote |
rpc_async(self: torch._C._distributed_rpc.PyRRef, timeout: float = -1.0) → object
Create a helper proxy to easily launch an rpc_async using the owner of the RRef as the destination to run functions on the object referenced by this RRef. More specifically, rref.rpc_async().func_name(*args, **kwargs) is the same as the... | torch.rpc#torch.distributed.rpc.RRef.rpc_async |
rpc_sync(self: torch._C._distributed_rpc.PyRRef, timeout: float = -1.0) → object
Create a helper proxy to easily launch an rpc_sync using the owner of the RRef as the destination to run functions on the object referenced by this RRef. More specifically, rref.rpc_sync().func_name(*args, **kwargs) is the same as the fo... | torch.rpc#torch.distributed.rpc.RRef.rpc_sync |
to_here(self: torch._C._distributed_rpc.PyRRef, timeout: float = -1.0) → object
Blocking call that copies the value of the RRef from the owner to the local node and returns it. If the current node is the owner, returns a reference to the local value. Parameters
timeout (float, optional) – Timeout for to_here. If th... | torch.rpc#torch.distributed.rpc.RRef.to_here |
torch.distributed.rpc.shutdown(graceful=True) [source]
Perform a shutdown of the RPC agent, and then destroy the RPC agent. This stops the local agent from accepting outstanding requests, and shuts down the RPC framework by terminating all RPC threads. If graceful=True, this will block until all local and remote RPC ... | torch.rpc#torch.distributed.rpc.shutdown |
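The rpc_sync, rpc_async, remote, and shutdown APIs above can be sketched in a single process by letting one worker send RPCs to itself. This is a minimal sketch, not a full deployment: the worker name "worker0", the address, and the port are arbitrary placeholder choices.

```python
# Hypothetical single-process sketch: one worker ("worker0") RPCs itself.
import os
import torch
import torch.distributed.rpc as rpc

# Placeholder rendezvous settings for a one-worker group.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

rpc.init_rpc("worker0", rank=0, world_size=1)

# Blocking call: returns the result directly.
ret = rpc.rpc_sync("worker0", torch.add, args=(torch.ones(2), 3))

# Non-blocking call: returns a Future; wait() retrieves the value.
fut = rpc.rpc_async("worker0", torch.mul, args=(torch.ones(2), 2))
ret_async = fut.wait()

# remote() returns an RRef; to_here() copies the value back to the caller.
rref = rpc.remote("worker0", torch.sub, args=(torch.ones(2), 1))
ret_rref = rref.to_here()

print(ret)        # tensor([4., 4.])
print(ret_async)  # tensor([2., 2.])
print(ret_rref)   # tensor([0., 0.])

rpc.shutdown()  # graceful=True by default: blocks until all RPCs finish
```

With more than one worker, each process would call init_rpc with its own name and rank, and shutdown acts as a barrier across all of them.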
class torch.distributed.rpc.TensorPipeRpcBackendOptions(*, num_worker_threads=16, rpc_timeout=60.0, init_method='env://', device_maps=None, _transports=None, _channels=None) [source]
The backend options for TensorPipeAgent, derived from RpcBackendOptions. Parameters
num_worker_threads (int, optional) – The number... | torch.rpc#torch.distributed.rpc.TensorPipeRpcBackendOptions |
property device_maps
The device map locations. | torch.rpc#torch.distributed.rpc.TensorPipeRpcBackendOptions.device_maps |
property init_method
URL specifying how to initialize the process group. Default is env:// | torch.rpc#torch.distributed.rpc.TensorPipeRpcBackendOptions.init_method |
property num_worker_threads
The number of threads in the thread-pool used by TensorPipeAgent to execute requests. | torch.rpc#torch.distributed.rpc.TensorPipeRpcBackendOptions.num_worker_threads |
property rpc_timeout
A float indicating the timeout to use for all RPCs. If an RPC does not complete in this timeframe, it will complete with an exception indicating that it has timed out. | torch.rpc#torch.distributed.rpc.TensorPipeRpcBackendOptions.rpc_timeout |
set_device_map(to, device_map) [source]
Set device mapping between each RPC caller and callee pair. This function can be called multiple times to incrementally add device placement configurations. Parameters
worker_name (str) – Callee name.
device_map (Dict of python:int, str, or torch.device) – Device placement... | torch.rpc#torch.distributed.rpc.TensorPipeRpcBackendOptions.set_device_map |
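A short sketch of building TensorPipeRpcBackendOptions and calling set_device_map incrementally, as described above. The callee name "worker1" and the GPU indices are placeholder values; validation against actual devices happens later, at RPC initialization.

```python
# Sketch: configure a TensorPipe device map (placeholder worker/devices).
from torch.distributed.rpc import TensorPipeRpcBackendOptions

opts = TensorPipeRpcBackendOptions(num_worker_threads=8, rpc_timeout=30.0)

# Tensors sent from this caller's cuda:0 land on the callee's cuda:1.
opts.set_device_map("worker1", {0: 1})

# Repeated calls add placement configurations incrementally.
opts.set_device_map("worker1", {2: 3})

print(opts.rpc_timeout)   # 30.0
print(opts.device_maps)   # accumulated mappings for "worker1"
```

These options would then be passed to init_rpc via its rpc_backend_options parameter.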
class torch.distributed.rpc.WorkerInfo
A structure that encapsulates information of a worker in the system. Contains the name and ID of the worker. This class is not meant to be constructed directly, rather, an instance can be retrieved through get_worker_info() and the result can be passed in to functions such as rp... | torch.rpc#torch.distributed.rpc.WorkerInfo |
property id
Globally unique id to identify the worker. | torch.rpc#torch.distributed.rpc.WorkerInfo.id |
property name
The name of the worker. | torch.rpc#torch.distributed.rpc.WorkerInfo.name |
torch.distributed.scatter(tensor, scatter_list=None, src=0, group=None, async_op=False) [source]
Scatters a list of tensors to all processes in a group. Each process will receive exactly one tensor and store its data in the tensor argument. Parameters
tensor (Tensor) – Output tensor.
scatter_list (list[Tensor]) ... | torch.distributed#torch.distributed.scatter |
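The scatter semantics above can be sketched degenerately in a single process with world_size=1, where the source rank scatters a one-element list to itself. The gloo backend and the tcp init address/port are placeholder assumptions.

```python
# Minimal single-process sketch of scatter (gloo backend, placeholder port).
import torch
import torch.distributed as dist

dist.init_process_group(
    backend="gloo", init_method="tcp://127.0.0.1:29501", rank=0, world_size=1
)

out = torch.zeros(3)
# scatter_list is only meaningful on src; with one rank it has one entry.
dist.scatter(out, scatter_list=[torch.arange(3.0)], src=0)
print(out)  # tensor([0., 1., 2.])

dist.destroy_process_group()
```

In a real group, only rank src supplies scatter_list (with world_size entries) and every rank receives exactly one tensor into its out argument.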
torch.distributed.scatter_object_list(scatter_object_output_list, scatter_object_input_list, src=0, group=None) [source]
Scatters picklable objects in scatter_object_input_list to the whole group. Similar to scatter(), but Python objects can be passed in. On each rank, the scattered object will be stored as the first... | torch.distributed#torch.distributed.scatter_object_list |
torch.distributed.send(tensor, dst, group=None, tag=0) [source]
Sends a tensor synchronously. Parameters
tensor (Tensor) – Tensor to send.
dst (int) – Destination rank.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
tag (int, optional) – Tag to m... | torch.distributed#torch.distributed.send |
class torch.distributed.Store
Base class for all store implementations, such as the three provided by PyTorch distributed (TCPStore, FileStore, and HashStore). | torch.distributed#torch.distributed.Store |
torch.distributed.Store.add(self: torch._C._distributed_c10d.Store, arg0: str, arg1: int) → int
The first call to add for a given key creates a counter associated with key in the store, initialized to amount. Subsequent calls to add with the same key increment the counter by the specified amount. Calling add() with a... | torch.distributed#torch.distributed.Store.add |
torch.distributed.Store.delete_key(self: torch._C._distributed_c10d.Store, arg0: str) → bool
Deletes the key-value pair associated with key from the store. Returns true if the key was successfully deleted, and false if it was not. Warning The delete_key API is only supported by the TCPStore and HashStore. Using this... | torch.distributed#torch.distributed.Store.delete_key |
torch.distributed.Store.get(self: torch._C._distributed_c10d.Store, arg0: str) → bytes
Retrieves the value associated with the given key in the store. If key is not present in the store, the function will wait for timeout, which is defined when initializing the store, before throwing an exception. Parameters
key (s... | torch.distributed#torch.distributed.Store.get |
torch.distributed.Store.num_keys(self: torch._C._distributed_c10d.Store) → int
Returns the number of keys set in the store. Note that this number will typically be one greater than the number of keys added by set() and add() since one key is used to coordinate all the workers using the store. Warning When used with ... | torch.distributed#torch.distributed.Store.num_keys |
torch.distributed.Store.set(self: torch._C._distributed_c10d.Store, arg0: str, arg1: str) → None
Inserts the key-value pair into the store based on the supplied key and value. If key already exists in the store, it will overwrite the old value with the new supplied value. Parameters
key (str) – The key to be adde... | torch.distributed#torch.distributed.Store.set |
torch.distributed.Store.set_timeout(self: torch._C._distributed_c10d.Store, arg0: datetime.timedelta) → None
Sets the store’s default timeout. This timeout is used during initialization and in wait() and get(). Parameters
timeout (timedelta) – timeout to be set in the store. Example::
>>> import torch.distribut... | torch.distributed#torch.distributed.Store.set_timeout |
torch.distributed.Store.wait(*args, **kwargs)
Overloaded function. wait(self: torch._C._distributed_c10d.Store, arg0: List[str]) -> None Waits for each key in keys to be added to the store. If not all keys are set before the timeout (set during store initialization), then wait will throw an exception. Parameters
... | torch.distributed#torch.distributed.Store.wait |
class torch.distributed.TCPStore
A TCP-based distributed key-value store implementation. The server store holds the data, while the client stores can connect to the server store over TCP and perform actions such as set() to insert a key-value pair, get() to retrieve a key-value pair, etc. Parameters
host_name (st... | torch.distributed#torch.distributed.TCPStore |
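The Store API above (set, get, add) can be sketched with a single TCPStore acting as its own server (is_master=True). The address, port, and key names are arbitrary placeholders.

```python
# Single-process sketch: one TCPStore is both server and client.
from datetime import timedelta
from torch.distributed import TCPStore

store = TCPStore("127.0.0.1", 29502, 1, True, timeout=timedelta(seconds=30))

store.set("config", "ready")      # insert (or overwrite) a key-value pair
value = store.get("config")       # blocking read; returns bytes

counter = store.add("visits", 3)  # first add creates the counter at 3
counter = store.add("visits", 4)  # subsequent adds increment it -> 7

print(value)    # b'ready'
print(counter)  # 7
```

Other processes would connect as clients (is_master=False) to the same host and port and see the same keys.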
Probability distributions - torch.distributions The distributions package contains parameterizable probability distributions and sampling functions. This allows the construction of stochastic computation graphs and stochastic gradient estimators for optimization. This package generally follows the design of the TensorF... | torch.distributions |
class torch.distributions.bernoulli.Bernoulli(probs=None, logits=None, validate_args=None) [source]
Bases: torch.distributions.exp_family.ExponentialFamily Creates a Bernoulli distribution parameterized by probs or logits (but not both). Samples are binary (0 or 1). They take the value 1 with probability p and 0 with... | torch.distributions#torch.distributions.bernoulli.Bernoulli |
arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)} | torch.distributions#torch.distributions.bernoulli.Bernoulli.arg_constraints |
entropy() [source] | torch.distributions#torch.distributions.bernoulli.Bernoulli.entropy |
enumerate_support(expand=True) [source] | torch.distributions#torch.distributions.bernoulli.Bernoulli.enumerate_support |
expand(batch_shape, _instance=None) [source] | torch.distributions#torch.distributions.bernoulli.Bernoulli.expand |
has_enumerate_support = True | torch.distributions#torch.distributions.bernoulli.Bernoulli.has_enumerate_support |
logits [source] | torch.distributions#torch.distributions.bernoulli.Bernoulli.logits |
log_prob(value) [source] | torch.distributions#torch.distributions.bernoulli.Bernoulli.log_prob |
property mean | torch.distributions#torch.distributions.bernoulli.Bernoulli.mean |
property param_shape | torch.distributions#torch.distributions.bernoulli.Bernoulli.param_shape |
probs [source] | torch.distributions#torch.distributions.bernoulli.Bernoulli.probs |
sample(sample_shape=torch.Size([])) [source] | torch.distributions#torch.distributions.bernoulli.Bernoulli.sample |
support = Boolean() | torch.distributions#torch.distributions.bernoulli.Bernoulli.support |
property variance | torch.distributions#torch.distributions.bernoulli.Bernoulli.variance |
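A brief sketch of the Bernoulli API listed above: samples are binary, log_prob evaluates log P(X = value), and the entropy of a fair coin is log 2.

```python
# Sketch of the Bernoulli distribution API (probability 0.3 is arbitrary).
import math
import torch
from torch.distributions.bernoulli import Bernoulli

m = Bernoulli(torch.tensor([0.3]))

sample = m.sample()                   # 0. or 1., with P(1) = 0.3
lp = m.log_prob(torch.tensor([1.0]))  # log P(X = 1) = log(0.3)

fair = Bernoulli(torch.tensor([0.5]))
ent = fair.entropy()                  # log(2) for a fair coin

print(lp.item())   # ≈ log(0.3)
print(ent.item())  # ≈ log(2)
```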
class torch.distributions.beta.Beta(concentration1, concentration0, validate_args=None) [source]
Bases: torch.distributions.exp_family.ExponentialFamily Beta distribution parameterized by concentration1 and concentration0. Example: >>> m = Beta(torch.tensor([0.5]), torch.tensor([0.5]))
>>> m.sample() # Beta distribu... | torch.distributions#torch.distributions.beta.Beta |
arg_constraints = {'concentration0': GreaterThan(lower_bound=0.0), 'concentration1': GreaterThan(lower_bound=0.0)} | torch.distributions#torch.distributions.beta.Beta.arg_constraints |
property concentration0 | torch.distributions#torch.distributions.beta.Beta.concentration0 |
property concentration1 | torch.distributions#torch.distributions.beta.Beta.concentration1 |
entropy() [source] | torch.distributions#torch.distributions.beta.Beta.entropy |
expand(batch_shape, _instance=None) [source] | torch.distributions#torch.distributions.beta.Beta.expand |
has_rsample = True | torch.distributions#torch.distributions.beta.Beta.has_rsample |
log_prob(value) [source] | torch.distributions#torch.distributions.beta.Beta.log_prob |
property mean | torch.distributions#torch.distributions.beta.Beta.mean |
rsample(sample_shape=()) [source] | torch.distributions#torch.distributions.beta.Beta.rsample |
support = Interval(lower_bound=0.0, upper_bound=1.0) | torch.distributions#torch.distributions.beta.Beta.support |
property variance | torch.distributions#torch.distributions.beta.Beta.variance |
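Since Beta sets has_rsample = True, its samples are reparameterized and gradients flow through rsample() back to the concentration parameters. A sketch with arbitrary concentrations of 0.5:

```python
# Sketch: Beta's reparameterized sampling supports backpropagation.
import torch
from torch.distributions.beta import Beta

c1 = torch.tensor([0.5], requires_grad=True)
c0 = torch.tensor([0.5])
m = Beta(c1, c0)

print(m.mean)  # a / (a + b) = tensor([0.5000])

s = m.rsample()       # differentiable sample in (0, 1)
s.sum().backward()    # gradient reaches concentration1
print(c1.grad is not None)  # True
```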
class torch.distributions.binomial.Binomial(total_count=1, probs=None, logits=None, validate_args=None) [source]
Bases: torch.distributions.distribution.Distribution Creates a Binomial distribution parameterized by total_count and either probs or logits (but not both). total_count must be broadcastable with probs/log... | torch.distributions#torch.distributions.binomial.Binomial |
arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0), 'total_count': IntegerGreaterThan(lower_bound=0)} | torch.distributions#torch.distributions.binomial.Binomial.arg_constraints |
enumerate_support(expand=True) [source] | torch.distributions#torch.distributions.binomial.Binomial.enumerate_support |
expand(batch_shape, _instance=None) [source] | torch.distributions#torch.distributions.binomial.Binomial.expand |
has_enumerate_support = True | torch.distributions#torch.distributions.binomial.Binomial.has_enumerate_support |
logits [source] | torch.distributions#torch.distributions.binomial.Binomial.logits |
log_prob(value) [source] | torch.distributions#torch.distributions.binomial.Binomial.log_prob |
property mean | torch.distributions#torch.distributions.binomial.Binomial.mean |
property param_shape | torch.distributions#torch.distributions.binomial.Binomial.param_shape |
probs [source] | torch.distributions#torch.distributions.binomial.Binomial.probs |
sample(sample_shape=torch.Size([])) [source] | torch.distributions#torch.distributions.binomial.Binomial.sample |
property support | torch.distributions#torch.distributions.binomial.Binomial.support |
property variance | torch.distributions#torch.distributions.binomial.Binomial.variance |
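For Binomial, the mean and variance follow the usual closed forms, mean = total_count * p and variance = total_count * p * (1 - p). A sketch with arbitrary parameters (100 trials, p = 0.25):

```python
# Sketch of the Binomial distribution API.
import torch
from torch.distributions.binomial import Binomial

m = Binomial(total_count=100, probs=torch.tensor([0.25]))

print(m.mean)      # 100 * 0.25 = tensor([25.])
print(m.variance)  # 100 * 0.25 * 0.75 = tensor([18.7500])

counts = m.sample()  # successes out of 100 trials, in [0, 100]
```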
class torch.distributions.categorical.Categorical(probs=None, logits=None, validate_args=None) [source]
Bases: torch.distributions.distribution.Distribution Creates a categorical distribution parameterized by either probs or logits (but not both). Note It is equivalent to the distribution that torch.multinomial() sa... | torch.distributions#torch.distributions.categorical.Categorical |
arg_constraints = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()} | torch.distributions#torch.distributions.categorical.Categorical.arg_constraints |
entropy() [source] | torch.distributions#torch.distributions.categorical.Categorical.entropy |
enumerate_support(expand=True) [source] | torch.distributions#torch.distributions.categorical.Categorical.enumerate_support |
expand(batch_shape, _instance=None) [source] | torch.distributions#torch.distributions.categorical.Categorical.expand |
has_enumerate_support = True | torch.distributions#torch.distributions.categorical.Categorical.has_enumerate_support |
logits [source] | torch.distributions#torch.distributions.categorical.Categorical.logits |
log_prob(value) [source] | torch.distributions#torch.distributions.categorical.Categorical.log_prob |
property mean | torch.distributions#torch.distributions.categorical.Categorical.mean |
property param_shape | torch.distributions#torch.distributions.categorical.Categorical.param_shape |
probs [source] | torch.distributions#torch.distributions.categorical.Categorical.probs |
sample(sample_shape=torch.Size([])) [source] | torch.distributions#torch.distributions.categorical.Categorical.sample |
property support | torch.distributions#torch.distributions.categorical.Categorical.support |
property variance | torch.distributions#torch.distributions.categorical.Categorical.variance |
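A short sketch of the Categorical API: either probs or logits parameterizes the distribution, samples are class indices, and log_prob evaluates the log probability of an index. The two-class probabilities here are arbitrary.

```python
# Sketch of the Categorical distribution API.
import math
import torch
from torch.distributions.categorical import Categorical

m = Categorical(probs=torch.tensor([0.2, 0.8]))

idx = m.sample()                  # class index: 0 or 1
lp = m.log_prob(torch.tensor(1))  # log P(X = 1) = log(0.8)

print(lp.item())  # ≈ log(0.8)
```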
class torch.distributions.cauchy.Cauchy(loc, scale, validate_args=None) [source]
Bases: torch.distributions.distribution.Distribution Samples from a Cauchy (Lorentz) distribution. The distribution of the ratio of independent normally distributed random variables with means 0 follows a Cauchy distribution. Example: >>... | torch.distributions#torch.distributions.cauchy.Cauchy |
arg_constraints = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)} | torch.distributions#torch.distributions.cauchy.Cauchy.arg_constraints |
cdf(value) [source] | torch.distributions#torch.distributions.cauchy.Cauchy.cdf |
entropy() [source] | torch.distributions#torch.distributions.cauchy.Cauchy.entropy |
expand(batch_shape, _instance=None) [source] | torch.distributions#torch.distributions.cauchy.Cauchy.expand |
has_rsample = True | torch.distributions#torch.distributions.cauchy.Cauchy.has_rsample |
icdf(value) [source] | torch.distributions#torch.distributions.cauchy.Cauchy.icdf |
log_prob(value) [source] | torch.distributions#torch.distributions.cauchy.Cauchy.log_prob |
property mean | torch.distributions#torch.distributions.cauchy.Cauchy.mean |
rsample(sample_shape=torch.Size([])) [source] | torch.distributions#torch.distributions.cauchy.Cauchy.rsample |
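A sketch of the Cauchy API above: the distribution is heavy-tailed, so its mean is undefined (reported as NaN), while the cdf and its inverse icdf behave as expected; by symmetry the cdf at loc is exactly 0.5. The standard parameters loc=0, scale=1 are an arbitrary choice.

```python
# Sketch of the Cauchy distribution API (standard Cauchy).
import torch
from torch.distributions.cauchy import Cauchy

m = Cauchy(loc=torch.tensor([0.0]), scale=torch.tensor([1.0]))

print(m.mean)                      # tensor([nan]) — the mean is undefined
print(m.cdf(torch.tensor([0.0])))  # tensor([0.5000]) by symmetry
print(m.icdf(torch.tensor([0.5]))) # tensor([0.]) — inverse of the cdf
```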