doc_content | doc_id
|---|---|
torch.cuda.memory_cached(device=None) [source]
Deprecated; see memory_reserved(). | torch.cuda#torch.cuda.memory_cached |
torch.cuda.memory_reserved(device=None) [source]
Returns the current GPU memory managed by the caching allocator in bytes for a given device. Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default). Note... | torch.cuda#torch.cuda.memory_reserved
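A minimal usage sketch (not part of the original entry), assuming a CUDA-capable build; it checks that the caching allocator reserves at least as much memory as one live tensor occupies:
>>> import torch
>>> x = torch.empty(1024, 1024, device='cuda')  # forces the caching allocator to reserve a block
>>> torch.cuda.memory_reserved() >= x.numel() * x.element_size()
True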
torch.cuda.memory_snapshot() [source]
Returns a snapshot of the CUDA memory allocator state across all devices. Interpreting the output of this function requires familiarity with the memory allocator internals. Note See Memory management for more details about GPU memory management. | torch.cuda#torch.cuda.memory_snapshot |
torch.cuda.memory_stats(device=None) [source]
Returns a dictionary of CUDA memory allocator statistics for a given device. The return value of this function is a dictionary of statistics, each of which is a non-negative integer. Core statistics:
"allocated.{all,large_pool,small_pool}.{current,peak,allocated,freed}"... | torch.cuda#torch.cuda.memory_stats |
torch.cuda.memory_summary(device=None, abbreviated=False) [source]
Returns a human-readable printout of the current memory allocator statistics for a given device. This can be useful to display periodically during training, or when handling out-of-memory exceptions. Parameters
device (torch.device or int, optiona... | torch.cuda#torch.cuda.memory_summary |
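A sketch of periodic use during training (illustrative; the printed table is long and its exact layout is version-dependent):
>>> import torch
>>> print(torch.cuda.memory_summary(device=0, abbreviated=True))  # one table of allocator stats for device 0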
torch.cuda.nvtx.mark(msg) [source]
Describe an instantaneous event that occurred at some point. Parameters
msg (string) – ASCII message to associate with the event. | torch.cuda#torch.cuda.nvtx.mark
torch.cuda.nvtx.range_pop() [source]
Pops a range off of a stack of nested range spans. Returns the zero-based depth of the range that is ended. | torch.cuda#torch.cuda.nvtx.range_pop |
torch.cuda.nvtx.range_push(msg) [source]
Pushes a range onto a stack of nested range spans. Returns the zero-based depth of the range that is started. Parameters
msg (string) – ASCII message to associate with the range | torch.cuda#torch.cuda.nvtx.range_push
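A hedged sketch of bracketing a region with an NVTX range; model and batch are hypothetical stand-ins, and the range only shows up under an NVTX-aware profiler such as Nsight Systems:
>>> import torch
>>> torch.cuda.nvtx.range_push("forward")   # opens a range at depth 0
>>> out = model(batch)                      # hypothetical forward pass
>>> torch.cuda.nvtx.range_pop()             # closes "forward" and returns its depth, 0
0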
torch.cuda.reset_max_memory_allocated(device=None) [source]
Resets the starting point in tracking maximum GPU memory occupied by tensors for a given device. See max_memory_allocated() for details. Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by... | torch.cuda#torch.cuda.reset_max_memory_allocated
torch.cuda.reset_max_memory_cached(device=None) [source]
Resets the starting point in tracking maximum GPU memory managed by the caching allocator for a given device. See max_memory_cached() for details. Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, g... | torch.cuda#torch.cuda.reset_max_memory_cached
torch.cuda.seed() [source]
Sets the seed for generating random numbers to a random number for the current GPU. It's safe to call this function if CUDA is not available; in that case, it is silently ignored. Warning If you are working with a multi-GPU model, this function will only initialize the seed on one GPU. To ... | torch.cuda#torch.cuda.seed
torch.cuda.seed_all() [source]
Sets the seed for generating random numbers to a random number on all GPUs. It's safe to call this function if CUDA is not available; in that case, it is silently ignored. | torch.cuda#torch.cuda.seed_all
torch.cuda.set_device(device) [source]
Sets the current device. Usage of this function is discouraged in favor of device. In most cases it's better to use the CUDA_VISIBLE_DEVICES environment variable. Parameters
device (torch.device or int) – selected device. This function is a no-op if this argument is negative. | torch.cuda#torch.cuda.set_device
torch.cuda.set_per_process_memory_fraction(fraction, device=None) [source]
Set memory fraction for a process. The fraction is used to limit the caching allocator to the allowed amount of memory on a CUDA device. The allowed value equals the total visible memory multiplied by the fraction. If trying to allocate more than the allowed valu... | torch.cuda#torch.cuda.set_per_process_memory_fraction
torch.cuda.set_rng_state(new_state, device='cuda') [source]
Sets the random number generator state of the specified GPU. Parameters
new_state (torch.ByteTensor) – The desired state
device (torch.device or int, optional) – The device to set the RNG state. Default: 'cuda' (i.e., torch.device('cuda'), the current C... | torch.cuda#torch.cuda.set_rng_state
torch.cuda.set_rng_state_all(new_states) [source]
Sets the random number generator state of all devices. Parameters
new_states (Iterable of torch.ByteTensor) – The desired state for each device | torch.cuda#torch.cuda.set_rng_state_all
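A sketch of a save/restore round trip using the matching getter, get_rng_state_all(), which pairs with set_rng_state_all():
>>> import torch
>>> states = torch.cuda.get_rng_state_all()   # snapshot every device's generator
>>> a = torch.rand(3, device='cuda')          # advances the generator
>>> torch.cuda.set_rng_state_all(states)      # rewind, so the next draw repeats
>>> torch.equal(a, torch.rand(3, device='cuda'))
True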
class torch.cuda.Stream [source]
Wrapper around a CUDA stream. A CUDA stream is a linear sequence of execution that belongs to a specific device, independent from other streams. See CUDA semantics for details. Parameters
device (torch.device or int, optional) – a device on which to allocate the stream. If device ... | torch.cuda#torch.cuda.Stream
torch.cuda.stream(stream) [source]
Context-manager that selects a given stream. All CUDA kernels queued within its context will be enqueued on a selected stream. Parameters
stream (Stream) – selected stream. This manager is a no-op if it's None. Note Streams are per-device. If the selected stream is not on the c... | torch.cuda#torch.cuda.stream
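A minimal sketch; a and b are assumed to be CUDA tensors created elsewhere:
>>> s = torch.cuda.Stream()
>>> with torch.cuda.stream(s):
...     c = torch.mm(a, b)    # kernel is enqueued on s, not the default stream
>>> torch.cuda.current_stream().wait_stream(s)  # later default-stream work waits for c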
query() [source]
Checks if all the work submitted has been completed. Returns
A boolean indicating if all kernels in this stream are completed. | torch.cuda#torch.cuda.Stream.query |
record_event(event=None) [source]
Records an event. Parameters
event (Event, optional) – event to record. If not given, a new one will be allocated. Returns
Recorded event. | torch.cuda#torch.cuda.Stream.record_event |
synchronize() [source]
Wait for all the kernels in this stream to complete. Note This is a wrapper around cudaStreamSynchronize(): see CUDA Stream documentation for more info. | torch.cuda#torch.cuda.Stream.synchronize |
wait_event(event) [source]
Makes all future work submitted to the stream wait for an event. Parameters
event (Event) – an event to wait for. Note This is a wrapper around cudaStreamWaitEvent(): see CUDA Stream documentation for more info. This function returns without waiting for event: only future operations ar... | torch.cuda#torch.cuda.Stream.wait_event
wait_stream(stream) [source]
Synchronizes with another stream. All future work submitted to this stream will wait until all kernels submitted to a given stream at the time of call complete. Parameters
stream (Stream) – a stream to synchronize. Note This function returns without waiting for currently enqueued ker... | torch.cuda#torch.cuda.Stream.wait_stream
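A common ordering sketch: a side stream waits for whatever the current stream has already queued before consuming its results (x is an assumed CUDA tensor):
>>> s = torch.cuda.Stream()
>>> s.wait_stream(torch.cuda.current_stream())  # order s after already-queued kernels
>>> with torch.cuda.stream(s):
...     y = x * 2   # safe: runs only after prior writes to x complete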
torch.cuda.synchronize(device=None) [source]
Waits for all kernels in all streams on a CUDA device to complete. Parameters
device (torch.device or int, optional) – device for which to synchronize. It uses the current device, given by current_device(), if device is None (default). | torch.cuda#torch.cuda.synchronize
torch.cummax(input, dim, *, out=None) -> (Tensor, LongTensor)
Returns a namedtuple (values, indices) where values is the cumulative maximum of elements of input in the dimension dim, and indices is the index location of each maximum value found in the dimension dim. y_i = \max(x_1, x_2, x_3, \dots, x_i)... | torch.generated.torch.cummax#torch.cummax
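A small worked example (the exact repr formatting may differ slightly across versions):
>>> a = torch.tensor([1., 3., 2., 5., 4.])
>>> torch.cummax(a, dim=0)
torch.return_types.cummax(
values=tensor([1., 3., 3., 5., 5.]),
indices=tensor([0, 1, 1, 3, 3]))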
torch.cummin(input, dim, *, out=None) -> (Tensor, LongTensor)
Returns a namedtuple (values, indices) where values is the cumulative minimum of elements of input in the dimension dim, and indices is the index location of each minimum value found in the dimension dim. y_i = \min(x_1, x_2, x_3, \dots, x_i)... | torch.generated.torch.cummin#torch.cummin
torch.cumprod(input, dim, *, dtype=None, out=None) β Tensor
Returns the cumulative product of elements of input in the dimension dim. For example, if input is a vector of size N, the result will also be a vector of size N, with elements y_i = x_1 \times x_2 \times x_3 \times \dots \times x_i
Parame... | torch.generated.torch.cumprod#torch.cumprod |
torch.cumsum(input, dim, *, dtype=None, out=None) β Tensor
Returns the cumulative sum of elements of input in the dimension dim. For example, if input is a vector of size N, the result will also be a vector of size N, with elements y_i = x_1 + x_2 + x_3 + \dots + x_i
Parameters
input (Tensor) ... | torch.generated.torch.cumsum#torch.cumsum |
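A small worked example of the running sum:
>>> a = torch.tensor([1., 2., 3., 4.])
>>> torch.cumsum(a, dim=0)
tensor([ 1.,  3.,  6., 10.])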
torch.deg2rad(input, *, out=None) β Tensor
Returns a new tensor with each of the elements of input converted from angles in degrees to radians. Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.tensor([[180.0, -180.0], [360.0, -360... | torch.generated.torch.deg2rad#torch.deg2rad
torch.dequantize(tensor) β Tensor
Returns an fp32 Tensor by dequantizing a quantized Tensor. Parameters
tensor (Tensor) – A quantized Tensor
torch.dequantize(tensors) β sequence of Tensors
Given a list of quantized Tensors, dequantize them and return a list of fp32 Tensors. Parameters
tensors (sequence of Ten... | torch.generated.torch.dequantize#torch.dequantize |
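A round-trip sketch using quantize_per_tensor() to build the quantized input:
>>> q = torch.quantize_per_tensor(torch.tensor([0.1, 0.2]), scale=0.1, zero_point=0, dtype=torch.quint8)
>>> torch.dequantize(q)
tensor([0.1000, 0.2000])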
torch.det(input) β Tensor
Calculates determinant of a square matrix or batches of square matrices. Note torch.det() is deprecated. Please use torch.linalg.det() instead. Note Backward through det internally uses SVD results when input is not invertible. In this case, double backward through det will be uns... | torch.generated.torch.det#torch.det
torch.diag(input, diagonal=0, *, out=None) β Tensor
If input is a vector (1-D tensor), then returns a 2-D square tensor with the elements of input as the diagonal. If input is a matrix (2-D tensor), then returns a 1-D tensor with the diagonal elements of input. The argument diagonal controls which diagonal to consi... | torch.generated.torch.diag#torch.diag |
torch.diagflat(input, offset=0) β Tensor
If input is a vector (1-D tensor), then returns a 2-D square tensor with the elements of input as the diagonal. If input is a tensor with more than one dimension, then returns a 2-D tensor with diagonal elements equal to a flattened input. The argument offset controls which ... | torch.generated.torch.diagflat#torch.diagflat |
torch.diagonal(input, offset=0, dim1=0, dim2=1) β Tensor
Returns a partial view of input with its diagonal elements with respect to dim1 and dim2 appended as a dimension at the end of the shape. The argument offset controls which diagonal to consider: If offset = 0, it is the main diagonal. If offset > 0, it is ... | torch.generated.torch.diagonal#torch.diagonal
torch.diag_embed(input, offset=0, dim1=-2, dim2=-1) β Tensor
Creates a tensor whose diagonals of certain 2D planes (specified by dim1 and dim2) are filled by input. To facilitate creating batched diagonal matrices, the 2D planes formed by the last two dimensions of the returned tensor are chosen by default. The argum... | torch.generated.torch.diag_embed#torch.diag_embed |
torch.diff(input, n=1, dim=-1, prepend=None, append=None) β Tensor
Computes the n-th forward difference along the given dimension. The first-order differences are given by out[i] = input[i + 1] - input[i]. Higher-order differences are calculated by using torch.diff() recursively. Note Only n = 1 is currently support... | torch.generated.torch.diff#torch.diff |
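A first-order example; each output element is input[i + 1] - input[i]:
>>> a = torch.tensor([1, 3, 2])
>>> torch.diff(a)
tensor([ 2, -1])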
torch.digamma(input, *, out=None) β Tensor
Computes the logarithmic derivative of the gamma function on input. \psi(x) = \frac{d}{dx} \ln\left(\Gamma\left(x\right)\right) = \frac{\Gamma'(x)}{\Gamma(x)}
Parameters
input (Tensor) – the tensor to compute the digamma function on Keyword A... | torch.generated.torch.digamma#torch.digamma
torch.dist(input, other, p=2) β Tensor
Returns the p-norm of (input - other). The shapes of input and other must be broadcastable. Parameters
input (Tensor) – the input tensor.
other (Tensor) – the right-hand-side input tensor
p (float, optional) – the norm to be computed Example: >>> x = torch.randn(4)
>>> x... | torch.generated.torch.dist#torch.dist |
Distributed communication package - torch.distributed Note Please refer to PyTorch Distributed Overview for a brief introduction to all features related to distributed training. Backends torch.distributed supports three built-in backends, each with different capabilities. The table below shows which functions are ava... | torch.distributed |
torch.distributed.algorithms.ddp_comm_hooks.default_hooks.allreduce_hook(process_group, bucket) [source]
This DDP communication hook just calls allreduce using GradBucket tensors. Once gradient tensors are aggregated across all workers, the callback chained via then takes the mean and returns the result. If a user registers this h... | torch.ddp_comm_hooks#torch.distributed.algorithms.ddp_comm_hooks.default_hooks.allreduce_hook
torch.distributed.algorithms.ddp_comm_hooks.default_hooks.fp16_compress_hook(process_group, bucket) [source]
This DDP communication hook implements a simple gradient compression approach that converts GradBucket tensors whose type is assumed to be torch.float32 to half-precision floating point format (torch.float16).... | torch.ddp_comm_hooks#torch.distributed.algorithms.ddp_comm_hooks.default_hooks.fp16_compress_hook |
torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.batched_powerSGD_hook(state, bucket) [source]
This DDP communication hook implements a simplified PowerSGD gradient compression algorithm described in the paper. This variant does not compress the gradients layer by layer, but instead compresses the flattened ... | torch.ddp_comm_hooks#torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.batched_powerSGD_hook |
class torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.PowerSGDState(process_group, matrix_approximation_rank=1, start_powerSGD_iter=10, use_error_feedback=True, warm_start=True, random_seed=0) [source]
Stores both the algorithm's hyperparameters and the internal state for all the gradients during the traini... | torch.ddp_comm_hooks#torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.PowerSGDState
torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.powerSGD_hook(state, bucket) [source]
This DDP communication hook implements PowerSGD gradient compression algorithm described in the paper. Once gradient tensors are aggregated across all workers, this hook applies compression as follows: Views the input fla... | torch.ddp_comm_hooks#torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.powerSGD_hook |
torch.distributed.all_gather(tensor_list, tensor, group=None, async_op=False) [source]
Gathers tensors from the whole group in a list. Complex tensors are supported. Parameters
tensor_list (list[Tensor]) – Output list. It should contain correctly-sized tensors to be used for output of the collective.
tensor (Ten... | torch.distributed#torch.distributed.all_gather
torch.distributed.all_gather_multigpu(output_tensor_lists, input_tensor_list, group=None, async_op=False) [source]
Gathers tensors from the whole group in a list. Each tensor in tensor_list should reside on a separate GPU. Only the nccl backend is currently supported. Tensors should only be GPU tensors. Complex tensors are ... | torch.distributed#torch.distributed.all_gather_multigpu
torch.distributed.all_gather_object(object_list, obj, group=None) [source]
Gathers picklable objects from the whole group into a list. Similar to all_gather(), but Python objects can be passed in. Note that the object must be picklable in order to be gathered. Parameters
object_list (list[Any]) – Output list. It ... | torch.distributed#torch.distributed.all_gather_object
torch.distributed.all_reduce(tensor, op=<ReduceOp.SUM: 0>, group=None, async_op=False) [source]
Reduces the tensor data across all machines in such a way that all get the final result. After the call tensor is going to be bitwise identical in all processes. Complex tensors are supported. Parameters
tensor (Tensor... | torch.distributed#torch.distributed.all_reduce |
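A hedged two-rank sketch; it assumes init_process_group() has already run in each process and that rank holds this process's rank (a hypothetical variable):
>>> import torch, torch.distributed as dist
>>> t = torch.ones(1) * (rank + 1)          # rank 0 holds 1., rank 1 holds 2.
>>> dist.all_reduce(t, op=dist.ReduceOp.SUM)
>>> t                                        # tensor([3.]) on both ranks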
torch.distributed.all_reduce_multigpu(tensor_list, op=<ReduceOp.SUM: 0>, group=None, async_op=False) [source]
Reduces the tensor data across all machines in such a way that all get the final result. This function reduces a number of tensors on every node, while each tensor resides on different GPUs. Therefore, the in... | torch.distributed#torch.distributed.all_reduce_multigpu |
torch.distributed.all_to_all(output_tensor_list, input_tensor_list, group=None, async_op=False) [source]
Each process scatters a list of input tensors to all processes in a group and returns a gathered list of tensors in the output list. Parameters
output_tensor_list (list[Tensor]) – List of tensors to be gathered one per... | torch.distributed#torch.distributed.all_to_all
torch.distributed.autograd.backward(context_id: int, roots: List[Tensor], retain_graph = False) β None
Kicks off the distributed backward pass using the provided roots. This currently implements the FAST mode algorithm which assumes all RPC messages sent in the same distributed autograd context across workers would b... | torch.rpc#torch.distributed.autograd.backward |
class torch.distributed.autograd.context [source]
Context object to wrap forward and backward passes when using distributed autograd. The context_id generated in the with statement is required to uniquely identify a distributed backward pass on all workers. Each worker stores metadata associated with this context_id,... | torch.rpc#torch.distributed.autograd.context |
torch.distributed.autograd.get_gradients(context_id: int) β Dict[Tensor, Tensor]
Retrieves a map from Tensor to the appropriate gradient for that Tensor accumulated in the provided context corresponding to the given context_id as part of the distributed autograd backward pass. Parameters
context_id (int) – The auto... | torch.rpc#torch.distributed.autograd.get_gradients
class torch.distributed.Backend [source]
An enum-like class of available backends: GLOO, NCCL, MPI, and other registered backends. The values of this class are lowercase strings, e.g., "gloo". They can be accessed as attributes, e.g., Backend.NCCL. This class can be directly called to parse the string, e.g., Backend(... | torch.distributed#torch.distributed.Backend |
torch.distributed.barrier(group=None, async_op=False, device_ids=None) [source]
Synchronizes all processes. This collective blocks processes until the whole group enters this function, if async_op is False, or until wait() is called on the async work handle. Parameters
group (ProcessGroup, optional) – The process group ... | torch.distributed#torch.distributed.barrier
torch.distributed.broadcast(tensor, src, group=None, async_op=False) [source]
Broadcasts the tensor to the whole group. tensor must have the same number of elements in all processes participating in the collective. Parameters
tensor (Tensor) – Data to be sent if src is the rank of current process, and tensor to b... | torch.distributed#torch.distributed.broadcast
torch.distributed.broadcast_multigpu(tensor_list, src, group=None, async_op=False, src_tensor=0) [source]
Broadcasts the tensor to the whole group with multiple GPU tensors per node. tensor must have the same number of elements in all the GPUs from all processes participating in the collective. Each tensor in the lis... | torch.distributed#torch.distributed.broadcast_multigpu
torch.distributed.broadcast_object_list(object_list, src=0, group=None) [source]
Broadcasts picklable objects in object_list to the whole group. Similar to broadcast(), but Python objects can be passed in. Note that all objects in object_list must be picklable in order to be broadcasted. Parameters
object_list (L... | torch.distributed#torch.distributed.broadcast_object_list |
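A sketch assuming an initialized two-rank group; every rank must pass a list of the same length:
>>> if dist.get_rank() == 0:
...     objects = ["foo", 42]
... else:
...     objects = [None, None]
>>> dist.broadcast_object_list(objects, src=0)
>>> objects    # ["foo", 42] on every rank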
class torch.distributed.FileStore
A store implementation that uses a file to store the underlying key-value pairs. Parameters
file_name (str) – path of the file in which to store the key-value pairs
world_size (int) – The total number of processes using the store Example::
>>> import torch.distributed as di... | torch.distributed#torch.distributed.FileStore |
torch.distributed.gather(tensor, gather_list=None, dst=0, group=None, async_op=False) [source]
Gathers a list of tensors in a single process. Parameters
tensor (Tensor) – Input tensor.
gather_list (list[Tensor], optional) – List of appropriately-sized tensors to use for gathered data (default is None, must be sp... | torch.distributed#torch.distributed.gather
torch.distributed.gather_object(obj, object_gather_list=None, dst=0, group=None) [source]
Gathers picklable objects from the whole group in a single process. Similar to gather(), but Python objects can be passed in. Note that the object must be picklable in order to be gathered. Parameters
obj (Any) – Input objec... | torch.distributed#torch.distributed.gather_object
torch.distributed.get_backend(group=None) [source]
Returns the backend of the given process group. Parameters
group (ProcessGroup, optional) – The process group to work on. The default is the general main process group. If another specific group is specified, the calling process must be part of group. Returns
The... | torch.distributed#torch.distributed.get_backend |
torch.distributed.get_rank(group=None) [source]
Returns the rank of the current process in the given process group. Rank is a unique identifier assigned to each process within a distributed process group. They are always consecutive integers ranging from 0 to world_size - 1. Parameters
group (ProcessGroup, optional) – The process group to work ... | torch.distributed#torch.distributed.get_rank
torch.distributed.get_world_size(group=None) [source]
Returns the number of processes in the current process group. Parameters
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Returns
The world size of the process group; -1, if not part of the group | torch.distributed#torch.distributed.get_world_size
class torch.distributed.HashStore
A thread-safe store implementation based on an underlying hashmap. This store can be used within the same process (for example, by other threads), but cannot be used across processes. Example::
>>> import torch.distributed as dist
>>> store = dist.HashStore()
>>> # store can be use... | torch.distributed#torch.distributed.HashStore |
torch.distributed.init_process_group(backend, init_method=None, timeout=datetime.timedelta(seconds=1800), world_size=-1, rank=-1, store=None, group_name='') [source]
Initializes the default distributed process group, and this will also initialize the distributed package. There are 2 main ways to initialize a process... | torch.distributed#torch.distributed.init_process_group |
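A minimal single-process sketch of the env:// path; it assumes MASTER_ADDR and MASTER_PORT are set in the environment:
>>> import torch.distributed as dist
>>> dist.init_process_group(backend="gloo", init_method="env://", rank=0, world_size=1)
>>> dist.is_initialized()
True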
torch.distributed.irecv(tensor, src=None, group=None, tag=0) [source]
Receives a tensor asynchronously. Parameters
tensor (Tensor) – Tensor to fill with received data.
src (int, optional) – Source rank. Will receive from any process if unspecified.
group (ProcessGroup, optional) – The process group to work on. ... | torch.distributed#torch.distributed.irecv
torch.distributed.isend(tensor, dst, group=None, tag=0) [source]
Sends a tensor asynchronously. Parameters
tensor (Tensor) – Tensor to send.
dst (int) – Destination rank.
group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used.
tag (int, optional) – Tag to... | torch.distributed#torch.distributed.isend
torch.distributed.is_available() [source]
Returns True if the distributed package is available. Otherwise, torch.distributed does not expose any other APIs. Currently, torch.distributed is available on Linux, MacOS and Windows. Set USE_DISTRIBUTED=1 to enable it when building PyTorch from source. Currently, the defau... | torch.distributed#torch.distributed.is_available |
torch.distributed.is_initialized() [source]
Checks whether the default process group has been initialized. | torch.distributed#torch.distributed.is_initialized
torch.distributed.is_mpi_available() [source]
Checks if the MPI backend is available. | torch.distributed#torch.distributed.is_mpi_available |
torch.distributed.is_nccl_available() [source]
Checks if the NCCL backend is available. | torch.distributed#torch.distributed.is_nccl_available |
torch.distributed.new_group(ranks=None, timeout=datetime.timedelta(seconds=1800), backend=None) [source]
Creates a new distributed group. This function requires that all processes in the main group (i.e. all processes that are part of the distributed job) enter this function, even if they are not going to be members ... | torch.distributed#torch.distributed.new_group |
class torch.distributed.optim.DistributedOptimizer(optimizer_class, params_rref, *args, **kwargs) [source]
DistributedOptimizer takes remote references to parameters scattered across workers and applies the given optimizer locally for each parameter. This class uses get_gradients() in order to retrieve the gradients ... | torch.rpc#torch.distributed.optim.DistributedOptimizer |
step(context_id) [source]
Performs a single optimization step. This will call torch.optim.Optimizer.step() on each worker containing parameters to be optimized, and will block until all workers return. The provided context_id will be used to retrieve the corresponding context that contains the gradients that should b... | torch.rpc#torch.distributed.optim.DistributedOptimizer.step |
class torch.distributed.pipeline.sync.Pipe(module, chunks=1, checkpoint='except_last', deferred_batch_norm=False) [source]
Wraps an arbitrary nn.Sequential module to train on using synchronous pipeline parallelism. If the module requires lots of memory and doesn't fit on a single GPU, pipeline parallelism is a useful... | torch.pipeline#torch.distributed.pipeline.sync.Pipe
forward(input) [source]
Processes a single input mini-batch through the pipe and returns an RRef pointing to the output. Pipe is a fairly transparent module wrapper. It doesn't modify the input and output signature of the underlying module. But there's a type restriction: input and output have to be a Tensor or a seque... | torch.pipeline#torch.distributed.pipeline.sync.Pipe.forward
class torch.distributed.pipeline.sync.skip.skippable.pop(name) [source]
The command to pop a skip tensor. def forward(self, input):
    skip = yield pop('name')
    return f(input) + skip
Parameters
name (str) – name of skip tensor Returns
the skip tensor previously stashed by another layer under the same name | torch.pipeline#torch.distributed.pipeline.sync.skip.skippable.pop
torch.distributed.pipeline.sync.skip.skippable.skippable(stash=(), pop=()) [source]
The decorator to define a nn.Module with skip connections. Decorated modules are called "skippable". This functionality works perfectly fine even when the module is not wrapped by Pipe. Each skip tensor is managed by its name. Before ... | torch.pipeline#torch.distributed.pipeline.sync.skip.skippable.skippable
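A hedged sketch of a matching stash/pop pair under the decorator; the import path is assumed from this module's layout:
>>> import torch.nn as nn
>>> from torch.distributed.pipeline.sync.skip import pop, skippable, stash
>>> @skippable(stash=['skip'])
... class Stash(nn.Module):
...     def forward(self, input):
...         yield stash('skip', input)   # hand the tensor to a later layer
...         return input
>>> @skippable(pop=['skip'])
... class Pop(nn.Module):
...     def forward(self, input):
...         skip = yield pop('skip')     # retrieve the stashed tensor
...         return input + skip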
class torch.distributed.pipeline.sync.skip.skippable.stash(name, tensor) [source]
The command to stash a skip tensor. def forward(self, input):
    yield stash('name', input)
    return f(input)
Parameters
name (str) – name of skip tensor
input (torch.Tensor or None) – tensor to pass to the skip connection | torch.pipeline#torch.distributed.pipeline.sync.skip.skippable.stash
torch.distributed.pipeline.sync.skip.skippable.verify_skippables(module) [source]
Verifies that the underlying skippable modules satisfy integrity: every skip tensor must have exactly one pair of stash and pop. If there are one or more unmatched pairs, it raises TypeError with detailed messages. Here are a few fai... | torch.pipeline#torch.distributed.pipeline.sync.skip.skippable.verify_skippables
class torch.distributed.PrefixStore
A wrapper around any of the 3 key-value stores (TCPStore, FileStore, and HashStore) that adds a prefix to each key inserted to the store. Parameters
prefix (str) – The prefix string that is prepended to each key before being inserted into the store.
store (torch.distributed.st... | torch.distributed#torch.distributed.PrefixStore
torch.distributed.recv(tensor, src=None, group=None, tag=0) [source]
Receives a tensor synchronously. Parameters
tensor (Tensor) – Tensor to fill with received data.
src (int, optional) – Source rank. Will receive from any process if unspecified.
group (ProcessGroup, optional) – The process group to work on. If... | torch.distributed#torch.distributed.recv
torch.distributed.reduce(tensor, dst, op=<ReduceOp.SUM: 0>, group=None, async_op=False) [source]
Reduces the tensor data across all machines. Only the process with rank dst is going to receive the final result. Parameters
tensor (Tensor) – Input and output of the collective. The function operates in-place.
dst (... | torch.distributed#torch.distributed.reduce
class torch.distributed.ReduceOp
An enum-like class for available reduction operations: SUM, PRODUCT, MIN, MAX, BAND, BOR, and BXOR. Note that BAND, BOR, and BXOR reductions are not available when using the NCCL backend. Additionally, MAX, MIN and PRODUCT are not supported for complex tensors. The values of this clas... | torch.distributed#torch.distributed.ReduceOp |
torch.distributed.reduce_multigpu(tensor_list, dst, op=<ReduceOp.SUM: 0>, group=None, async_op=False, dst_tensor=0) [source]
Reduces the tensor data on multiple GPUs across all machines. Each tensor in tensor_list should reside on a separate GPU Only the GPU of tensor_list[dst_tensor] on the process with rank dst is ... | torch.distributed#torch.distributed.reduce_multigpu |
class torch.distributed.reduce_op
Deprecated enum-like class for reduction operations: SUM, PRODUCT, MIN, and MAX. Use ReduceOp instead. | torch.distributed#torch.distributed.reduce_op
torch.distributed.reduce_scatter(output, input_list, op=<ReduceOp.SUM: 0>, group=None, async_op=False) [source]
Reduces, then scatters a list of tensors to all processes in a group. Parameters
output (Tensor) – Output tensor.
input_list (list[Tensor]) – List of tensors to reduce and scatter.
group (ProcessGroup... | torch.distributed#torch.distributed.reduce_scatter |
torch.distributed.reduce_scatter_multigpu(output_tensor_list, input_tensor_lists, op=<ReduceOp.SUM: 0>, group=None, async_op=False) [source]
Reduce and scatter a list of tensors to the whole group. Only nccl backend is currently supported. Each tensor in output_tensor_list should reside on a separate GPU, as should e... | torch.distributed#torch.distributed.reduce_scatter_multigpu |
class torch.distributed.rpc.BackendType
An enum class of available backends. PyTorch ships with two builtin backends: BackendType.TENSORPIPE and BackendType.PROCESS_GROUP. Additional ones can be registered using the register_backend() function. | torch.rpc#torch.distributed.rpc.BackendType |
torch.distributed.rpc.functions.async_execution(fn) [source]
A decorator for a function indicating that the return value of the function is guaranteed to be a Future object and this function can run asynchronously on the RPC callee. More specifically, the callee extracts the Future returned by the wrapped function an... | torch.rpc#torch.distributed.rpc.functions.async_execution |
torch.distributed.rpc.get_worker_info(worker_name=None) [source]
Get WorkerInfo of a given worker name. Use this WorkerInfo to avoid passing an expensive string on every invocation. Parameters
worker_name (str) – the string name of a worker. If None, return the id of the current worker. (default None) Returns ... | torch.rpc#torch.distributed.rpc.get_worker_info
torch.distributed.rpc.init_rpc(name, backend=None, rank=-1, world_size=None, rpc_backend_options=None) [source]
Initializes RPC primitives such as the local RPC agent and distributed autograd, which immediately makes the current process ready to send and receive RPCs. Parameters
name (str) – a globally unique nam... | torch.rpc#torch.distributed.rpc.init_rpc
class torch.distributed.rpc.ProcessGroupRpcBackendOptions
The backend options class for ProcessGroupAgent, which is derived from RpcBackendOptions. Parameters
num_send_recv_threads (int, optional) – The number of threads in the thread-pool used by ProcessGroupAgent (default: 4).
rpc_timeout (float, optional) – T... | torch.rpc#torch.distributed.rpc.ProcessGroupRpcBackendOptions
property init_method
URL specifying how to initialize the process group. Default is env:// | torch.rpc#torch.distributed.rpc.ProcessGroupRpcBackendOptions.init_method |
property num_send_recv_threads
The number of threads in the thread-pool used by ProcessGroupAgent. | torch.rpc#torch.distributed.rpc.ProcessGroupRpcBackendOptions.num_send_recv_threads |
property rpc_timeout
A float indicating the timeout to use for all RPCs. If an RPC does not complete in this timeframe, it will complete with an exception indicating that it has timed out. | torch.rpc#torch.distributed.rpc.ProcessGroupRpcBackendOptions.rpc_timeout |
torch.distributed.rpc.remote(to, func, args=None, kwargs=None, timeout=-1.0) [source]
Make a remote call to run func on worker to and return an RRef to the result value immediately. Worker to will be the owner of the returned RRef, and the worker calling remote is a user. The owner manages the global reference count ... | torch.rpc#torch.distributed.rpc.remote |
class torch.distributed.rpc.RpcBackendOptions
An abstract structure encapsulating the options passed into the RPC backend. An instance of this class can be passed in to init_rpc() in order to initialize RPC with specific configurations, such as the RPC timeout and init_method to be used.
property init_method
URL ... | torch.rpc#torch.distributed.rpc.RpcBackendOptions |
property init_method
URL specifying how to initialize the process group. Default is env:// | torch.rpc#torch.distributed.rpc.RpcBackendOptions.init_method |