torch.backends.openmp.is_available() [source] Returns whether PyTorch is built with OpenMP support.
torch.backends#torch.backends.openmp.is_available
torch.baddbmm(input, batch1, batch2, *, beta=1, alpha=1, out=None) → Tensor Performs a batch matrix-matrix product of matrices in batch1 and batch2. input is added to the final result. batch1 and batch2 must be 3-D tensors each containing the same number of matrices. If batch1 is a (b \times n \times m) tenso...
torch.generated.torch.baddbmm#torch.baddbmm
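A minimal usage sketch of baddbmm with illustrative shapes (out = beta * input + alpha * (batch1 @ batch2)):
>>> M = torch.randn(10, 3, 5)
>>> batch1 = torch.randn(10, 3, 4)
>>> batch2 = torch.randn(10, 4, 5)
>>> torch.baddbmm(M, batch1, batch2).size()  # beta=1, alpha=1 by default
torch.Size([10, 3, 5])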
torch.bartlett_window(window_length, periodic=True, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor Bartlett window function. w[n] = 1 - \left| \frac{2n}{N-1} - 1 \right| = \begin{cases} \frac{2n}{N - 1} & \text{if } 0 \leq n \leq \...
torch.generated.torch.bartlett_window#torch.bartlett_window
torch.bernoulli(input, *, generator=None, out=None) → Tensor Draws binary random numbers (0 or 1) from a Bernoulli distribution. The input tensor should be a tensor containing probabilities to be used for drawing the binary random number. Hence, all values in input have to be in the range: 0 \leq \text{inpu...
torch.generated.torch.bernoulli#torch.bernoulli
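A short bernoulli sketch; the drawn values are random, so the exact output below is only illustrative:
>>> p = torch.full((2, 2), 0.5)  # each entry is the probability of drawing a 1
>>> torch.bernoulli(p)
tensor([[1., 0.],
        [0., 1.]])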
torch.bincount(input, weights=None, minlength=0) → Tensor Count the frequency of each value in an array of non-negative ints. The number of bins (size 1) is one larger than the largest value in input unless input is empty, in which case the result is a tensor of size 0. If minlength is specified, the number of bins i...
torch.generated.torch.bincount#torch.bincount
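An illustrative bincount sketch (hypothetical values):
>>> x = torch.tensor([0, 1, 1, 3])
>>> torch.bincount(x)  # counts of the values 0, 1, 2, 3
tensor([1, 2, 0, 1])
>>> torch.bincount(x, minlength=6)  # pad the histogram to at least 6 bins
tensor([1, 2, 0, 1, 0, 0])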
torch.bitwise_and(input, other, *, out=None) → Tensor Computes the bitwise AND of input and other. The input tensor must be of integral or Boolean types. For bool tensors, it computes the logical AND. Parameters input – the first input tensor other – the second input tensor Keyword Arguments out (Tensor, opti...
torch.generated.torch.bitwise_and#torch.bitwise_and
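A small sketch showing bitwise_and on integer and bool tensors (illustrative values):
>>> torch.bitwise_and(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))
tensor([1, 0, 3], dtype=torch.int8)
>>> torch.bitwise_and(torch.tensor([True, True, False]), torch.tensor([True, False, False]))  # logical AND for bools
tensor([ True, False, False])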
torch.bitwise_not(input, *, out=None) → Tensor Computes the bitwise NOT of the given input tensor. The input tensor must be of integral or Boolean types. For bool tensors, it computes the logical NOT. Parameters input (Tensor) – the input tensor. Keyword Arguments out (Tensor, optional) – the output tensor. Exa...
torch.generated.torch.bitwise_not#torch.bitwise_not
torch.bitwise_or(input, other, *, out=None) → Tensor Computes the bitwise OR of input and other. The input tensor must be of integral or Boolean types. For bool tensors, it computes the logical OR. Parameters input – the first input tensor other – the second input tensor Keyword Arguments out (Tensor, optiona...
torch.generated.torch.bitwise_or#torch.bitwise_or
torch.bitwise_xor(input, other, *, out=None) → Tensor Computes the bitwise XOR of input and other. The input tensor must be of integral or Boolean types. For bool tensors, it computes the logical XOR. Parameters input – the first input tensor other – the second input tensor Keyword Arguments out (Tensor, opti...
torch.generated.torch.bitwise_xor#torch.bitwise_xor
torch.blackman_window(window_length, periodic=True, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor Blackman window function. w[n] = 0.42 - 0.5 \cos \left( \frac{2 \pi n}{N - 1} \right) + 0.08 \cos \left( \frac{4 \pi n}{N - 1} \right) where ...
torch.generated.torch.blackman_window#torch.blackman_window
torch.block_diag(*tensors) [source] Create a block diagonal matrix from provided tensors. Parameters *tensors – One or more tensors with 0, 1, or 2 dimensions. Returns A 2 dimensional tensor with all the input tensors arranged in order such that their upper left and lower right corners are diagonally adjacent. ...
torch.generated.torch.block_diag#torch.block_diag
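A minimal block_diag sketch with hypothetical inputs:
>>> A = torch.tensor([[1, 0], [0, 1]])
>>> B = torch.tensor([[2]])
>>> torch.block_diag(A, B)  # A and B placed corner-to-corner, zeros elsewhere
tensor([[1, 0, 0],
        [0, 1, 0],
        [0, 0, 2]])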
torch.bmm(input, mat2, *, deterministic=False, out=None) → Tensor Performs a batch matrix-matrix product of matrices stored in input and mat2. input and mat2 must be 3-D tensors each containing the same number of matrices. If input is a (b \times n \times m) tensor, mat2 is a (b \times m \times p) ten...
torch.generated.torch.bmm#torch.bmm
torch.broadcast_shapes(*shapes) → Size [source] Similar to broadcast_tensors() but for shapes. This is equivalent to torch.broadcast_tensors(*map(torch.empty, shapes))[0].shape but avoids the need to create intermediate tensors. This is useful for broadcasting tensors of common batch shape but different rightmost sha...
torch.generated.torch.broadcast_shapes#torch.broadcast_shapes
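A quick broadcast_shapes sketch (illustrative shapes):
>>> torch.broadcast_shapes((2,), (3, 1))  # (2,) aligns right and expands against (3, 1)
torch.Size([3, 2])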
torch.broadcast_tensors(*tensors) → List of Tensors [source] Broadcasts the given tensors according to Broadcasting semantics. Parameters *tensors – any number of tensors of the same type Warning More than one element of a broadcasted tensor may refer to a single memory location. As a result, in-place operations...
torch.generated.torch.broadcast_tensors#torch.broadcast_tensors
torch.broadcast_to(input, shape) → Tensor Broadcasts input to the shape shape. Equivalent to calling input.expand(shape). See expand() for details. Parameters input (Tensor) – the input tensor. shape (list, tuple, or torch.Size) – the new shape. Example: >>> x = torch.tensor([1, 2, 3]) >>> torch.broadcast_to(...
torch.generated.torch.broadcast_to#torch.broadcast_to
torch.bucketize(input, boundaries, *, out_int32=False, right=False, out=None) → Tensor Returns the indices of the buckets to which each value in the input belongs, where the boundaries of the buckets are set by boundaries. Return a new tensor with the same size as input. If right is False (default), then the left bou...
torch.generated.torch.bucketize#torch.bucketize
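An illustrative bucketize sketch (hypothetical boundaries and values):
>>> boundaries = torch.tensor([1, 3, 5, 7, 9])
>>> v = torch.tensor([3, 6, 9])
>>> torch.bucketize(v, boundaries)  # right=False: index of the first boundary >= value
tensor([1, 3, 4])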
torch.can_cast(from, to) → bool Determines if a type conversion is allowed under PyTorch casting rules described in the type promotion documentation. Parameters from (dtype) – The original torch.dtype. to (dtype) – The target torch.dtype. Example: >>> torch.can_cast(torch.double, torch.float) Tr...
torch.generated.torch.can_cast#torch.can_cast
torch.cartesian_prod(*tensors) [source] Performs a Cartesian product of the given sequence of tensors. The behavior is similar to Python's itertools.product. Parameters *tensors – any number of 1 dimensional tensors. Returns A tensor equivalent to converting all the input tensors into lists, doing itertools.product on the...
torch.generated.torch.cartesian_prod#torch.cartesian_prod
torch.cat(tensors, dim=0, *, out=None) → Tensor Concatenates the given sequence of tensors in the given dimension. All tensors must either have the same shape (except in the concatenating dimension) or be empty. torch.cat() can be seen as an inverse operation for torch.split() and torch.chunk(). torch.cat() can b...
torch.generated.torch.cat#torch.cat
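A short cat sketch (shapes illustrative):
>>> x = torch.randn(2, 3)
>>> torch.cat((x, x, x), dim=0).size()  # stack along rows
torch.Size([6, 3])
>>> torch.cat((x, x, x), dim=1).size()  # stack along columns
torch.Size([2, 9])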
torch.cdist(x1, x2, p=2.0, compute_mode='use_mm_for_euclid_dist_if_necessary') [source] Computes the batched p-norm distance between each pair of the two collections of row vectors. Parameters x1 (Tensor) – input tensor of shape B \times P \times M . x2 (Tensor) – input tensor of shape B \times R \time...
torch.generated.torch.cdist#torch.cdist
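A minimal cdist sketch, assuming illustrative batch shapes:
>>> x1 = torch.randn(2, 5, 3)  # B=2 batches of P=5 row vectors of size M=3
>>> x2 = torch.randn(2, 4, 3)  # B=2 batches of R=4 row vectors of size M=3
>>> torch.cdist(x1, x2).shape  # pairwise Euclidean (p=2) distances, shape B x P x R
torch.Size([2, 5, 4])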
torch.ceil(input, *, out=None) → Tensor Returns a new tensor with the ceil of the elements of input, the smallest integer greater than or equal to each element. \text{out}_{i} = \left\lceil \text{input}_{i} \right\rceil = \left\lfloor \text{input}_{i} \right\rfloor + 1 Parameters input (T...
torch.generated.torch.ceil#torch.ceil
torch.chain_matmul(*matrices) [source] Returns the matrix product of the N 2-D tensors. This product is efficiently computed using the matrix chain order algorithm, which selects the order that incurs the lowest cost in terms of arithmetic operations ([CLRS]). Note that since this is a function to compute the pr...
torch.generated.torch.chain_matmul#torch.chain_matmul
torch.cholesky(input, upper=False, *, out=None) → Tensor Computes the Cholesky decomposition of a symmetric positive-definite matrix A or for batches of symmetric positive-definite matrices. If upper is True, the returned matrix U is upper-triangular, and the decomposition has the form: A = U^T U If upper is ...
torch.generated.torch.cholesky#torch.cholesky
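A cholesky sketch on a matrix made symmetric positive-definite by construction (values illustrative):
>>> a = torch.randn(3, 3)
>>> a = a @ a.t() + 1e-3 * torch.eye(3)  # a @ a.t() + eps * I is symmetric positive-definite
>>> l = torch.cholesky(a)  # lower-triangular factor (upper=False is the default)
>>> torch.allclose(l @ l.t(), a)
True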
torch.cholesky_inverse(input, upper=False, *, out=None) → Tensor Computes the inverse of a symmetric positive-definite matrix A using its Cholesky factor u : returns matrix inv. The inverse is computed using LAPACK routines dpotri and spotri (and the corresponding MAGMA routines). If upper is False, u is lower t...
torch.generated.torch.cholesky_inverse#torch.cholesky_inverse
torch.cholesky_solve(input, input2, upper=False, *, out=None) → Tensor Solves a linear system of equations with a positive semidefinite matrix to be inverted given its Cholesky factor matrix u . If upper is False, u is lower triangular and c is returned such that: c = (u u^T)^{-1} b If upper is ...
torch.generated.torch.cholesky_solve#torch.cholesky_solve
torch.chunk(input, chunks, dim=0) → List of Tensors Splits a tensor into a specific number of chunks. Each chunk is a view of the input tensor. Last chunk will be smaller if the tensor size along the given dimension dim is not divisible by chunks. Parameters input (Tensor) – the tensor to split chunks (int) – nu...
torch.generated.torch.chunk#torch.chunk
torch.clamp(input, min, max, *, out=None) → Tensor Clamp all elements in input into the range [ min, max ]. Let min_value and max_value be min and max, respectively; this returns: y_i = \min(\max(x_i, \text{min\_value}), \text{max\_value}) Parameters input (Tensor) – the in...
torch.generated.torch.clamp#torch.clamp
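A minimal clamp sketch (illustrative values):
>>> a = torch.tensor([-1.5, 0.2, 2.7])
>>> torch.clamp(a, min=-1.0, max=1.0)  # values outside [-1, 1] are pinned to the bounds
tensor([-1.0000,  0.2000,  1.0000])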
torch.clip(input, min, max, *, out=None) → Tensor Alias for torch.clamp().
torch.generated.torch.clip#torch.clip
torch.clone(input, *, memory_format=torch.preserve_format) → Tensor Returns a copy of input. Note This function is differentiable, so gradients will flow back from the result of this operation to input. To create a tensor without an autograd relationship to input see detach(). Parameters input (Tensor) – the inpu...
torch.generated.torch.clone#torch.clone
torch.column_stack(tensors, *, out=None) → Tensor Creates a new tensor by horizontally stacking the tensors in tensors. Equivalent to torch.hstack(tensors), except each zero or one dimensional tensor t in tensors is first reshaped into a (t.numel(), 1) column before being stacked horizontally. Parameters tensors (s...
torch.generated.torch.column_stack#torch.column_stack
torch.combinations(input, r=2, with_replacement=False) → seq Compute combinations of length r of the given tensor. The behavior is similar to Python's itertools.combinations when with_replacement is set to False, and itertools.combinations_with_replacement when with_replacement is set to True. Parameters input ...
torch.generated.torch.combinations#torch.combinations
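An illustrative combinations sketch:
>>> torch.combinations(torch.tensor([1, 2, 3]))  # r=2, without replacement by default
tensor([[1, 2],
        [1, 3],
        [2, 3]])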
torch.compiled_with_cxx11_abi() [source] Returns whether PyTorch was built with _GLIBCXX_USE_CXX11_ABI=1.
torch.generated.torch.compiled_with_cxx11_abi#torch.compiled_with_cxx11_abi
torch.complex(real, imag, *, out=None) → Tensor Constructs a complex tensor with its real part equal to real and its imaginary part equal to imag. Parameters real (Tensor) – The real part of the complex tensor. Must be float or double. imag (Tensor) – The imaginary part of the complex tensor. Must be same dtype ...
torch.generated.torch.complex#torch.complex
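A minimal complex sketch (illustrative values):
>>> real = torch.tensor([1.0, 2.0])
>>> imag = torch.tensor([3.0, 4.0])
>>> z = torch.complex(real, imag)
>>> z
tensor([1.+3.j, 2.+4.j])
>>> z.dtype  # float inputs produce complex64; double inputs produce complex128
torch.complex64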
torch.conj(input, *, out=None) → Tensor Computes the element-wise conjugate of the given input tensor. If input has a non-complex dtype, this function just returns input. Warning In the future, torch.conj() may return a non-writeable view for an input of non-complex dtype. It’s recommended that programs not m...
torch.generated.torch.conj#torch.conj
torch.copysign(input, other, *, out=None) → Tensor Create a new floating-point tensor with the magnitude of input and the sign of other, elementwise. \text{out}_{i} = \begin{cases} -|\text{input}_{i}| & \text{if } \text{other}_{i} \leq -0.0 \\ |\text{input}_{i}| & \text...
torch.generated.torch.copysign#torch.copysign
torch.cos(input, *, out=None) → Tensor Returns a new tensor with the cosine of the elements of input. \text{out}_{i} = \cos(\text{input}_{i}) Parameters input (Tensor) – the input tensor. Keyword Arguments out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4) >>> a tens...
torch.generated.torch.cos#torch.cos
torch.cosh(input, *, out=None) → Tensor Returns a new tensor with the hyperbolic cosine of the elements of input. \text{out}_{i} = \cosh(\text{input}_{i}) Parameters input (Tensor) – the input tensor. Keyword Arguments out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn...
torch.generated.torch.cosh#torch.cosh
torch.count_nonzero(input, dim=None) → Tensor Counts the number of non-zero values in the tensor input along the given dim. If no dim is specified then all non-zeros in the tensor are counted. Parameters input (Tensor) – the input tensor. dim (int or tuple of python:ints, optional) – Dim or tuple of dims along w...
torch.generated.torch.count_nonzero#torch.count_nonzero
torch.cross(input, other, dim=None, *, out=None) → Tensor Returns the cross product of vectors in dimension dim of input and other. input and other must have the same size, and the size of their dim dimension should be 3. If dim is not given, it defaults to the first dimension found with the size 3. Note that this mi...
torch.generated.torch.cross#torch.cross
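A short cross sketch using the standard basis vectors (illustrative):
>>> a = torch.tensor([[1.0, 0.0, 0.0]])
>>> b = torch.tensor([[0.0, 1.0, 0.0]])
>>> torch.cross(a, b, dim=1)  # e1 x e2 = e3
tensor([[0., 0., 1.]])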
torch.cuda This package adds support for CUDA tensor types, which implement the same functions as CPU tensors but utilize GPUs for computation. It is lazily initialized, so you can always import it, and use is_available() to determine if your system supports CUDA. CUDA semantics has more details about working with ...
torch.cuda
Automatic Mixed Precision package - torch.cuda.amp torch.cuda.amp provides convenience methods for mixed precision, where some operations use the torch.float32 (float) datatype and other operations use torch.float16 (half). Some ops, like linear layers and convolutions, are much faster in float16. Other ops, like reduc...
torch.amp
class torch.cuda.amp.autocast(enabled=True) [source] Instances of autocast serve as context managers or decorators that allow regions of your script to run in mixed precision. In these regions, CUDA ops run in an op-specific dtype chosen by autocast to improve performance while maintaining accuracy. See the Autocast ...
torch.amp#torch.cuda.amp.autocast
torch.cuda.amp.custom_bwd(bwd) [source] Helper decorator for backward methods of custom autograd functions (subclasses of torch.autograd.Function). Ensures that backward executes with the same autocast state as forward. See the example page for more detail.
torch.amp#torch.cuda.amp.custom_bwd
torch.cuda.amp.custom_fwd(fwd=None, **kwargs) [source] Helper decorator for forward methods of custom autograd functions (subclasses of torch.autograd.Function). See the example page for more detail. Parameters cast_inputs (torch.dtype or None, optional, default=None) – If not None, when forward runs in an autocast...
torch.amp#torch.cuda.amp.custom_fwd
class torch.cuda.amp.GradScaler(init_scale=65536.0, growth_factor=2.0, backoff_factor=0.5, growth_interval=2000, enabled=True) [source] get_backoff_factor() [source] Returns a Python float containing the scale backoff factor. get_growth_factor() [source] Returns a Python float containing the scale growth fa...
torch.amp#torch.cuda.amp.GradScaler
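A typical mixed-precision training step combining autocast and GradScaler, sketched under the assumption that model, optimizer, loader, and loss_fn already exist (all four names are hypothetical):
scaler = torch.cuda.amp.GradScaler()
for data, targets in loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        output = model(data)           # forward pass runs in mixed precision
        loss = loss_fn(output, targets)
    scaler.scale(loss).backward()      # scale the loss to avoid float16 gradient underflow
    scaler.step(optimizer)             # unscales gradients; skips the step if they contain inf/NaN
    scaler.update()                    # grows or backs off the scale for the next iteration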
get_backoff_factor() [source] Returns a Python float containing the scale backoff factor.
torch.amp#torch.cuda.amp.GradScaler.get_backoff_factor
get_growth_factor() [source] Returns a Python float containing the scale growth factor.
torch.amp#torch.cuda.amp.GradScaler.get_growth_factor
get_growth_interval() [source] Returns a Python int containing the growth interval.
torch.amp#torch.cuda.amp.GradScaler.get_growth_interval
get_scale() [source] Returns a Python float containing the current scale, or 1.0 if scaling is disabled. Warning get_scale() incurs a CPU-GPU sync.
torch.amp#torch.cuda.amp.GradScaler.get_scale
is_enabled() [source] Returns a bool indicating whether this instance is enabled.
torch.amp#torch.cuda.amp.GradScaler.is_enabled
load_state_dict(state_dict) [source] Loads the scaler state. If this instance is disabled, load_state_dict() is a no-op. Parameters state_dict (dict) – scaler state. Should be an object returned from a call to state_dict().
torch.amp#torch.cuda.amp.GradScaler.load_state_dict
scale(outputs) [source] Multiplies (‘scales’) a tensor or list of tensors by the scale factor. Returns scaled outputs. If this instance of GradScaler is not enabled, outputs are returned unmodified. Parameters outputs (Tensor or iterable of Tensors) – Outputs to scale.
torch.amp#torch.cuda.amp.GradScaler.scale
set_backoff_factor(new_factor) [source] Parameters new_factor (float) – Value to use as the new scale backoff factor.
torch.amp#torch.cuda.amp.GradScaler.set_backoff_factor
set_growth_factor(new_factor) [source] Parameters new_factor (float) – Value to use as the new scale growth factor.
torch.amp#torch.cuda.amp.GradScaler.set_growth_factor
set_growth_interval(new_interval) [source] Parameters new_interval (int) – Value to use as the new growth interval.
torch.amp#torch.cuda.amp.GradScaler.set_growth_interval
state_dict() [source] Returns the state of the scaler as a dict. It contains five entries: "scale" - a Python float containing the current scale "growth_factor" - a Python float containing the current growth factor "backoff_factor" - a Python float containing the current backoff factor "growth_interval" - a Pyth...
torch.amp#torch.cuda.amp.GradScaler.state_dict
step(optimizer, *args, **kwargs) [source] step() carries out the following two operations: Internally invokes unscale_(optimizer) (unless unscale_() was explicitly called for optimizer earlier in the iteration). As part of the unscale_(), gradients are checked for infs/NaNs. If no inf/NaN gradients are found, invoke...
torch.amp#torch.cuda.amp.GradScaler.step
unscale_(optimizer) [source] Divides (“unscales”) the optimizer’s gradient tensors by the scale factor. unscale_() is optional, serving cases where you need to modify or inspect gradients between the backward pass(es) and step(). If unscale_() is not called explicitly, gradients will be unscaled automatically during ...
torch.amp#torch.cuda.amp.GradScaler.unscale_
update(new_scale=None) [source] Updates the scale factor. If any optimizer steps were skipped the scale is multiplied by backoff_factor to reduce it. If growth_interval unskipped iterations occurred consecutively, the scale is multiplied by growth_factor to increase it. Passing new_scale sets the scale directly. Par...
torch.amp#torch.cuda.amp.GradScaler.update
torch.cuda.can_device_access_peer(device, peer_device) [source] Checks if peer access between two devices is possible.
torch.cuda#torch.cuda.can_device_access_peer
torch.cuda.comm.broadcast(tensor, devices=None, *, out=None) [source] Broadcasts a tensor to specified GPU devices. Parameters tensor (Tensor) – tensor to broadcast. Can be on CPU or GPU. devices (Iterable[torch.device, str or int], optional) – an iterable of GPU devices, among which to broadcast. out (Sequence...
torch.cuda#torch.cuda.comm.broadcast
torch.cuda.comm.broadcast_coalesced(tensors, devices, buffer_size=10485760) [source] Broadcasts a sequence of tensors to the specified GPUs. Small tensors are first coalesced into a buffer to reduce the number of synchronizations. Parameters tensors (sequence) – tensors to broadcast. Must be on the same device, eith...
torch.cuda#torch.cuda.comm.broadcast_coalesced
torch.cuda.comm.gather(tensors, dim=0, destination=None, *, out=None) [source] Gathers tensors from multiple GPU devices. Parameters tensors (Iterable[Tensor]) – an iterable of tensors to gather. Tensor sizes in all dimensions other than dim have to match. dim (int, optional) – a dimension along which the tensor...
torch.cuda#torch.cuda.comm.gather
torch.cuda.comm.reduce_add(inputs, destination=None) [source] Sums tensors from multiple GPUs. All inputs should have matching shapes, dtype, and layout. The output tensor will be of the same shape, dtype, and layout. Parameters inputs (Iterable[Tensor]) – an iterable of tensors to add. destination (int, optiona...
torch.cuda#torch.cuda.comm.reduce_add
torch.cuda.comm.scatter(tensor, devices=None, chunk_sizes=None, dim=0, streams=None, *, out=None) [source] Scatters tensor across multiple GPUs. Parameters tensor (Tensor) – tensor to scatter. Can be on CPU or GPU. devices (Iterable[torch.device, str or int], optional) – an iterable of GPU devices, among which t...
torch.cuda#torch.cuda.comm.scatter
torch.cuda.current_blas_handle() [source] Returns a cublasHandle_t pointer to the current cuBLAS handle.
torch.cuda#torch.cuda.current_blas_handle
torch.cuda.current_device() [source] Returns the index of the currently selected device.
torch.cuda#torch.cuda.current_device
torch.cuda.current_stream(device=None) [source] Returns the currently selected Stream for a given device. Parameters device (torch.device or int, optional) – selected device. Returns the currently selected Stream for the current device, given by current_device(), if device is None (default).
torch.cuda#torch.cuda.current_stream
torch.cuda.default_stream(device=None) [source] Returns the default Stream for a given device. Parameters device (torch.device or int, optional) – selected device. Returns the default Stream for the current device, given by current_device(), if device is None (default).
torch.cuda#torch.cuda.default_stream
class torch.cuda.device(device) [source] Context-manager that changes the selected device. Parameters device (torch.device or int) – device index to select. It’s a no-op if this argument is a negative integer or None.
torch.cuda#torch.cuda.device
torch.cuda.device_count() [source] Returns the number of GPUs available.
torch.cuda#torch.cuda.device_count
class torch.cuda.device_of(obj) [source] Context-manager that changes the current device to that of given object. You can use both tensors and storages as arguments. If a given object is not allocated on a GPU, this is a no-op. Parameters obj (Tensor or Storage) – object allocated on the selected device.
torch.cuda#torch.cuda.device_of
torch.cuda.empty_cache() [source] Releases all unoccupied cached memory currently held by the caching allocator so that it can be used by other GPU applications and is visible in nvidia-smi. Note empty_cache() doesn’t increase the amount of GPU memory available for PyTorch. However, it may help reduce fragmentation o...
torch.cuda#torch.cuda.empty_cache
class torch.cuda.Event [source] Wrapper around a CUDA event. CUDA events are synchronization markers that can be used to monitor the device’s progress, to accurately measure timing, and to synchronize CUDA streams. The underlying CUDA events are lazily initialized when the event is first recorded or exported to anoth...
torch.cuda#torch.cuda.Event
elapsed_time(end_event) [source] Returns the time elapsed in milliseconds after the event was recorded and before the end_event was recorded.
torch.cuda#torch.cuda.Event.elapsed_time
classmethod from_ipc_handle(device, handle) [source] Reconstruct an event from an IPC handle on the given device.
torch.cuda#torch.cuda.Event.from_ipc_handle
ipc_handle() [source] Returns an IPC handle of this event. If not recorded yet, the event will use the current device.
torch.cuda#torch.cuda.Event.ipc_handle
query() [source] Checks if all work currently captured by event has completed. Returns A boolean indicating if all work currently captured by event has completed.
torch.cuda#torch.cuda.Event.query
record(stream=None) [source] Records the event in a given stream. Uses torch.cuda.current_stream() if no stream is specified. The stream’s device must match the event’s device.
torch.cuda#torch.cuda.Event.record
synchronize() [source] Waits for the event to complete. Waits until the completion of all work currently captured in this event. This prevents the CPU thread from proceeding until the event completes. Note This is a wrapper around cudaEventSynchronize(): see CUDA Event documentation for more info.
torch.cuda#torch.cuda.Event.synchronize
wait(stream=None) [source] Makes all future work submitted to the given stream wait for this event. Uses torch.cuda.current_stream() if no stream is specified.
torch.cuda#torch.cuda.Event.wait
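A sketch of timing GPU work with events, assuming x is an existing CUDA tensor (hypothetical):
>>> start = torch.cuda.Event(enable_timing=True)
>>> end = torch.cuda.Event(enable_timing=True)
>>> start.record()
>>> y = x.mm(x)  # some CUDA work to be timed
>>> end.record()
>>> torch.cuda.synchronize()  # wait until both events have completed
>>> start.elapsed_time(end)  # milliseconds; the value depends on hardware and workload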
torch.cuda.get_arch_list() [source] Returns the list of CUDA architectures this library was compiled for.
torch.cuda#torch.cuda.get_arch_list
torch.cuda.get_device_capability(device=None) [source] Gets the CUDA capability of a device. Parameters device (torch.device or int, optional) – device for which to return the device capability. This function is a no-op if this argument is a negative integer. It uses the current device, given by current_device(), i...
torch.cuda#torch.cuda.get_device_capability
torch.cuda.get_device_name(device=None) [source] Gets the name of a device. Parameters device (torch.device or int, optional) – device for which to return the name. This function is a no-op if this argument is a negative integer. It uses the current device, given by current_device(), if device is None (default). R...
torch.cuda#torch.cuda.get_device_name
torch.cuda.get_device_properties(device) [source] Gets the properties of a device. Parameters device (torch.device or int or str) – device for which to return the properties of the device. Returns the properties of the device Return type _CudaDeviceProperties
torch.cuda#torch.cuda.get_device_properties
torch.cuda.get_gencode_flags() [source] Returns the NVCC gencode flags this library was compiled with.
torch.cuda#torch.cuda.get_gencode_flags
torch.cuda.get_rng_state(device='cuda') [source] Returns the random number generator state of the specified GPU as a ByteTensor. Parameters device (torch.device or int, optional) – The device to return the RNG state of. Default: 'cuda' (i.e., torch.device('cuda'), the current CUDA device). Warning This function ...
torch.cuda#torch.cuda.get_rng_state
torch.cuda.get_rng_state_all() [source] Returns a list of ByteTensor representing the random number states of all devices.
torch.cuda#torch.cuda.get_rng_state_all
torch.cuda.init() [source] Initialize PyTorch’s CUDA state. You may need to call this explicitly if you are interacting with PyTorch via its C API, as Python bindings for CUDA functionality will not be available until this initialization takes place. Ordinary users should not need this, as all of PyTorch’s CUDA metho...
torch.cuda#torch.cuda.init
torch.cuda.initial_seed() [source] Returns the current random seed of the current GPU. Warning This function eagerly initializes CUDA.
torch.cuda#torch.cuda.initial_seed
torch.cuda.ipc_collect() [source] Force collects GPU memory after it has been released by CUDA IPC. Note Checks if any sent CUDA tensors could be cleaned from the memory. Force closes the shared memory file used for reference counting if there are no active counters. Useful when the producer process stopped actively send...
torch.cuda#torch.cuda.ipc_collect
torch.cuda.is_available() [source] Returns a bool indicating if CUDA is currently available.
torch.cuda#torch.cuda.is_available
torch.cuda.is_initialized() [source] Returns whether PyTorch’s CUDA state has been initialized.
torch.cuda#torch.cuda.is_initialized
torch.cuda.list_gpu_processes(device=None) [source] Returns a human-readable printout of the running processes and their GPU memory use for a given device. This can be useful to display periodically during training, or when handling out-of-memory exceptions. Parameters device (torch.device or int, optional) – selec...
torch.cuda#torch.cuda.list_gpu_processes
torch.cuda.manual_seed(seed) [source] Sets the seed for generating random numbers for the current GPU. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored. Parameters seed (int) – The desired seed. Warning If you are working with a multi-GPU model, this function is insu...
torch.cuda#torch.cuda.manual_seed
torch.cuda.manual_seed_all(seed) [source] Sets the seed for generating random numbers on all GPUs. It’s safe to call this function if CUDA is not available; in that case, it is silently ignored. Parameters seed (int) – The desired seed.
torch.cuda#torch.cuda.manual_seed_all
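A short reproducibility sketch; the seed value 42 is arbitrary:
>>> _ = torch.manual_seed(42)       # seeds the CPU generator (returns the Generator)
>>> torch.cuda.manual_seed_all(42)  # seeds every visible GPU; silently ignored without CUDA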
torch.cuda.max_memory_allocated(device=None) [source] Returns the maximum GPU memory occupied by tensors in bytes for a given device. By default, this returns the peak allocated memory since the beginning of this program. reset_peak_memory_stats() can be used to reset the starting point in tracking this metric. For example,...
torch.cuda#torch.cuda.max_memory_allocated
torch.cuda.max_memory_cached(device=None) [source] Deprecated; see max_memory_reserved().
torch.cuda#torch.cuda.max_memory_cached
torch.cuda.max_memory_reserved(device=None) [source] Returns the maximum GPU memory managed by the caching allocator in bytes for a given device. By default, this returns the peak cached memory since the beginning of this program. reset_peak_memory_stats() can be used to reset the starting point in tracking this metric. For...
torch.cuda#torch.cuda.max_memory_reserved
torch.cuda.memory_allocated(device=None) [source] Returns the current GPU memory occupied by tensors in bytes for a given device. Parameters device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default). Note This is lik...
torch.cuda#torch.cuda.memory_allocated
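A sketch of inspecting allocator statistics on the current device, assuming CUDA is available (exact byte counts vary with allocator behavior):
>>> x = torch.zeros(1024, 1024, device='cuda')  # 4 MiB of float32 data
>>> torch.cuda.memory_allocated() >= 4 * 1024 * 1024
True
>>> torch.cuda.max_memory_allocated() >= torch.cuda.memory_allocated()  # peak is at least the current usage
True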