| doc_content | doc_id |
|---|---|
tf.Variable View source on GitHub See the variable guide.
tf.Variable(
initial_value=None, trainable=None, validate_shape=True, caching_device=None,
name=None, variable_def=None, dtype=None, import_scope=None, constraint=None,
synchronization=tf.VariableSynchronization.AUTO,
aggregation=tf.com... | tensorflow.variable |
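A minimal usage sketch (not part of the truncated entry above): create a variable, then update it in place with the standard assign methods.
import tensorflow as tf

v = tf.Variable(1.0, name="my_var")  # initial_value fixes the shape and dtype
v.assign(2.0)        # overwrite in place
v.assign_add(0.5)    # increment in place; v is now 2.5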
tf.Variable.SaveSliceInfo View source on GitHub Information on how to save this Variable as a slice. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.Variable.SaveSliceInfo
tf.Variable.SaveSliceInfo(
full_name=None, full_shape=None, var_offset=None, var_shape=... | tensorflow.variable.savesliceinfo |
tf.VariableAggregation View source on GitHub Indicates how a distributed variable will be aggregated. tf.distribute.Strategy distributes a model by making multiple copies (called "replicas") acting data-parallel on different elements of the input batch. When performing some variable-update operation, say var.ass... | tensorflow.variableaggregation |
tf.VariableSynchronization View source on GitHub Indicates when a distributed variable will be synced. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.VariableSynchronization
AUTO: Indicates that the synchronization will be determined by the current Distribution... | tensorflow.variablesynchronization |
tf.variable_creator_scope View source on GitHub Scope which defines a variable creation function to be used by variable().
@tf_contextlib.contextmanager
tf.variable_creator_scope(
variable_creator
)
variable_creator is expected to be a function with the following signature: def variable_creator(next_creato... | tensorflow.variable_creator_scope |
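A hedged sketch of the creator protocol described above (logging_creator is an illustrative name, not from the docs): a creator receives next_creator plus the keyword arguments destined for tf.Variable, and usually delegates after inspecting or modifying them.
import tensorflow as tf

def logging_creator(next_creator, **kwargs):
    print("creating variable:", kwargs.get("name"))  # inspect the pending kwargs
    return next_creator(**kwargs)                    # delegate to the default creator

with tf.variable_creator_scope(logging_creator):
    v = tf.Variable(1.0, name="v")  # routed through logging_creator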
tf.vectorized_map View source on GitHub Parallel map on the list of tensors unpacked from elems on dimension 0. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.vectorized_map
tf.vectorized_map(
fn, elems, fallback_to_while_loop=True
)
This method works simil... | tensorflow.vectorized_map |
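A small example of the semantics, assuming the documented signature: fn is applied to each slice of elems taken along dimension 0.
import tensorflow as tf

elems = tf.constant([[1., 2.], [3., 4.]])
out = tf.vectorized_map(lambda row: tf.reduce_sum(row * row), elems)
# out == [5., 25.], the sum of squares of each row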
Module: tf.version Public API for tf.version namespace.
Other Members
COMPILER_VERSION '7.3.1 20180303'
GIT_VERSION 'v2.4.0-rc4-71-g582c8d236cb'
GRAPH_DEF_VERSION 561
GRAPH_DEF_VERSION_MIN_CONSUMER 0
GRAPH_DEF_VERSION_MIN_PRODUCER 0
VERSION '2.4.0' | tensorflow.version |
tf.where View source on GitHub Return the elements where condition is True (multiplexing x and y). View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.where_v2
tf.where(
condition, x=None, y=None, name=None
)
This operator has two modes: in one mode both x and y... | tensorflow.where |
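The two modes in brief: with x and y supplied, tf.where multiplexes element-wise; with only condition, it returns the coordinates of the True entries.
import tensorflow as tf

cond = tf.constant([True, False, True])
tf.where(cond, tf.constant([1, 2, 3]), tf.constant([-1, -2, -3]))  # [1, -2, 3]
tf.where(cond)  # coordinates of True values: [[0], [2]]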
tf.while_loop View source on GitHub Repeat body while the condition cond is true. (deprecated argument values)
tf.while_loop(
cond, body, loop_vars, shape_invariants=None, parallel_iterations=10,
back_prop=True, swap_memory=False, maximum_iterations=None, name=None
)
Warning: SOME ARGUMENT VALUES ARE D... | tensorflow.while_loop |
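A minimal counter loop in the documented style; cond and body each receive the current loop variables, and body returns their updated values.
import tensorflow as tf

i = tf.constant(0)
c = lambda i: tf.less(i, 10)
b = lambda i: (tf.add(i, 1),)
r = tf.while_loop(c, b, [i])  # r holds the final loop variable, 10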
Module: tf.xla Public API for tf.xla namespace. Modules experimental module: Public API for tf.xla.experimental namespace. | tensorflow.xla |
Module: tf.xla.experimental Public API for tf.xla.experimental namespace. Functions compile(...): Builds an operator that compiles and runs computation with XLA. (deprecated) jit_scope(...): Enable or disable JIT compilation of operators within the scope. | tensorflow.xla.experimental |
tf.xla.experimental.compile View source on GitHub Builds an operator that compiles and runs computation with XLA. (deprecated) View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.xla.experimental.compile
tf.xla.experimental.compile(
computation, inputs=None
)
Wa... | tensorflow.xla.experimental.compile |
tf.xla.experimental.jit_scope View source on GitHub Enable or disable JIT compilation of operators within the scope. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.xla.experimental.jit_scope
@contextlib.contextmanager
tf.xla.experimental.jit_scope(
compile_o... | tensorflow.xla.experimental.jit_scope |
tf.zeros View source on GitHub Creates a tensor with all elements set to zero. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.zeros
tf.zeros(
shape, dtype=tf.dtypes.float32, name=None
)
See also tf.zeros_like, tf.ones, tf.fill, tf.eye. This operation return... | tensorflow.zeros |
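For instance:
import tensorflow as tf

tf.zeros([2, 3])               # 2x3 float32 tensor of zeros
tf.zeros([2], dtype=tf.int32)  # [0, 0]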
tf.zeros_initializer View source on GitHub Initializer that generates tensors initialized to 0. Initializers allow you to pre-specify an initialization strategy, encoded in the Initializer object, without knowing the shape and dtype of the variable being initialized. Examples:
def make_variables(k, initializer)... | tensorflow.zeros_initializer |
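The entry's example is cut off above; a hedged reconstruction of the typical pattern (the helper make_variables follows the truncated snippet's lead):
import tensorflow as tf

def make_variables(k, initializer):
    return (tf.Variable(initializer(shape=[k], dtype=tf.float32)),
            tf.Variable(initializer(shape=[k, k], dtype=tf.float32)))

v1, v2 = make_variables(3, tf.zeros_initializer())  # every element starts at 0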
tf.zeros_like View source on GitHub Creates a tensor with all elements set to zero.
tf.zeros_like(
input, dtype=None, name=None
)
See also tf.zeros. Given a single tensor or array-like object (input), this operation returns a tensor of the same type and shape as input with all elements set to zero. Optiona... | tensorflow.zeros_like |
torch The torch package contains data structures for multi-dimensional tensors and defines mathematical operations over these tensors. Additionally, it provides many utilities for efficient serializing of Tensors and arbitrary types, and other useful utilities. It has a CUDA counterpart, that enables you to run your te... | torch |
torch.abs(input, *, out=None) → Tensor
Computes the absolute value of each element in input. $\text{out}_{i} = |\text{input}_{i}|$
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> torch.abs(torch.tensor([-1, -2, 3]))
tensor([ ... | torch.generated.torch.abs#torch.abs |
torch.absolute(input, *, out=None) → Tensor
Alias for torch.abs() | torch.generated.torch.absolute#torch.absolute |
torch.acos(input, *, out=None) → Tensor
Computes the inverse cosine of each element in input. $\text{out}_{i} = \cos^{-1}(\text{input}_{i})$
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4)
>>> a
tenso... | torch.generated.torch.acos#torch.acos |
torch.acosh(input, *, out=None) → Tensor
Returns a new tensor with the inverse hyperbolic cosine of the elements of input. Note The domain of the inverse hyperbolic cosine is [1, inf) and values outside this range will be mapped to NaN, except for + INF for which the output is mapped to + INF. $\text{out}_{i} = \cosh^{-1}(\text{input}_{i})$ ... | torch.generated.torch.acosh#torch.acosh |
torch.add(input, other, *, out=None)
Adds the scalar other to each element of the input input and returns a new resulting tensor. $\text{out} = \text{input} + \text{other}$
If input is of type FloatTensor or DoubleTensor, other must be a real number, otherwise it should be an integer. Parameters
i... | torch.generated.torch.add#torch.add |
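For the scalar form described above:
import torch

torch.add(torch.tensor([1., 2., 3.]), 20)  # tensor([21., 22., 23.])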
torch.addbmm(input, batch1, batch2, *, beta=1, alpha=1, out=None) → Tensor
Performs a batch matrix-matrix product of matrices stored in batch1 and batch2, with a reduced add step (all matrix multiplications get accumulated along the first dimension). input is added to the final result. batch1 and batch2 must be 3-D t... | torch.generated.torch.addbmm#torch.addbmm |
torch.addcdiv(input, tensor1, tensor2, *, value=1, out=None) → Tensor
Performs the element-wise division of tensor1 by tensor2, multiplies the result by the scalar value, and adds it to input. Warning Integer division with addcdiv is no longer supported, and in a future release addcdiv will perform a true division of te... | torch.generated.torch.addcdiv#torch.addcdiv |
torch.addcmul(input, tensor1, tensor2, *, value=1, out=None) → Tensor
Performs the element-wise multiplication of tensor1 and tensor2, multiplies the result by the scalar value, and adds it to input. $\text{out}_i = \text{input}_i + \text{value} \times \text{tensor1}_i \times \text{tensor2}_i$ ... | torch.generated.torch.addcmul#torch.addcmul |
torch.addmm(input, mat1, mat2, *, beta=1, alpha=1, out=None) → Tensor
Performs a matrix multiplication of the matrices mat1 and mat2. The matrix input is added to the final result. If mat1 is a $(n \times m)$ tensor, mat2 is a $(m \times p)$ tensor, then input must be broadcastable with a $(n \times p)$ t... | torch.generated.torch.addmm#torch.addmm |
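A shape-checked sketch of the documented contract:
import torch

inp = torch.zeros(2, 3)             # broadcastable with (n, p) = (2, 3)
mat1 = torch.randn(2, 4)            # (n, m)
mat2 = torch.randn(4, 3)            # (m, p)
out = torch.addmm(inp, mat1, mat2)  # beta*inp + alpha*(mat1 @ mat2), shape (2, 3)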
torch.addmv(input, mat, vec, *, beta=1, alpha=1, out=None) → Tensor
Performs a matrix-vector product of the matrix mat and the vector vec. The vector input is added to the final result. If mat is a $(n \times m)$ tensor, vec is a 1-D tensor of size m, then input must be broadcastable with a 1-D tensor of size n a... | torch.generated.torch.addmv#torch.addmv |
torch.addr(input, vec1, vec2, *, beta=1, alpha=1, out=None) → Tensor
Performs the outer-product of vectors vec1 and vec2 and adds it to the matrix input. Optional values beta and alpha are scaling factors on the outer product between vec1 and vec2 and the added matrix input respectively. $\text{out} = \beta\,\text{input} + \alpha\,(\text{vec1} \otimes \text{vec2})$ ... | torch.generated.torch.addr#torch.addr |
torch.all(input) → Tensor
Tests if all elements in input evaluate to True. Note This function matches the behaviour of NumPy in returning output of dtype bool for all supported dtypes except uint8. For uint8 the dtype of output is uint8 itself. Example: >>> a = torch.rand(1, 2).bool()
>>> a
tensor([[False, True]], ... | torch.generated.torch.all#torch.all |
torch.allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) → bool
This function checks if all input and other satisfy the condition: $\lvert \text{input} - \text{other} \rvert \leq \texttt{atol} + \texttt{rtol} \times \lvert \text{other} \rvert$
elementwise, for all elements... | torch.generated.torch.allclose#torch.allclose |
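Plugging the default tolerances into the condition above:
import torch

torch.allclose(torch.tensor([10000.0]), torch.tensor([10000.1]))
# True: |10000.0 - 10000.1| <= 1e-08 + 1e-05 * 10000.1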
torch.amax(input, dim, keepdim=False, *, out=None) → Tensor
Returns the maximum value of each slice of the input tensor in the given dimension(s) dim. Note
The difference between max/min and amax/amin is:
amax/amin supports reducing on multiple dimensions,
amax/amin does not return indices,
amax/amin evenly ... | torch.generated.torch.amax#torch.amax |
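Illustrating the multi-dimension reduction called out in the note:
import torch

a = torch.tensor([[1., 2.], [3., 4.]])
torch.amax(a, dim=1)       # per-row maximum: tensor([2., 4.])
torch.amax(a, dim=(0, 1))  # reduce over both dims at once: tensor(4.)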
torch.amin(input, dim, keepdim=False, *, out=None) → Tensor
Returns the minimum value of each slice of the input tensor in the given dimension(s) dim. Note
The difference between max/min and amax/amin is:
amax/amin supports reducing on multiple dimensions,
amax/amin does not return indices,
amax/amin evenly ... | torch.generated.torch.amin#torch.amin |
torch.angle(input, *, out=None) → Tensor
Computes the element-wise angle (in radians) of the given input tensor. $\text{out}_{i} = \operatorname{angle}(\text{input}_{i})$
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Note Starting in PyTorch 1.8... | torch.generated.torch.angle#torch.angle |
torch.any(input) → Tensor
Parameters
input (Tensor) – the input tensor. Tests if any element in input evaluates to True. Note This function matches the behaviour of NumPy in returning output of dtype bool for all supported dtypes except uint8. For uint8 the dtype of output is uint8 itself. Example: >>> a = torc... | torch.generated.torch.any#torch.any |
torch.arange(start=0, end, step=1, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
Returns a 1-D tensor of size $\left\lceil \frac{\text{end} - \text{start}}{\text{step}} \right\rceil$ with values from the interval [start, end) taken with common difference step ... | torch.generated.torch.arange#torch.arange |
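The size formula in action:
import torch

torch.arange(5)            # tensor([0, 1, 2, 3, 4])
torch.arange(1, 2.5, 0.5)  # ceil((2.5 - 1) / 0.5) = 3 values: tensor([1.0, 1.5, 2.0])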
torch.arccos(input, *, out=None) → Tensor
Alias for torch.acos(). | torch.generated.torch.arccos#torch.arccos |
torch.arccosh(input, *, out=None) → Tensor
Alias for torch.acosh(). | torch.generated.torch.arccosh#torch.arccosh |
torch.arcsin(input, *, out=None) → Tensor
Alias for torch.asin(). | torch.generated.torch.arcsin#torch.arcsin |
torch.arcsinh(input, *, out=None) → Tensor
Alias for torch.asinh(). | torch.generated.torch.arcsinh#torch.arcsinh |
torch.arctan(input, *, out=None) → Tensor
Alias for torch.atan(). | torch.generated.torch.arctan#torch.arctan |
torch.arctanh(input, *, out=None) → Tensor
Alias for torch.atanh(). | torch.generated.torch.arctanh#torch.arctanh |
torch.are_deterministic_algorithms_enabled() [source]
Returns True if the global deterministic flag is turned on. Refer to torch.use_deterministic_algorithms() documentation for more details. | torch.generated.torch.are_deterministic_algorithms_enabled#torch.are_deterministic_algorithms_enabled |
torch.argmax(input) → LongTensor
Returns the indices of the maximum value of all elements in the input tensor. This is the second value returned by torch.max(). See its documentation for the exact semantics of this method. Note If there are multiple maximal values then the indices of the first maximal value are retu... | torch.generated.torch.argmax#torch.argmax |
torch.argmin(input, dim=None, keepdim=False) → LongTensor
Returns the indices of the minimum value(s) of the flattened tensor or along a dimension. This is the second value returned by torch.min(). See its documentation for the exact semantics of this method. Note If there are multiple minimal values then the indices... | torch.generated.torch.argmin#torch.argmin |
torch.argsort(input, dim=-1, descending=False) → LongTensor
Returns the indices that sort a tensor along a given dimension in ascending order by value. This is the second value returned by torch.sort(). See its documentation for the exact semantics of this method. Parameters
input (Tensor) – the input tensor.
di... | torch.generated.torch.argsort#torch.argsort |
torch.asin(input, *, out=None) → Tensor
Returns a new tensor with the arcsine of the elements of input. $\text{out}_{i} = \sin^{-1}(\text{input}_{i})$
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4)
>... | torch.generated.torch.asin#torch.asin |
torch.asinh(input, *, out=None) → Tensor
Returns a new tensor with the inverse hyperbolic sine of the elements of input. $\text{out}_{i} = \sinh^{-1}(\text{input}_{i})$
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a... | torch.generated.torch.asinh#torch.asinh |
torch.as_strided(input, size, stride, storage_offset=0) → Tensor
Create a view of an existing torch.Tensor input with specified size, stride and storage_offset. Warning More than one element of a created tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectoriz... | torch.generated.torch.as_strided#torch.as_strided |
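A small demonstration of the aliasing the warning refers to: with stride (1, 1) the two rows overlap in storage.
import torch

x = torch.arange(6.)
y = torch.as_strided(x, (2, 2), (1, 1))  # [[0., 1.], [1., 2.]]; y[0, 1] and y[1, 0] alias one element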
torch.as_tensor(data, dtype=None, device=None) → Tensor
Convert the data into a torch.Tensor. If the data is already a Tensor with the same dtype and device, no copy will be performed, otherwise a new Tensor will be returned with computational graph retained if data Tensor has requires_grad=True. Similarly, if the da... | torch.generated.torch.as_tensor#torch.as_tensor |
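The no-copy behavior, sketched via NumPy interop:
import numpy as np
import torch

a = np.array([1, 2, 3])
t = torch.as_tensor(a)  # shares memory with `a` (CPU, matching dtype), so no copy is made
a[0] = 99               # the change is visible through t as well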
torch.atan(input, *, out=None) → Tensor
Returns a new tensor with the arctangent of the elements of input. $\text{out}_{i} = \tan^{-1}(\text{input}_{i})$
Parameters
input (Tensor) – the input tensor. Keyword Arguments
out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4... | torch.generated.torch.atan#torch.atan |
torch.atan2(input, other, *, out=None) → Tensor
Element-wise arctangent of $\text{input}_{i} / \text{other}_{i}$ with consideration of the quadrant. Returns a new tensor with the signed angles in radians between vector $(\text{other}_{i}, \text{input}_{i})$ and vector $(1, 0)$. (Note tha... | torch.generated.torch.atan2#torch.atan2 |
torch.atanh(input, *, out=None) → Tensor
Returns a new tensor with the inverse hyperbolic tangent of the elements of input. Note The domain of the inverse hyperbolic tangent is (-1, 1) and values outside this range will be mapped to NaN, except for the values 1 and -1 for which the output is mapped to +/-INF respect... | torch.generated.torch.atanh#torch.atanh |
torch.atleast_1d(*tensors) [source]
Returns a 1-dimensional view of each input tensor with zero dimensions. Input tensors with one or more dimensions are returned as-is. Parameters
input (Tensor or list of Tensors) – Returns
output (Tensor or tuple of Tensors) Example:
>>> x = torch.randn(2)
>>> x
tensor([1... | torch.generated.torch.atleast_1d#torch.atleast_1d |
torch.atleast_2d(*tensors) [source]
Returns a 2-dimensional view of each input tensor with zero dimensions. Input tensors with two or more dimensions are returned as-is. Parameters
input (Tensor or list of Tensors) – Returns
output (Tensor or tuple of Tensors) Example:
>>> x = torch.tensor(1.)
>>> x
tens... | torch.generated.torch.atleast_2d#torch.atleast_2d |
torch.atleast_3d(*tensors) [source]
Returns a 3-dimensional view of each input tensor with zero dimensions. Input tensors with three or more dimensions are returned as-is. Parameters
input (Tensor or list of Tensors) – Returns
output (Tensor or tuple of Tensors) Example:
>>> x = torch.tensor(0.5)
>>> x
tenso... | torch.generated.torch.atleast_3d#torch.atleast_3d |
Automatic differentiation package - torch.autograd torch.autograd provides classes and functions implementing automatic differentiation of arbitrary scalar valued functions. It requires minimal changes to the existing code - you only need to declare Tensor s for which gradients should be computed with the requires_grad... | torch.autograd |
torch.autograd.backward(tensors, grad_tensors=None, retain_graph=None, create_graph=False, grad_variables=None, inputs=None) [source]
Computes the sum of gradients of given tensors w.r.t. graph leaves. The graph is differentiated using the chain rule. If any of tensors are non-scalar (i.e. their data has more than on... | torch.autograd#torch.autograd.backward |
class torch.autograd.detect_anomaly [source]
Context-manager that enables anomaly detection for the autograd engine. This does two things: Running the forward pass with detection enabled will allow the backward pass to print the traceback of the forward operation that created the failing backward function. Any backwa... | torch.autograd#torch.autograd.detect_anomaly |
class torch.autograd.enable_grad [source]
Context-manager that enables gradient calculation. Enables gradient calculation, if it has been disabled via no_grad or set_grad_enabled. This context manager is thread local; it will not affect computation in other threads. Also functions as a decorator. (Make sure to instan... | torch.autograd#torch.autograd.enable_grad |
class torch.autograd.Function [source]
Records operation history and defines formulas for differentiating ops. See the Note on extending the autograd engine for more details on how to use this class: https://pytorch.org/docs/stable/notes/extending.html#extending-torch-autograd Every operation performed on Tensor s cr... | torch.autograd#torch.autograd.Function |
static backward(ctx, *grad_outputs) [source]
Defines a formula for differentiating the operation. This function is to be overridden by all subclasses. It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to for... | torch.autograd#torch.autograd.Function.backward |
static forward(ctx, *args, **kwargs) [source]
Performs the operation. This function is to be overridden by all subclasses. It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types). The context can be used to store tensors that can be then retrieved during the ba... | torch.autograd#torch.autograd.Function.forward |
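A minimal custom Function tying the two overrides together (Square is an illustrative name, not from the docs):
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)  # stash inputs needed by backward
        return x ** 2

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_output  # one gradient per input of forward

y = Square.apply(torch.tensor([3.], requires_grad=True))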
class torch.autograd.function._ContextMethodMixin [source]
mark_dirty(*args) [source]
Marks given tensors as modified in an in-place operation. This should be called at most once, only from inside the forward() method, and all arguments should be inputs. Every tensor that’s been modified in-place in a call to for... | torch.autograd#torch.autograd.function._ContextMethodMixin |
mark_dirty(*args) [source]
Marks given tensors as modified in an in-place operation. This should be called at most once, only from inside the forward() method, and all arguments should be inputs. Every tensor that’s been modified in-place in a call to forward() should be given to this function, to ensure correctness ... | torch.autograd#torch.autograd.function._ContextMethodMixin.mark_dirty |
mark_non_differentiable(*args) [source]
Marks outputs as non-differentiable. This should be called at most once, only from inside the forward() method, and all arguments should be outputs. This will mark outputs as not requiring gradients, increasing the efficiency of backward computation. You still need to accept a ... | torch.autograd#torch.autograd.function._ContextMethodMixin.mark_non_differentiable |
save_for_backward(*tensors) [source]
Saves given tensors for a future call to backward(). This should be called at most once, and only from inside the forward() method. Later, saved tensors can be accessed through the saved_tensors attribute. Before returning them to the user, a check is made to ensure they weren’t u... | torch.autograd#torch.autograd.function._ContextMethodMixin.save_for_backward |
set_materialize_grads(value) [source]
Sets whether to materialize output grad tensors. Default is true. This should be called only from inside the forward() method. If true, undefined output grad tensors will be expanded to tensors full of zeros prior to calling the backward() method. | torch.autograd#torch.autograd.function._ContextMethodMixin.set_materialize_grads |
torch.autograd.functional.hessian(func, inputs, create_graph=False, strict=False, vectorize=False) [source]
Function that computes the Hessian of a given scalar function. Parameters
func (function) – a Python function that takes Tensor inputs and returns a Tensor with a single element.
inputs (tuple of Tensors o... | torch.autograd#torch.autograd.functional.hessian |
torch.autograd.functional.hvp(func, inputs, v=None, create_graph=False, strict=False) [source]
Function that computes the dot product between the Hessian of a given scalar function and a vector v at the point given by the inputs. Parameters
func (function) – a Python function that takes Tensor inputs and returns ... | torch.autograd#torch.autograd.functional.hvp |
torch.autograd.functional.jacobian(func, inputs, create_graph=False, strict=False, vectorize=False) [source]
Function that computes the Jacobian of a given function. Parameters
func (function) – a Python function that takes Tensor inputs and returns a tuple of Tensors or a Tensor.
inputs (tuple of Tensors or Ten... | torch.autograd#torch.autograd.functional.jacobian |
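A tiny example against a function with a known Jacobian:
import torch
from torch.autograd.functional import jacobian

def f(x):
    return x ** 2  # elementwise, so the Jacobian is diag(2x)

jacobian(f, torch.tensor([1., 2.]))  # tensor([[2., 0.], [0., 4.]])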
torch.autograd.functional.jvp(func, inputs, v=None, create_graph=False, strict=False) [source]
Function that computes the dot product between the Jacobian of the given function at the point given by the inputs and a vector v. Parameters
func (function) – a Python function that takes Tensor inputs and returns a tu... | torch.autograd#torch.autograd.functional.jvp |
torch.autograd.functional.vhp(func, inputs, v=None, create_graph=False, strict=False) [source]
Function that computes the dot product between a vector v and the Hessian of a given scalar function at the point given by the inputs. Parameters
func (function) – a Python function that takes Tensor inputs and returns ... | torch.autograd#torch.autograd.functional.vhp |
torch.autograd.functional.vjp(func, inputs, v=None, create_graph=False, strict=False) [source]
Function that computes the dot product between a vector v and the Jacobian of the given function at the point given by the inputs. Parameters
func (function) – a Python function that takes Tensor inputs and returns a tu... | torch.autograd#torch.autograd.functional.vjp |
torch.autograd.grad(outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False, only_inputs=True, allow_unused=False) [source]
Computes and returns the sum of gradients of outputs w.r.t. the inputs. grad_outputs should be a sequence of length matching output containing the “vector” in Jacobian-vector p... | torch.autograd#torch.autograd.grad |
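For a scalar output the returned gradient is simply dy/dx:
import torch

x = torch.tensor([2., 3.], requires_grad=True)
y = (x ** 2).sum()
(g,) = torch.autograd.grad(y, x)  # g = 2*x = tensor([4., 6.])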
torch.autograd.gradcheck(func, inputs, eps=1e-06, atol=1e-05, rtol=0.001, raise_exception=True, check_sparse_nnz=False, nondet_tol=0.0, check_undefined_grad=True, check_grad_dtypes=False, check_batched_grad=False) [source]
Check gradients computed via small finite differences against analytical gradients w.r.t. tenso... | torch.autograd#torch.autograd.gradcheck |
torch.autograd.gradgradcheck(func, inputs, grad_outputs=None, eps=1e-06, atol=1e-05, rtol=0.001, gen_non_contig_grad_outputs=False, raise_exception=True, nondet_tol=0.0, check_undefined_grad=True, check_grad_dtypes=False, check_batched_grad=False) [source]
Check gradients of gradients computed via small finite differ... | torch.autograd#torch.autograd.gradgradcheck |
class torch.autograd.no_grad [source]
Context-manager that disables gradient calculation. Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True. In this mode, t... | torch.autograd#torch.autograd.no_grad |
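Typical inference usage:
import torch

x = torch.tensor([1.], requires_grad=True)
with torch.no_grad():
    y = x * 2
print(y.requires_grad)  # False: no graph was recorded inside the block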
class torch.autograd.profiler.emit_nvtx(enabled=True, record_shapes=False) [source]
Context manager that makes every autograd operation emit an NVTX range. It is useful when running the program under nvprof: nvprof --profile-from-start off -o trace_name.prof -- <regular command here>
Unfortunately, there’s no way to... | torch.autograd#torch.autograd.profiler.emit_nvtx |
torch.autograd.profiler.load_nvprof(path) [source]
Opens an nvprof trace file and parses autograd annotations. Parameters
path (str) – path to nvprof trace | torch.autograd#torch.autograd.profiler.load_nvprof |
class torch.autograd.profiler.profile(enabled=True, *, use_cuda=False, record_shapes=False, with_flops=False, profile_memory=False, with_stack=False, use_kineto=False, use_cpu=True) [source]
Context manager that manages autograd profiler state and holds a summary of results. Under the hood it just records events of f... | torch.autograd#torch.autograd.profiler.profile |
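A common pattern combining this context manager with the key_averages/table helpers documented below:
import torch

with torch.autograd.profiler.profile() as prof:
    torch.mm(torch.randn(100, 100), torch.randn(100, 100))
print(prof.key_averages().table(sort_by="self_cpu_time_total"))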
export_chrome_trace(path) [source]
Exports an EventList as a Chrome tracing tools file. The checkpoint can be later loaded and inspected under chrome://tracing URL. Parameters
path (str) – Path where the trace will be written. | torch.autograd#torch.autograd.profiler.profile.export_chrome_trace |
key_averages(group_by_input_shape=False, group_by_stack_n=0) [source]
Averages all function events over their keys. Parameters
group_by_input_shapes – group entries by (event name, input shapes) rather than just event name. This is useful to see which input shapes contribute to the runtime most and m... | torch.autograd#torch.autograd.profiler.profile.key_averages |
property self_cpu_time_total
Returns total time spent on CPU obtained as a sum of all self times across all the events. | torch.autograd#torch.autograd.profiler.profile.self_cpu_time_total |
table(sort_by=None, row_limit=100, max_src_column_width=75, header=None, top_level_events_only=False) [source]
Prints an EventList as a nicely formatted table. Parameters
sort_by (str, optional) – Attribute used to sort entries. By default they are printed in the same order as they were registered. Valid keys inc... | torch.autograd#torch.autograd.profiler.profile.table |
total_average() [source]
Averages all events. Returns
A FunctionEventAvg object. | torch.autograd#torch.autograd.profiler.profile.total_average |
class torch.autograd.set_detect_anomaly(mode) [source]
Context-manager that sets the anomaly detection for the autograd engine on or off. set_detect_anomaly will enable or disable the autograd anomaly detection based on its argument mode. It can be used as a context-manager or as a function. See detect_anomaly above ... | torch.autograd#torch.autograd.set_detect_anomaly |
class torch.autograd.set_grad_enabled(mode) [source]
Context-manager that sets gradient calculation to on or off. set_grad_enabled will enable or disable grads based on its argument mode. It can be used as a context-manager or as a function. This context manager is thread local; it will not affect computation in othe... | torch.autograd#torch.autograd.set_grad_enabled |
torch.backends torch.backends controls the behavior of various backends that PyTorch supports. These backends include: torch.backends.cuda torch.backends.cudnn torch.backends.mkl torch.backends.mkldnn torch.backends.openmp torch.backends.cuda
torch.backends.cuda.is_built() [source]
Returns whether PyTorch is buil... | torch.backends |
torch.backends.cuda.cufft_plan_cache
cufft_plan_cache caches the cuFFT plans
size
A readonly int that shows the number of plans currently in the cuFFT plan cache.
max_size
An int that controls the capacity of the cuFFT plan cache.
clear()
Clears the cuFFT plan cache. | torch.backends#torch.backends.cuda.cufft_plan_cache |
torch.backends.cuda.is_built() [source]
Returns whether PyTorch is built with CUDA support. Note that this doesn’t necessarily mean CUDA is available; just that if this PyTorch binary were run on a machine with working CUDA drivers and devices, we would be able to use it. | torch.backends#torch.backends.cuda.is_built |
torch.backends.cuda.matmul.allow_tf32
A bool that controls whether TensorFloat-32 tensor cores may be used in matrix multiplications on Ampere or newer GPUs. See TensorFloat-32(TF32) on Ampere devices. | torch.backends#torch.backends.cuda.matmul.allow_tf32 |
size
A readonly int that shows the number of plans currently in the cuFFT plan cache. | torch.backends#torch.backends.cuda.size |
torch.backends.cudnn.allow_tf32
A bool that controls whether TensorFloat-32 tensor cores may be used in cuDNN convolutions on Ampere or newer GPUs. See TensorFloat-32(TF32) on Ampere devices. | torch.backends#torch.backends.cudnn.allow_tf32 |
torch.backends.cudnn.benchmark
A bool that, if True, causes cuDNN to benchmark multiple convolution algorithms and select the fastest. | torch.backends#torch.backends.cudnn.benchmark |
torch.backends.cudnn.deterministic
A bool that, if True, causes cuDNN to only use deterministic convolution algorithms. See also torch.are_deterministic_algorithms_enabled() and torch.use_deterministic_algorithms(). | torch.backends#torch.backends.cudnn.deterministic |
torch.backends.cudnn.enabled
A bool that controls whether cuDNN is enabled. | torch.backends#torch.backends.cudnn.enabled |
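How these flags are typically set together (a sketch; the right values depend on the workload):
import torch

torch.backends.cudnn.enabled = True
torch.backends.cudnn.benchmark = True       # autotune conv algorithms for fixed input shapes
torch.backends.cudnn.deterministic = False  # set True (and benchmark False) for reproducibility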
torch.backends.cudnn.is_available() [source]
Returns a bool indicating if cuDNN is currently available. | torch.backends#torch.backends.cudnn.is_available |
torch.backends.cudnn.version() [source]
Returns the version of cuDNN. | torch.backends#torch.backends.cudnn.version |
torch.backends.mkl.is_available() [source]
Returns whether PyTorch is built with MKL support. | torch.backends#torch.backends.mkl.is_available |
torch.backends.mkldnn.is_available() [source]
Returns whether PyTorch is built with MKL-DNN support. | torch.backends#torch.backends.mkldnn.is_available |