call_method(method_name, args=None, kwargs=None, type_expr=None) [source] Insert a call_method Node into the Graph. A call_method node represents a call to a given method on the 0th element of args. Parameters method_name (str) – The name of the method to apply to the self argument. For example, if args[0] is a N...
torch.fx#torch.fx.Graph.call_method
call_module(module_name, args=None, kwargs=None, type_expr=None) [source] Insert a call_module Node into the Graph. A call_module node represents a call to the forward() function of a Module in the Module hierarchy. Parameters module_name (str) – The qualified name of the Module in the Module hierarchy to be call...
torch.fx#torch.fx.Graph.call_module
create_node(op, target, args=None, kwargs=None, name=None, type_expr=None) [source] Create a Node and add it to the Graph at the current insert-point. Note that the current insert-point can be set via Graph.inserting_before() and Graph.inserting_after(). Parameters op (str) – the opcode for this Node. One of ‘cal...
torch.fx#torch.fx.Graph.create_node
erase_node(to_erase) [source] Erases a Node from the Graph. Throws an exception if there are still users of that node in the Graph. Parameters to_erase (Node) – The Node to erase from the Graph.
torch.fx#torch.fx.Graph.erase_node
get_attr(qualified_name, type_expr=None) [source] Insert a get_attr node into the Graph. A get_attr Node represents the fetch of an attribute from the Module hierarchy. Parameters qualified_name (str) – the fully-qualified name of the attribute to be retrieved. For example, if the traced Module has a submodule na...
torch.fx#torch.fx.Graph.get_attr
graph_copy(g, val_map) [source] Copy all nodes from a given graph into self. Parameters g (Graph) – The source graph from which to copy Nodes. val_map (Dict[Node, Node]) – a dictionary that will be populated with a mapping from nodes in g to nodes in self. Note that val_map can be passed in with values in it alr...
torch.fx#torch.fx.Graph.graph_copy
inserting_after(n=None) [source] Set the point at which create_node and companion methods will insert into the graph. When used within a ‘with’ statement, this will temporarily set the insert point and then restore it when the with statement exits: with g.inserting_after(n): ... # inserting after node n ... # inser...
torch.fx#torch.fx.Graph.inserting_after
inserting_before(n=None) [source] Set the point at which create_node and companion methods will insert into the graph. When used within a ‘with’ statement, this will temporarily set the insert point and then restore it when the with statement exits: with g.inserting_before(n): ... # inserting before node n ... # in...
torch.fx#torch.fx.Graph.inserting_before
lint(root=None) [source] Runs various checks on this Graph to make sure it is well-formed. In particular: - Checks Nodes have correct ownership (owned by this graph) - Checks Nodes appear in topological order - If root is provided, checks that targets exist in root Parameters root (Optional[torch.nn.Module]) – The ...
torch.fx#torch.fx.Graph.lint
property nodes Get the list of Nodes that constitute this Graph. Note that this Node list representation is a doubly-linked list. Mutations during iteration (e.g. delete a Node, add a Node) are safe. Returns A doubly-linked list of Nodes. Note that reversed can be called on this list to switch iteration order.
torch.fx#torch.fx.Graph.nodes
node_copy(node, arg_transform=<function Graph.<lambda>>) [source] Copy a node from one graph into another. arg_transform needs to transform arguments from the graph of node to the graph of self. Example: # Copying all the nodes in `g` into `new_graph` g : torch.fx.Graph = ... new_graph = torch.fx.Graph() value_remap ...
torch.fx#torch.fx.Graph.node_copy
output(result, type_expr=None) [source] Insert an output Node into the Graph. An output node represents a return statement in Python code. result is the value that should be returned. Parameters result (Argument) – The value to be returned. type_expr (Optional[Any]) – an optional type annotation representing the...
torch.fx#torch.fx.Graph.output
placeholder(name, type_expr=None) [source] Insert a placeholder node into the Graph. A placeholder represents a function input. Parameters name (str) – A name for the input value. This corresponds to the name of the positional argument to the function this Graph represents. type_expr (Optional[Any]) – an optiona...
torch.fx#torch.fx.Graph.placeholder
print_tabular() [source] Prints the intermediate representation of the graph in tabular format.
torch.fx#torch.fx.Graph.print_tabular
python_code(root_module) [source] Turn this Graph into valid Python code. Parameters root_module (str) – The name of the root module on which to look-up qualified name targets. This is usually ‘self’. Returns The string source code generated from this Graph.
torch.fx#torch.fx.Graph.python_code
__init__() [source] Construct an empty Graph.
torch.fx#torch.fx.Graph.__init__
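The Graph methods above compose into a small end-to-end sketch: build a graph by hand, lint it, and wrap it in a GraphModule so it can be executed (the relu op here is an illustrative choice, not from the entries above):

```python
import torch
import torch.fx

# Build a graph by hand: one input, one op, one output.
g = torch.fx.Graph()
x = g.placeholder('x')                 # function input
y = g.call_function(torch.relu, (x,))  # call_function node targeting torch.relu
g.output(y)                            # return statement
g.lint()                               # sanity-check well-formedness

# Wrap the graph in a GraphModule so it can be called like a Module.
gm = torch.fx.GraphModule(torch.nn.Module(), g)
print(gm(torch.tensor([-1.0, 2.0])))   # relu -> tensor([0., 2.])
```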
class torch.fx.GraphModule(root, graph, class_name='GraphModule') [source] GraphModule is an nn.Module generated from an fx.Graph. GraphModule has a graph attribute, as well as code and forward attributes generated from that graph. Warning When graph is reassigned, code and forward will be automatically regenerated....
torch.fx#torch.fx.GraphModule
property code Return the Python code generated from the Graph underlying this GraphModule.
torch.fx#torch.fx.GraphModule.code
property graph Return the Graph underlying this GraphModule
torch.fx#torch.fx.GraphModule.graph
recompile() [source] Recompile this GraphModule from its graph attribute. This should be called after editing the contained graph, otherwise the generated code of this GraphModule will be out of date.
torch.fx#torch.fx.GraphModule.recompile
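A sketch of the edit-then-recompile workflow described above (the add-to-mul swap is a made-up illustration):

```python
import operator

import torch
import torch.fx

def double(x):
    return x + x  # traces to a call_function node targeting operator.add

gm = torch.fx.symbolic_trace(double)
for node in gm.graph.nodes:
    if node.op == 'call_function' and node.target is operator.add:
        node.target = operator.mul  # in-place edit of the graph

gm.recompile()  # regenerate code/forward from the edited graph
print(gm(torch.tensor(3.0)))  # now computes x * x -> tensor(9.)
```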
to_folder(folder, module_name='FxModule') [source] Dumps out module to folder with module_name so that it can be imported with from <folder> import <module_name> Parameters folder (Union[str, os.PathLike]) – The folder to write the code out to module_name (str) – Top-level name to use for the Module while writin...
torch.fx#torch.fx.GraphModule.to_folder
__init__(root, graph, class_name='GraphModule') [source] Construct a GraphModule. Parameters root (Union[torch.nn.Module, Dict[str, Any]) – root can either be an nn.Module instance or a Dict mapping strings to any attribute type. In the case that root is a Module, any references to Module-based objects (via quali...
torch.fx#torch.fx.GraphModule.__init__
class torch.fx.Interpreter(module) [source] An Interpreter executes an FX graph Node-by-Node. This pattern can be useful for many things, including writing code transformations as well as analysis passes. Methods in the Interpreter class can be overridden to customize the behavior of execution. The map of overrideabl...
torch.fx#torch.fx.Interpreter
call_function(target, args, kwargs) [source] Execute a call_function node and return the result. Parameters target (Target) – The call target for this node. See Node for details on semantics args (Tuple) – Tuple of positional args for this invocation kwargs (Dict) – Dict of keyword arguments for this invocation...
torch.fx#torch.fx.Interpreter.call_function
call_method(target, args, kwargs) [source] Execute a call_method node and return the result. Parameters target (Target) – The call target for this node. See Node for details on semantics args (Tuple) – Tuple of positional args for this invocation kwargs (Dict) – Dict of keyword arguments for this invocation ...
torch.fx#torch.fx.Interpreter.call_method
call_module(target, args, kwargs) [source] Execute a call_module node and return the result. Parameters target (Target) – The call target for this node. See Node for details on semantics args (Tuple) – Tuple of positional args for this invocation kwargs (Dict) – Dict of keyword arguments for this invocation ...
torch.fx#torch.fx.Interpreter.call_module
fetch_args_kwargs_from_env(n) [source] Fetch the concrete values of args and kwargs of node n from the current execution environment. Parameters n (Node) – The node for which args and kwargs should be fetched. Returns args and kwargs with concrete values for n. Return type Tuple[Tuple, Dict]
torch.fx#torch.fx.Interpreter.fetch_args_kwargs_from_env
fetch_attr(target) [source] Fetch an attribute from the Module hierarchy of self.module. Parameters target (str) – The fully-qualified name of the attribute to fetch Returns The value of the attribute. Return type Any
torch.fx#torch.fx.Interpreter.fetch_attr
get_attr(target, args, kwargs) [source] Execute a get_attr node. Will retrieve an attribute value from the Module hierarchy of self.module. Parameters target (Target) – The call target for this node. See Node for details on semantics args (Tuple) – Tuple of positional args for this invocation kwargs (Dict) – Di...
torch.fx#torch.fx.Interpreter.get_attr
map_nodes_to_values(args, n) [source] Recursively descend through args and look up the concrete value for each Node in the current execution environment. Parameters args (Argument) – Data structure within which to look up concrete values n (Node) – Node to which args belongs. This is only used for error reportin...
torch.fx#torch.fx.Interpreter.map_nodes_to_values
output(target, args, kwargs) [source] Execute an output node. This really just retrieves the value referenced by the output node and returns it. Parameters target (Target) – The call target for this node. See Node for details on semantics args (Tuple) – Tuple of positional args for this invocation kwargs (Dict)...
torch.fx#torch.fx.Interpreter.output
placeholder(target, args, kwargs) [source] Execute a placeholder node. Note that this is stateful: Interpreter maintains an internal iterator over arguments passed to run and this method returns next() on that iterator. Parameters target (Target) – The call target for this node. See Node for details on semantics ...
torch.fx#torch.fx.Interpreter.placeholder
run(*args, initial_env=None) [source] Run module via interpretation and return the result. Parameters *args – The arguments to the Module to run, in positional order initial_env (Optional[Dict[Node, Any]]) – An optional starting environment for execution. This is a dict mapping Node to any value. This can be use...
torch.fx#torch.fx.Interpreter.run
run_node(n) [source] Run a specific node n and return the result. Calls into placeholder, get_attr, call_function, call_method, call_module, or output depending on node.op Parameters n (Node) – The Node to execute Returns The result of executing n Return type Any
torch.fx#torch.fx.Interpreter.run_node
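The override points above can be sketched with a minimal Interpreter subclass that executes normally while recording each node's opcode (OpLogger is a hypothetical name, not part of the API):

```python
import torch
import torch.fx

class OpLogger(torch.fx.Interpreter):
    """Runs the module normally but records every node's opcode."""
    def __init__(self, module):
        super().__init__(module)
        self.ops = []

    def run_node(self, n):
        self.ops.append(n.op)      # record, then defer to default execution
        return super().run_node(n)

gm = torch.fx.symbolic_trace(lambda x: torch.relu(x))
interp = OpLogger(gm)
out = interp.run(torch.tensor([-2.0, 1.0]))
print(interp.ops)  # ['placeholder', 'call_function', 'output']
```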
class torch.fx.Node(graph, name, op, target, args, kwargs, type=None) [source] Node is the data structure that represents individual operations within a Graph. For the most part, Nodes represent callsites to various entities, such as operators, methods, and Modules (some exceptions include nodes that specify function...
torch.fx#torch.fx.Node
property all_input_nodes Return all Nodes that are inputs to this Node. This is equivalent to iterating over args and kwargs and only collecting the values that are Nodes. Returns List of Nodes that appear in the args and kwargs of this Node, in that order.
torch.fx#torch.fx.Node.all_input_nodes
append(x) [source] Insert x after this node in the list of nodes in the graph. Equivalent to self.next.prepend(x) Parameters x (Node) – The node to put after this node. Must be a member of the same graph.
torch.fx#torch.fx.Node.append
property args The tuple of arguments to this Node. The interpretation of arguments depends on the node’s opcode. See the Node docstring for more information. Assignment to this property is allowed. All accounting of uses and users is updated automatically on assignment.
torch.fx#torch.fx.Node.args
property kwargs The dict of keyword arguments to this Node. The interpretation of arguments depends on the node’s opcode. See the Node docstring for more information. Assignment to this property is allowed. All accounting of uses and users is updated automatically on assignment.
torch.fx#torch.fx.Node.kwargs
property next Returns the next Node in the linked list of Nodes. Returns The next Node in the linked list of Nodes.
torch.fx#torch.fx.Node.next
prepend(x) [source] Insert x before this node in the list of nodes in the graph. Example:
Before: p -> self
        bx -> x -> ax
After:  p -> x -> self
        bx -> ax
Parameters x (Node) – The node to put before this node. Must be a member of the same graph.
torch.fx#torch.fx.Node.prepend
property prev Returns the previous Node in the linked list of Nodes. Returns The previous Node in the linked list of Nodes.
torch.fx#torch.fx.Node.prev
replace_all_uses_with(replace_with) [source] Replace all uses of self in the Graph with the Node replace_with. Parameters replace_with (Node) – The node to replace all uses of self with. Returns The list of Nodes on which this change was made.
torch.fx#torch.fx.Node.replace_all_uses_with
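The node-level APIs above combine into the usual rewrite recipe: insert a replacement node, redirect all uses to it, then erase the original. A minimal sketch (swapping relu for neg purely for illustration):

```python
import torch
import torch.fx

gm = torch.fx.symbolic_trace(lambda x: torch.relu(x))

for n in list(gm.graph.nodes):  # snapshot: we mutate while iterating
    if n.op == 'call_function' and n.target is torch.relu:
        with gm.graph.inserting_after(n):
            repl = gm.graph.call_function(torch.neg, n.args)
        n.replace_all_uses_with(repl)  # downstream users now point at repl
        gm.graph.erase_node(n)         # safe: n has no users left

gm.recompile()
print(gm(torch.tensor([1.0, -2.0])))  # neg -> tensor([-1., 2.])
```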
class torch.fx.Proxy(node, tracer=None) [source] Proxy objects are Node wrappers that flow through the program during symbolic tracing and record all the operations (torch function calls, method calls, operators) that they touch into the growing FX Graph. If you’re doing graph transforms, you can wrap your own Proxy ...
torch.fx#torch.fx.Proxy
torch.fx.replace_pattern(gm, pattern, replacement) [source] Matches all possible non-overlapping sets of operators and their data dependencies (pattern) in the Graph of a GraphModule (gm), then replaces each of these matched subgraphs with another subgraph (replacement). Parameters gm – The GraphModule that wraps...
torch.fx#torch.fx.replace_pattern
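A sketch of subgraph rewriting with replace_pattern, collapsing a (contrived) double relu into a single one; pattern and replacement are ordinary callables whose traces define the subgraphs:

```python
import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(torch.relu(x))

def pattern(x):
    return torch.relu(torch.relu(x))   # subgraph to match

def replacement(x):
    return torch.relu(x)               # subgraph to substitute

gm = torch.fx.symbolic_trace(M())
torch.fx.replace_pattern(gm, pattern, replacement)  # mutates gm in place

t = torch.tensor([-1.0, 3.0])
print(gm(t))  # tensor([0., 3.]) -- same result, one relu node fewer
```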
torch.fx.symbolic_trace(root, concrete_args=None) [source] Symbolic tracing API Given an nn.Module or function instance root, this function will return a GraphModule constructed by recording operations seen while tracing through root. Parameters root (Union[torch.nn.Module, Callable]) – Module or function to be t...
torch.fx#torch.fx.symbolic_trace
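A minimal sketch of what the trace records for a small Module: the submodule call, the tensor method call, and the framing placeholder/output nodes.

```python
import torch
import torch.fx

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.lin(x).relu()

gm = torch.fx.symbolic_trace(M())
print([n.op for n in gm.graph.nodes])
# ['placeholder', 'call_module', 'call_method', 'output']
```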
class torch.fx.Tracer(autowrap_modules=(math,)) [source] Tracer is the class that implements the symbolic tracing functionality of torch.fx.symbolic_trace. A call to symbolic_trace(m) is equivalent to Tracer().tra...
torch.fx#torch.fx.Tracer
call_module(m, forward, args, kwargs) [source] Method that specifies the behavior of this Tracer when it encounters a call to an nn.Module instance. By default, the behavior is to check if the called module is a leaf module via is_leaf_module. If it is, emit a call_module node referring to m in the Graph. Otherwise, ...
torch.fx#torch.fx.Tracer.call_module
create_arg(a) [source] A method to specify the behavior of tracing when preparing values to be used as arguments to nodes in the Graph. By default, the behavior includes: Iterate through collection types (e.g. tuple, list, dict) and recursively call create_arg on the elements. Given a Proxy object, return a referen...
torch.fx#torch.fx.Tracer.create_arg
create_args_for_root(root_fn, is_module, concrete_args=None) [source] Create placeholder nodes corresponding to the signature of the root Module. This method introspects root’s signature and emits those nodes accordingly, also supporting *args and **kwargs.
torch.fx#torch.fx.Tracer.create_args_for_root
is_leaf_module(m, module_qualified_name) [source] A method to specify whether a given nn.Module is a “leaf” module. Leaf modules are the atomic units that appear in the IR, referenced by call_module calls. By default, Modules in the PyTorch standard library namespace (torch.nn) are leaf modules. All other modules are...
torch.fx#torch.fx.Tracer.is_leaf_module
path_of_module(mod) [source] Helper method to find the qualified name of mod in the Module hierarchy of root. For example, if root has a submodule named foo, which has a submodule named bar, passing bar into this function will return the string “foo.bar”. Parameters mod (torch.nn.Module) – The Module to retrieve the qualified ...
torch.fx#torch.fx.Tracer.path_of_module
trace(root, concrete_args=None) [source] Trace root and return the corresponding FX Graph representation. root can either be an nn.Module instance or a Python callable. Note that after this call, self.root may be different from the root passed in here. For example, when a free function is passed to trace(), we will c...
torch.fx#torch.fx.Tracer.trace
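A sketch of customizing tracing via is_leaf_module: returning False for every module forces the tracer to descend into all submodules, so even torch.nn modules are inlined rather than recorded as opaque call_module nodes (InlineEverything is a hypothetical name):

```python
import torch
import torch.fx

class InlineEverything(torch.fx.Tracer):
    def is_leaf_module(self, m, module_qualified_name):
        return False  # no leaves: trace through every submodule

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(3, 3)

    def forward(self, x):
        return self.lin(x)

graph = InlineEverything().trace(M())
# Linear is no longer an opaque call_module node; its internals
# (get_attr on weight/bias plus a call_function) appear instead.
print({n.op for n in graph.nodes})
```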
class torch.fx.Transformer(module) [source] Transformer is a special type of interpreter that produces a new Module. It exposes a transform() method that returns the transformed Module. Transformer does not require arguments to run, as Interpreter does. Transformer works entirely symbolically. Example Suppose we want...
torch.fx#torch.fx.Transformer
get_attr(target, args, kwargs) [source] Execute a get_attr node. In Transformer, this is overridden to insert a new get_attr node into the output graph. Parameters target (Target) – The call target for this node. See Node for details on semantics args (Tuple) – Tuple of positional args for this invocation kwarg...
torch.fx#torch.fx.Transformer.get_attr
placeholder(target, args, kwargs) [source] Execute a placeholder node. In Transformer, this is overridden to insert a new placeholder into the output graph. Parameters target (Target) – The call target for this node. See Node for details on semantics args (Tuple) – Tuple of positional args for this invocation k...
torch.fx#torch.fx.Transformer.placeholder
transform() [source] Transform self.module and return the transformed GraphModule.
torch.fx#torch.fx.Transformer.transform
torch.fx.wrap(fn_or_name) [source] This function can be called at module-level scope to register fn_or_name as a “leaf function”. A “leaf function” will be preserved as a CallFunction node in the FX trace instead of being traced through: # foo/bar/baz.py def my_custom_function(x, y): return x * x + y * y torch.f...
torch.fx#torch.fx.wrap
torch.gather(input, dim, index, *, sparse_grad=False, out=None) → Tensor Gathers values along an axis specified by dim. For a 3-D tensor the output is specified by: out[i][j][k] = input[index[i][j][k]][j][k] # if dim == 0 out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1 out[i][j][k] = input[i][j][index[i][j...
torch.generated.torch.gather#torch.gather
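The indexing rule above, specialized to dim == 1 on a 2-D tensor, in a small sketch:

```python
import torch

t = torch.tensor([[1, 2],
                  [3, 4]])
index = torch.tensor([[0, 0],
                      [1, 0]])
# out[i][j] = t[i][index[i][j]]  (dim == 1)
out = torch.gather(t, 1, index)
print(out)  # tensor([[1, 1], [4, 3]])
```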
torch.gcd(input, other, *, out=None) → Tensor Computes the element-wise greatest common divisor (GCD) of input and other. Both input and other must have integer types. Note This defines gcd(0, 0) = 0. Parameters input (Tensor) – the input tensor. other (Tensor) – the second input tensor Keyword Ar...
torch.generated.torch.gcd#torch.gcd
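A quick sketch, including the gcd(0, 0) = 0 convention noted above:

```python
import torch

a = torch.tensor([4, 9, 0])
b = torch.tensor([6, 3, 0])
print(torch.gcd(a, b))  # tensor([2, 3, 0]) -- note gcd(0, 0) = 0
```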
torch.ge(input, other, *, out=None) → Tensor Computes input ≥ other element-wise. The second argument can be a number or a tensor whose shape is broadcastable with the first argument. Parameters input (Tensor) – the tensor to compare other (Tensor or float) – the tensor or value to c...
torch.generated.torch.ge#torch.ge
class torch.Generator(device='cpu') → Generator Creates and returns a generator object that manages the state of the algorithm which produces pseudo random numbers. Used as a keyword argument in many In-place random sampling functions. Parameters device (torch.device, optional) – the desired device for the generato...
torch.generated.torch.generator#torch.Generator
device Generator.device -> device Gets the current device of the generator. Example: >>> g_cpu = torch.Generator() >>> g_cpu.device device(type='cpu')
torch.generated.torch.generator#torch.Generator.device
get_state() → Tensor Returns the Generator state as a torch.ByteTensor. Returns A torch.ByteTensor which contains all the necessary bits to restore a Generator to a specific point in time. Return type Tensor Example: >>> g_cpu = torch.Generator() >>> g_cpu.get_state()
torch.generated.torch.generator#torch.Generator.get_state
initial_seed() → int Returns the initial seed for generating random numbers. Example: >>> g_cpu = torch.Generator() >>> g_cpu.initial_seed() 2147483647
torch.generated.torch.generator#torch.Generator.initial_seed
manual_seed(seed) → Generator Sets the seed for generating random numbers. Returns a torch.Generator object. It is recommended to set a large seed, i.e. a number that has a good balance of 0 and 1 bits. Avoid having many 0 bits in the seed. Parameters seed (int) – The desired seed. Value must be within the inclusiv...
torch.generated.torch.generator#torch.Generator.manual_seed
seed() → int Gets a non-deterministic random number from std::random_device or the current time and uses it to seed a Generator. Example: >>> g_cpu = torch.Generator() >>> g_cpu.seed() 1516516984916
torch.generated.torch.generator#torch.Generator.seed
set_state(new_state) → void Sets the Generator state. Parameters new_state (torch.ByteTensor) – The desired state. Example: >>> g_cpu = torch.Generator() >>> g_cpu_other = torch.Generator() >>> g_cpu.set_state(g_cpu_other.get_state())
torch.generated.torch.generator#torch.Generator.set_state
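The Generator methods above compose into a simple reproducibility sketch: cloning the state via get_state/set_state makes two generators emit identical random streams.

```python
import torch

g1 = torch.Generator()
g1.manual_seed(42)

g2 = torch.Generator()
g2.set_state(g1.get_state())  # clone g1's state into g2

a = torch.randn(3, generator=g1)
b = torch.randn(3, generator=g2)
print(torch.equal(a, b))  # True -- identical streams
```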
torch.geqrf(input, *, out=None) -> (Tensor, Tensor) This is a low-level function for calling LAPACK directly. This function returns a namedtuple (a, tau) as defined in LAPACK documentation for geqrf . You’ll generally want to use torch.qr() instead. Computes a QR decomposition of input, but without constructing Q a...
torch.generated.torch.geqrf#torch.geqrf
torch.ger(input, vec2, *, out=None) → Tensor Alias of torch.outer(). Warning This function is deprecated and will be removed in a future PyTorch release. Use torch.outer() instead.
torch.generated.torch.ger#torch.ger
torch.get_default_dtype() → torch.dtype Get the current default floating point torch.dtype. Example: >>> torch.get_default_dtype() # initial default for floating point is torch.float32 torch.float32 >>> torch.set_default_dtype(torch.float64) >>> torch.get_default_dtype() # default is now changed to torch.float64 to...
torch.generated.torch.get_default_dtype#torch.get_default_dtype
torch.get_num_interop_threads() → int Returns the number of threads used for inter-op parallelism on CPU (e.g. in JIT interpreter)
torch.generated.torch.get_num_interop_threads#torch.get_num_interop_threads
torch.get_num_threads() → int Returns the number of threads used for parallelizing CPU operations
torch.generated.torch.get_num_threads#torch.get_num_threads
torch.get_rng_state() [source] Returns the random number generator state as a torch.ByteTensor.
torch.generated.torch.get_rng_state#torch.get_rng_state
torch.greater(input, other, *, out=None) → Tensor Alias for torch.gt().
torch.generated.torch.greater#torch.greater
torch.greater_equal(input, other, *, out=None) → Tensor Alias for torch.ge().
torch.generated.torch.greater_equal#torch.greater_equal
torch.gt(input, other, *, out=None) → Tensor Computes input > other element-wise. The second argument can be a number or a tensor whose shape is broadcastable with the first argument. Parameters input (Tensor) – the tensor to compare other (Tensor or float) – the tensor or value to comp...
torch.generated.torch.gt#torch.gt
torch.hamming_window(window_length, periodic=True, alpha=0.54, beta=0.46, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor Hamming window function. w[n] = \alpha - \beta \cos\left(\frac{2 \pi n}{N - 1}\right), where N is the full window size. The input wind...
torch.generated.torch.hamming_window#torch.hamming_window
torch.hann_window(window_length, periodic=True, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor Hann window function. w[n] = \frac{1}{2}\left[1 - \cos\left(\frac{2 \pi n}{N - 1}\right)\right] = \sin^2\left(\frac{\pi n}{N - 1}\right), wher...
torch.generated.torch.hann_window#torch.hann_window
torch.heaviside(input, values, *, out=None) → Tensor Computes the Heaviside step function for each element in input. The Heaviside step function is defined as: \text{heaviside}(input, values) = \begin{cases} 0, & \text{if input} < 0 \\ \text{values}, & \text{if input} == 0 \\ 1, & \text{if input} > 0 \end{cases} ...
torch.generated.torch.heaviside#torch.heaviside
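The three-way definition above in a small sketch; the values tensor supplies the result where input == 0 and broadcasts against input:

```python
import torch

inp = torch.tensor([-1.5, 0.0, 2.0])
values = torch.tensor([0.5])          # used where input == 0 (broadcast)
print(torch.heaviside(inp, values))   # tensor([0.0000, 0.5000, 1.0000])
```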
torch.histc(input, bins=100, min=0, max=0, *, out=None) → Tensor Computes the histogram of a tensor. The elements are sorted into equal width bins between min and max. If min and max are both zero, the minimum and maximum values of the data are used. Elements lower than min and higher than max are ignored. Parameter...
torch.generated.torch.histc#torch.histc
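A small sketch of the equal-width binning described above:

```python
import torch

t = torch.tensor([1.0, 2.0, 1.0])
# 4 equal-width bins over [0, 3]: [0, 0.75), [0.75, 1.5), [1.5, 2.25), [2.25, 3]
print(torch.histc(t, bins=4, min=0, max=3))  # tensor([0., 2., 1., 0.])
```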
torch.hspmm(mat1, mat2, *, out=None) → Tensor Performs a matrix multiplication of a sparse COO matrix mat1 and a strided matrix mat2. The result is a (1 + 1)-dimensional hybrid COO matrix. Parameters mat1 (Tensor) – the first sparse matrix to be matrix multiplied mat2 (Tensor) – the second strided matrix to be m...
torch.sparse#torch.hspmm
torch.hstack(tensors, *, out=None) → Tensor Stack tensors in sequence horizontally (column wise). This is equivalent to concatenation along the first axis for 1-D tensors, and along the second axis for all other tensors. Parameters tensors (sequence of Tensors) – sequence of tensors to concatenate Keyword Argument...
torch.generated.torch.hstack#torch.hstack
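The axis rule above (first axis for 1-D inputs, second axis otherwise) in a short sketch:

```python
import torch

# 1-D: concatenates along dim 0
print(torch.hstack((torch.tensor([1, 2, 3]), torch.tensor([4, 5, 6]))))
# tensor([1, 2, 3, 4, 5, 6])

# 2-D and up: concatenates along dim 1
a = torch.tensor([[1], [2]])
b = torch.tensor([[3], [4]])
print(torch.hstack((a, b)))  # tensor([[1, 3], [2, 4]])
```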
torch.hub PyTorch Hub is a pre-trained model repository designed to facilitate research reproducibility. Publishing models PyTorch Hub supports publishing pre-trained models (model definitions and pre-trained weights) to a GitHub repository by adding a simple hubconf.py file; hubconf.py can have multiple entrypoints. Ea...
torch.hub
torch.hub.download_url_to_file(url, dst, hash_prefix=None, progress=True) [source] Download object at the given URL to a local path. Parameters url (string) – URL of the object to download dst (string) – Full path where object will be saved, e.g. /tmp/temporary_file hash_prefix (string, optional) – If not None...
torch.hub#torch.hub.download_url_to_file
torch.hub.get_dir() [source] Get the Torch Hub cache directory used for storing downloaded models & weights. If set_dir() is not called, default path is $TORCH_HOME/hub where environment variable $TORCH_HOME defaults to $XDG_CACHE_HOME/torch. $XDG_CACHE_HOME follows the X Desktop Group specification of the Linux files...
torch.hub#torch.hub.get_dir
torch.hub.help(github, model, force_reload=False) [source] Show the docstring of entrypoint model. Parameters github (string) – a string with format <repo_owner/repo_name[:tag_name]> with an optional tag/branch. The default branch is master if not specified. Example: ‘pytorch/vision[:hub]’ model (string) – a str...
torch.hub#torch.hub.help
torch.hub.list(github, force_reload=False) [source] List all entrypoints available in github hubconf. Parameters github (string) – a string with format “repo_owner/repo_name[:tag_name]” with an optional tag/branch. The default branch is master if not specified. Example: ‘pytorch/vision[:hub]’ force_reload (bool,...
torch.hub#torch.hub.list
torch.hub.load(repo_or_dir, model, *args, **kwargs) [source] Load a model from a GitHub repo or a local directory. Note: Loading a model is the typical use case, but this can also be used for loading other objects such as tokenizers, loss functions, etc. If source is 'github', repo_or_dir is expected to be of the ...
torch.hub#torch.hub.load
torch.hub.load_state_dict_from_url(url, model_dir=None, map_location=None, progress=True, check_hash=False, file_name=None) [source] Loads the Torch serialized object at the given URL. If downloaded file is a zip file, it will be automatically decompressed. If the object is already present in model_dir, it’s deserial...
torch.hub#torch.hub.load_state_dict_from_url
torch.hub.set_dir(d) [source] Optionally set the Torch Hub directory used to save downloaded models & weights. Parameters d (string) – path to a local folder to save downloaded models & weights.
torch.hub#torch.hub.set_dir
torch.hypot(input, other, *, out=None) → Tensor Given the legs of a right triangle, return its hypotenuse. \text{out}_{i} = \sqrt{\text{input}_{i}^{2} + \text{other}_{i}^{2}} The shapes of input and other must be broadcastable. Parameters input (Tensor) – the first input tensor other (Tens...
torch.generated.torch.hypot#torch.hypot
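A quick sketch with the classic Pythagorean triples:

```python
import torch

legs_a = torch.tensor([3.0, 5.0])
legs_b = torch.tensor([4.0, 12.0])
print(torch.hypot(legs_a, legs_b))  # tensor([ 5., 13.])
```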
torch.i0(input, *, out=None) → Tensor Computes the zeroth order modified Bessel function of the first kind for each element of input. \text{out}_{i} = I_0(\text{input}_{i}) = \sum_{k=0}^{\infty} \frac{(\text{input}_{i}^2/4)^k}{(k!)^2} Parameters input (Tensor) – the input te...
torch.generated.torch.i0#torch.i0
torch.igamma(input, other, *, out=None) → Tensor Computes the regularized lower incomplete gamma function: \text{out}_{i} = \frac{1}{\Gamma(\text{input}_i)} \int_0^{\text{other}_i} t^{\text{input}_i-1} e^{-t} dt where both \text{input}_i and \text{other}_i are wea...
torch.generated.torch.igamma#torch.igamma
torch.igammac(input, other, *, out=None) → Tensor Computes the regularized upper incomplete gamma function: \text{out}_{i} = \frac{1}{\Gamma(\text{input}_i)} \int_{\text{other}_i}^{\infty} t^{\text{input}_i-1} e^{-t} dt where both \text{input}_i and \text{other}_i are ...
torch.generated.torch.igammac#torch.igammac
torch.imag(input) → Tensor Returns a new tensor containing imaginary values of the self tensor. The returned tensor and self share the same underlying storage. Warning imag() is only supported for tensors with complex dtypes. Parameters input (Tensor) – the input tensor. Example: >>> x=torch.randn(4, dtype=t...
torch.generated.torch.imag#torch.imag
torch.index_select(input, dim, index, *, out=None) → Tensor Returns a new tensor which indexes the input tensor along dimension dim using the entries in index which is a LongTensor. The returned tensor has the same number of dimensions as the original tensor (input). The dimth dimension has the same size as the lengt...
torch.generated.torch.index_select#torch.index_select
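The dimension-preserving selection described above in a short sketch, picking rows of a 3x4 tensor:

```python
import torch

x = torch.arange(12).reshape(3, 4)
rows = torch.index_select(x, 0, torch.tensor([0, 2]))  # pick rows 0 and 2
print(rows)
# tensor([[ 0,  1,  2,  3],
#         [ 8,  9, 10, 11]])
```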
torch.initial_seed() [source] Returns the initial seed for generating random numbers as a Python long.
torch.generated.torch.initial_seed#torch.initial_seed
torch.inner(input, other, *, out=None) → Tensor Computes the dot product for 1D tensors. For higher dimensions, sums the product of elements from input and other along their last dimension. Note If either input or other is a scalar, the result is equivalent to torch.mul(input, other). If both input and other are non...
torch.generated.torch.inner#torch.inner
torch.inverse(input, *, out=None) → Tensor Takes the inverse of the square matrix input. input can be batches of 2D square tensors, in which case this function would return a tensor composed of individual inverses. Supports real and complex input. Note torch.inverse() is deprecated. Please use torch.linalg.inv() ins...
torch.generated.torch.inverse#torch.inverse