class torch.iinfo
torch.type_info#torch.torch.iinfo
class torch.layout
torch.tensor_attributes#torch.torch.layout
class torch.memory_format
torch.tensor_attributes#torch.torch.memory_format
torch.trace(input) → Tensor Returns the sum of the elements of the diagonal of the input 2-D matrix. Example: >>> x = torch.arange(1., 10.).view(3, 3) >>> x tensor([[ 1., 2., 3.], [ 4., 5., 6.], [ 7., 8., 9.]]) >>> torch.trace(x) tensor(15.)
torch.generated.torch.trace#torch.trace
torch.transpose(input, dim0, dim1) → Tensor Returns a tensor that is a transposed version of input. The given dimensions dim0 and dim1 are swapped. The resulting out tensor shares its underlying storage with the input tensor, so changing the content of one would change the content of the other. Parameters input (...
torch.generated.torch.transpose#torch.transpose
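A minimal sketch of the view semantics described above (illustrative values only): because the transposed result shares storage with the input, mutating one is visible through the other.

```python
import torch

x = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])
y = torch.transpose(x, 0, 1)   # swap dims 0 and 1; y shares x's storage
x[0, 0] = 9                    # the write is visible through the view y
```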
torch.trapz(y, x, *, dim=-1) → Tensor Estimate ∫ y dx along dim, using the trapezoid rule. Parameters y (Tensor) – The values of the function to integrate x (Tensor) – The points at which the function y is sampled. If x is not in ascending order, intervals on which it is decreasing contribute negatively...
torch.generated.torch.trapz#torch.trapz
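As a quick check of the trapezoid rule, a sketch integrating y = x² on [0, 1], whose exact value is 1/3 (the sample count of 101 points is an arbitrary choice):

```python
import torch

# Numerically integrate y = x**2 on [0, 1]; the exact answer is 1/3.
x = torch.linspace(0, 1, 101)
y = x ** 2
area = torch.trapz(y, x)   # trapezoid-rule estimate, close to 0.3333
```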
torch.triangular_solve(input, A, upper=True, transpose=False, unitriangular=False) -> (Tensor, Tensor) Solves a system of equations with a triangular coefficient matrix A and multiple right-hand sides b. In particular, solves AX = b and assumes A is upper-triangular with the default keyword arguments. torc...
torch.generated.torch.triangular_solve#torch.triangular_solve
torch.tril(input, diagonal=0, *, out=None) → Tensor Returns the lower triangular part of the matrix (2-D tensor) or batch of matrices input, the other elements of the result tensor out are set to 0. The lower triangular part of the matrix is defined as the elements on and below the diagonal. The argument diagonal con...
torch.generated.torch.tril#torch.tril
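A small sketch of how the diagonal argument shifts which elements are kept (a 3×3 matrix of ones makes the kept counts easy to see):

```python
import torch

a = torch.ones(3, 3)
lower = torch.tril(a)                 # keep elements on and below the diagonal
strict = torch.tril(a, diagonal=-1)   # keep only elements strictly below it
```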
torch.tril_indices(row, col, offset=0, *, dtype=torch.long, device='cpu', layout=torch.strided) → Tensor Returns the indices of the lower triangular part of a row-by-col matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates. Indices are ...
torch.generated.torch.tril_indices#torch.tril_indices
torch.triu(input, diagonal=0, *, out=None) → Tensor Returns the upper triangular part of a matrix (2-D tensor) or batch of matrices input, the other elements of the result tensor out are set to 0. The upper triangular part of the matrix is defined as the elements on and above the diagonal. The argument diagonal contr...
torch.generated.torch.triu#torch.triu
torch.triu_indices(row, col, offset=0, *, dtype=torch.long, device='cpu', layout=torch.strided) → Tensor Returns the indices of the upper triangular part of a row-by-col matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates. Indices are o...
torch.generated.torch.triu_indices#torch.triu_indices
torch.true_divide(dividend, divisor, *, out) → Tensor Alias for torch.div() with rounding_mode=None.
torch.generated.torch.true_divide#torch.true_divide
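The practical point of rounding_mode=None is that integer inputs still yield a floating-point quotient; a minimal sketch with illustrative values:

```python
import torch

a = torch.tensor([4, 3])
b = torch.tensor([2, 2])
q = torch.true_divide(a, b)   # integer inputs promote to a floating result
```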
torch.trunc(input, *, out=None) → Tensor Returns a new tensor with the truncated integer values of the elements of input. Parameters input (Tensor) – the input tensor. Keyword Arguments out (Tensor, optional) – the output tensor. Example: >>> a = torch.randn(4) >>> a tensor([ 3.4742, 0.5466, -0.8008, -0.9079])...
torch.generated.torch.trunc#torch.trunc
torch.unbind(input, dim=0) → seq Removes a tensor dimension. Returns a tuple of all slices along a given dimension, already without it. Parameters input (Tensor) – the tensor to unbind dim (int) – dimension to remove Example: >>> torch.unbind(torch.tensor([[1, 2, 3], >>> [4, 5, 6], ...
torch.generated.torch.unbind#torch.unbind
torch.unique(*args, **kwargs) Returns the unique elements of the input tensor. Note This function is different from torch.unique_consecutive() in the sense that this function also eliminates non-consecutive duplicate values. Note Currently, in the CUDA implementation and the CPU implementation when dim is specified...
torch.generated.torch.unique#torch.unique
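A short sketch showing the common return_inverse pattern: the inverse indices map each original element back into the sorted unique values, so the input can be reconstructed.

```python
import torch

t = torch.tensor([1, 3, 2, 3, 1])
vals, inverse = torch.unique(t, return_inverse=True)  # vals is sorted
restored = vals[inverse]                              # reconstructs t
```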
torch.unique_consecutive(*args, **kwargs) Eliminates all but the first element from every consecutive group of equivalent elements. Note This function is different from torch.unique() in the sense that this function only eliminates consecutive duplicate values. This semantics is similar to std::unique in C++. Para...
torch.generated.torch.unique_consecutive#torch.unique_consecutive
torch.unsqueeze(input, dim) → Tensor Returns a new tensor with a dimension of size one inserted at the specified position. The returned tensor shares the same underlying data with this tensor. A dim value within the range [-input.dim() - 1, input.dim() + 1) can be used. Negative dim will correspond to unsqueeze() app...
torch.generated.torch.unsqueeze#torch.unsqueeze
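The positive/negative dim convention in the entry above can be sketched in two lines (shapes annotated in comments):

```python
import torch

v = torch.tensor([1, 2, 3])     # shape (3,)
row = torch.unsqueeze(v, 0)     # insert at front: shape (1, 3)
col = torch.unsqueeze(v, -1)    # negative dim counts from the end: (3, 1)
```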
torch.use_deterministic_algorithms(d) [source] Sets whether PyTorch operations must use "deterministic" algorithms. That is, algorithms which, given the same input, and when run on the same software and hardware, always produce the same output. When True, operations will use deterministic algorithms when available, a...
torch.generated.torch.use_deterministic_algorithms#torch.use_deterministic_algorithms
Benchmark Utils - torch.utils.benchmark class torch.utils.benchmark.Timer(stmt='pass', setup='pass', timer=<function timer>, globals=None, label=None, sub_label=None, description=None, env=None, num_threads=1, language=<Language.PYTHON: 0>) [source] Helper class for measuring execution time of PyTorch statements. F...
torch.benchmark_utils
class torch.utils.benchmark.CallgrindStats(task_spec, number_per_run, built_with_debug_symbols, baseline_inclusive_stats, baseline_exclusive_stats, stmt_inclusive_stats, stmt_exclusive_stats) [source] Top level container for Callgrind results collected by Timer. Manipulation is generally done using the FunctionCounts...
torch.benchmark_utils#torch.utils.benchmark.CallgrindStats
as_standardized() [source] Strip library names and some prefixes from function strings. When comparing two different sets of instruction counts, one stumbling block can be path prefixes. Callgrind includes the full filepath when reporting a function (as it should). However, this can cause issues when diffing profiles....
torch.benchmark_utils#torch.utils.benchmark.CallgrindStats.as_standardized
counts(*, denoise=False) [source] Returns the total number of instructions executed. See FunctionCounts.denoise() for an explanation of the denoise arg.
torch.benchmark_utils#torch.utils.benchmark.CallgrindStats.counts
delta(other, inclusive=False, subtract_baselines=True) [source] Diff two sets of counts. One common reason to collect instruction counts is to determine the effect that a particular change will have on the number of instructions needed to perform some unit of work. If a change increases that number, the next logi...
torch.benchmark_utils#torch.utils.benchmark.CallgrindStats.delta
stats(inclusive=False) [source] Returns detailed function counts. Conceptually, the FunctionCounts returned can be thought of as a tuple of (count, path_and_function_name) tuples. inclusive matches the semantics of callgrind. If True, the counts include instructions executed by children. inclusive=True is useful for ...
torch.benchmark_utils#torch.utils.benchmark.CallgrindStats.stats
class torch.utils.benchmark.FunctionCounts(_data, inclusive, _linewidth=None) [source] Container for manipulating Callgrind results. It supports: Addition and subtraction to combine or diff results. Tuple-like indexing. A denoise function which strips CPython calls which are known to be non-deterministic and quite...
torch.benchmark_utils#torch.utils.benchmark.FunctionCounts
denoise() [source] Remove known noisy instructions. Several instructions in the CPython interpreter are rather noisy. These instructions involve unicode to dictionary lookups which Python uses to map variable names. FunctionCounts is generally a content agnostic container, however this is sufficiently important for o...
torch.benchmark_utils#torch.utils.benchmark.FunctionCounts.denoise
filter(filter_fn) [source] Keep only the elements where filter_fn applied to function name returns True.
torch.benchmark_utils#torch.utils.benchmark.FunctionCounts.filter
transform(map_fn) [source] Apply map_fn to all of the function names. This can be used to regularize function names (e.g. stripping irrelevant parts of the file path), coalesce entries by mapping multiple functions to the same name (in which case the counts are added together), etc.
torch.benchmark_utils#torch.utils.benchmark.FunctionCounts.transform
class torch.utils.benchmark.Measurement(number_per_run, raw_times, task_spec, metadata=None) [source] The result of a Timer measurement. This class stores one or more measurements of a given statement. It is serializable and provides several convenience methods (including a detailed __repr__) for downstream consumers...
torch.benchmark_utils#torch.utils.benchmark.Measurement
static merge(measurements) [source] Convenience method for merging replicates. Merge will extrapolate times to number_per_run=1 and will not transfer any metadata. (Since it might differ between replicates)
torch.benchmark_utils#torch.utils.benchmark.Measurement.merge
property significant_figures Approximate significant figure estimate. This property is intended to give a convenient way to estimate the precision of a measurement. It only uses the interquartile region to estimate statistics to try to mitigate skew from the tails, and uses a static z value of 1.645 since it is not e...
torch.benchmark_utils#torch.utils.benchmark.Measurement.significant_figures
class torch.utils.benchmark.Timer(stmt='pass', setup='pass', timer=<function timer>, globals=None, label=None, sub_label=None, description=None, env=None, num_threads=1, language=<Language.PYTHON: 0>) [source] Helper class for measuring execution time of PyTorch statements. For a full tutorial on how to use this clas...
torch.benchmark_utils#torch.utils.benchmark.Timer
blocked_autorange(callback=None, min_run_time=0.2) [source] Measure many replicates while keeping timer overhead to a minimum. At a high level, blocked_autorange executes the following pseudo-code: `setup` total_time = 0 while total_time < min_run_time start = timer() for _ in range(block_size): `stm...
torch.benchmark_utils#torch.utils.benchmark.Timer.blocked_autorange
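A minimal sketch of the Timer/blocked_autorange workflow described above (the statement, array size, and min_run_time are illustrative; globals is passed explicitly so the setup string can reference torch):

```python
import torch
from torch.utils.benchmark import Timer

t = Timer(
    stmt="x + y",
    setup="x = torch.ones(64); y = torch.ones(64)",
    globals={"torch": torch},
)
m = t.blocked_autorange(min_run_time=0.05)  # a Measurement over many replicates
```

The returned Measurement exposes summary statistics such as m.median (seconds per run).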
collect_callgrind(number=100, collect_baseline=True) [source] Collect instruction counts using Callgrind. Unlike wall times, instruction counts are deterministic (modulo non-determinism in the program itself and small amounts of jitter from the Python interpreter.) This makes them ideal for detailed performance analy...
torch.benchmark_utils#torch.utils.benchmark.Timer.collect_callgrind
timeit(number=1000000) [source] Mirrors the semantics of timeit.Timer.timeit(). Execute the main statement (stmt) number times. https://docs.python.org/3/library/timeit.html#timeit.Timer.timeit
torch.benchmark_utils#torch.utils.benchmark.Timer.timeit
torch.utils.bottleneck torch.utils.bottleneck is a tool that can be used as an initial step for debugging bottlenecks in your program. It summarizes runs of your script with the Python profiler and PyTorch's autograd profiler. Run it on the command line with python -m torch.utils.bottleneck /path/to/source/script.py [a...
torch.bottleneck
torch.utils.checkpoint Note Checkpointing is implemented by rerunning a forward-pass segment for each checkpointed segment during backward. This can cause persistent states like the RNG state to be advanced further than they would be without checkpointing. By default, checkpointing includes logic to juggle the RNG state such tha...
torch.checkpoint
torch.utils.checkpoint.checkpoint(function, *args, **kwargs) [source] Checkpoint a model or part of the model. Checkpointing works by trading compute for memory. Rather than storing all intermediate activations of the entire computation graph for computing backward, the checkpointed part does not save intermediate act...
torch.checkpoint#torch.utils.checkpoint.checkpoint
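A minimal sketch of the compute-for-memory trade described above. The two-layer segment and shapes are illustrative only; during backward() the segment's forward is re-executed to recompute the discarded activations. (Newer PyTorch releases also take a use_reentrant argument, which this era of the docs predates.)

```python
import torch
from torch.utils.checkpoint import checkpoint

# An illustrative segment; its intermediate activations are not stored.
segment = torch.nn.Sequential(torch.nn.Linear(10, 10), torch.nn.ReLU())

x = torch.randn(4, 10, requires_grad=True)
out = checkpoint(segment, x)   # forward pass; intermediates are discarded
out.sum().backward()           # the segment is re-run here to recompute them
```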
torch.utils.checkpoint.checkpoint_sequential(functions, segments, input, **kwargs) [source] A helper function for checkpointing sequential models. Sequential models execute a list of modules/functions in order (sequentially). Therefore, we can divide such a model in various segments and checkpoint each segment. All s...
torch.checkpoint#torch.utils.checkpoint.checkpoint_sequential
torch.utils.cpp_extension torch.utils.cpp_extension.CppExtension(name, sources, *args, **kwargs) [source] Creates a setuptools.Extension for C++. Convenience method that creates a setuptools.Extension with the bare minimum (but often sufficient) arguments to build a C++ extension. All arguments are forwarded to the...
torch.cpp_extension
torch.utils.cpp_extension.BuildExtension(*args, **kwargs) [source] A custom setuptools build extension. This setuptools.build_ext subclass takes care of passing the minimum required compiler flags (e.g. -std=c++14) as well as mixed C++/CUDA compilation (and support for CUDA files in general). When using BuildExtensi...
torch.cpp_extension#torch.utils.cpp_extension.BuildExtension
torch.utils.cpp_extension.check_compiler_abi_compatibility(compiler) [source] Verifies that the given compiler is ABI-compatible with PyTorch. Parameters compiler (str) – The compiler executable name to check (e.g. g++). Must be executable in a shell process. Returns False if the compiler is (likely) ABI-incompat...
torch.cpp_extension#torch.utils.cpp_extension.check_compiler_abi_compatibility
torch.utils.cpp_extension.CppExtension(name, sources, *args, **kwargs) [source] Creates a setuptools.Extension for C++. Convenience method that creates a setuptools.Extension with the bare minimum (but often sufficient) arguments to build a C++ extension. All arguments are forwarded to the setuptools.Extension constr...
torch.cpp_extension#torch.utils.cpp_extension.CppExtension
torch.utils.cpp_extension.CUDAExtension(name, sources, *args, **kwargs) [source] Creates a setuptools.Extension for CUDA/C++. Convenience method that creates a setuptools.Extension with the bare minimum (but often sufficient) arguments to build a CUDA/C++ extension. This includes the CUDA include path, library path a...
torch.cpp_extension#torch.utils.cpp_extension.CUDAExtension
torch.utils.cpp_extension.include_paths(cuda=False) [source] Get the include paths required to build a C++ or CUDA extension. Parameters cuda – If True, includes CUDA-specific include paths. Returns A list of include path strings.
torch.cpp_extension#torch.utils.cpp_extension.include_paths
torch.utils.cpp_extension.is_ninja_available() [source] Returns True if the ninja build system is available on the system, False otherwise.
torch.cpp_extension#torch.utils.cpp_extension.is_ninja_available
torch.utils.cpp_extension.load(name, sources, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False, with_cuda=None, is_python_module=True, is_standalone=False, keep_intermediates=True) [source] Loads a PyTorch C++ extension just-in-time (JIT). To...
torch.cpp_extension#torch.utils.cpp_extension.load
torch.utils.cpp_extension.load_inline(name, cpp_sources, cuda_sources=None, functions=None, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False, with_cuda=None, is_python_module=True, with_pytorch_error_handling=True, keep_intermediates=True) [sou...
torch.cpp_extension#torch.utils.cpp_extension.load_inline
torch.utils.cpp_extension.verify_ninja_availability() [source] Raises RuntimeError if ninja build system is not available on the system, does nothing otherwise.
torch.cpp_extension#torch.utils.cpp_extension.verify_ninja_availability
torch.utils.data At the heart of PyTorch data loading utility is the torch.utils.data.DataLoader class. It represents a Python iterable over a dataset, with support for map-style and iterable-style datasets, customizing data loading order, automatic batching, single- and multi-process data loading, automatic memo...
torch.data
class torch.utils.data.BatchSampler(sampler, batch_size, drop_last) [source] Wraps another sampler to yield a mini-batch of indices. Parameters sampler (Sampler or Iterable) – Base sampler. Can be any iterable object batch_size (int) – Size of mini-batch. drop_last (bool) – If True, the sampler will drop the la...
torch.data#torch.utils.data.BatchSampler
class torch.utils.data.BufferedShuffleDataset(dataset, buffer_size) [source] Dataset shuffled from the original dataset. This class is useful to shuffle an existing instance of an IterableDataset. The buffer with buffer_size is filled with the items from the dataset first. Then, each item will be yielded from the buf...
torch.data#torch.utils.data.BufferedShuffleDataset
class torch.utils.data.ChainDataset(datasets) [source] Dataset for chaining multiple IterableDataset s. This class is useful to assemble different existing dataset streams. The chaining operation is done on-the-fly, so concatenating large-scale datasets with this class will be efficient. Parameters datasets (iter...
torch.data#torch.utils.data.ChainDataset
class torch.utils.data.ConcatDataset(datasets) [source] Dataset as a concatenation of multiple datasets. This class is useful to assemble different existing datasets. Parameters datasets (sequence) – List of datasets to be concatenated
torch.data#torch.utils.data.ConcatDataset
class torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None, num_workers=0, collate_fn=None, pin_memory=False, drop_last=False, timeout=0, worker_init_fn=None, multiprocessing_context=None, generator=None, *, prefetch_factor=2, persistent_workers=False) [source] Data loade...
torch.data#torch.utils.data.DataLoader
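To make the automatic-batching behavior concrete, a minimal sketch with an illustrative ten-sample dataset: with batch_size=4 and drop_last left at its default of False, the final partial batch of 2 is still yielded.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.arange(10.0).unsqueeze(1), torch.arange(10))
loader = DataLoader(ds, batch_size=4, shuffle=False)

sizes = [xb.shape[0] for xb, yb in loader]   # batch sizes: 4, 4, then 2
```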
class torch.utils.data.Dataset [source] An abstract class representing a Dataset. All datasets that represent a map from keys to data samples should subclass it. All subclasses should overwrite __getitem__(), supporting fetching a data sample for a given key. Subclasses could also optionally overwrite __len__(), whic...
torch.data#torch.utils.data.Dataset
class torch.utils.data.distributed.DistributedSampler(dataset, num_replicas=None, rank=None, shuffle=True, seed=0, drop_last=False) [source] Sampler that restricts data loading to a subset of the dataset. It is especially useful in conjunction with torch.nn.parallel.DistributedDataParallel. In such a case, each proce...
torch.data#torch.utils.data.distributed.DistributedSampler
torch.utils.data.get_worker_info() [source] Returns the information about the current DataLoader iterator worker process. When called in a worker, this returns an object guaranteed to have the following attributes: id: the current worker id. num_workers: the total number of workers. seed: the random seed set for ...
torch.data#torch.utils.data.get_worker_info
class torch.utils.data.IterableDataset [source] An iterable Dataset. All datasets that represent an iterable of data samples should subclass it. Such form of datasets is particularly useful when data come from a stream. All subclasses should overwrite __iter__(), which would return an iterator of samples in this data...
torch.data#torch.utils.data.IterableDataset
class torch.utils.data.RandomSampler(data_source, replacement=False, num_samples=None, generator=None) [source] Samples elements randomly. If without replacement, then sample from a shuffled dataset. If with replacement, then user can specify num_samples to draw. Parameters data_source (Dataset) – dataset to samp...
torch.data#torch.utils.data.RandomSampler
torch.utils.data.random_split(dataset, lengths, generator=<torch._C.Generator object>) [source] Randomly split a dataset into non-overlapping new datasets of given lengths. Optionally fix the generator for reproducible results, e.g.: >>> random_split(range(10), [3, 7], generator=torch.Generator().manual_seed(42)) P...
torch.data#torch.utils.data.random_split
class torch.utils.data.Sampler(data_source) [source] Base class for all Samplers. Every Sampler subclass has to provide an __iter__() method, providing a way to iterate over indices of dataset elements, and a __len__() method that returns the length of the returned iterators. Note The __len__() method isn't strictly...
torch.data#torch.utils.data.Sampler
class torch.utils.data.SequentialSampler(data_source) [source] Samples elements sequentially, always in the same order. Parameters data_source (Dataset) – dataset to sample from
torch.data#torch.utils.data.SequentialSampler
class torch.utils.data.Subset(dataset, indices) [source] Subset of a dataset at specified indices. Parameters dataset (Dataset) – The whole Dataset indices (sequence) – Indices in the whole set selected for subset
torch.data#torch.utils.data.Subset
class torch.utils.data.SubsetRandomSampler(indices, generator=None) [source] Samples elements randomly from a given list of indices, without replacement. Parameters indices (sequence) – a sequence of indices generator (Generator) – Generator used in sampling.
torch.data#torch.utils.data.SubsetRandomSampler
class torch.utils.data.TensorDataset(*tensors) [source] Dataset wrapping tensors. Each sample will be retrieved by indexing tensors along the first dimension. Parameters *tensors (Tensor) – tensors that have the same size of the first dimension.
torch.data#torch.utils.data.TensorDataset
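A minimal sketch of the first-dimension indexing described above (the feature and label shapes are illustrative): each item is a tuple of first-dimension slices, one per wrapped tensor.

```python
import torch
from torch.utils.data import TensorDataset

features = torch.randn(5, 3)
labels = torch.arange(5)
ds = TensorDataset(features, labels)   # both tensors share a first dim of 5

x0, y0 = ds[0]                         # a tuple of first-dimension slices
```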
class torch.utils.data.WeightedRandomSampler(weights, num_samples, replacement=True, generator=None) [source] Samples elements from [0,..,len(weights)-1] with given probabilities (weights). Parameters weights (sequence) – a sequence of weights, not necessarily summing to one num_samples (int) – number of sample...
torch.data#torch.utils.data.WeightedRandomSampler
torch.utils.dlpack torch.utils.dlpack.from_dlpack(dlpack) → Tensor Decodes a DLPack to a tensor. Parameters dlpack – a PyCapsule object with the dltensor The tensor will share the memory with the object represented in the dlpack. Note that each dlpack can only be consumed once. torch.utils.dlpack.to_dlpack(...
torch.dlpack
torch.utils.dlpack.from_dlpack(dlpack) → Tensor Decodes a DLPack to a tensor. Parameters dlpack – a PyCapsule object with the dltensor The tensor will share the memory with the object represented in the dlpack. Note that each dlpack can only be consumed once.
torch.dlpack#torch.utils.dlpack.from_dlpack
torch.utils.dlpack.to_dlpack(tensor) → PyCapsule Returns a DLPack representing the tensor. Parameters tensor – a tensor to be exported The dlpack shares the tensor's memory. Note that each dlpack can only be consumed once.
torch.dlpack#torch.utils.dlpack.to_dlpack
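A small sketch of the round trip and the shared-memory guarantee (values illustrative): the capsule produced by to_dlpack is consumed exactly once by from_dlpack, and writes through either tensor are visible in the other.

```python
import torch
from torch.utils.dlpack import to_dlpack, from_dlpack

t = torch.arange(4.0)
capsule = to_dlpack(t)     # PyCapsule sharing t's storage
u = from_dlpack(capsule)   # consumes the capsule; valid only once
u[0] = 42.0                # the write is visible through t
```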
torch.utils.mobile_optimizer Warning This API is in beta and may change in the near future. Torch mobile supports the torch.utils.mobile_optimizer.optimize_for_mobile utility to run a list of optimization passes with modules in eval mode. The method takes the following parameters: a torch.jit.ScriptModule object, a blocklisting ...
torch.mobile_optimizer
torch.utils.mobile_optimizer.optimize_for_mobile(script_module, optimization_blocklist=None, preserved_methods=None, backend='CPU') [source] Parameters script_module – An instance of torch script module with type of ScriptModule. optimization_blocklist – A set with type of MobileOptimizerType. When set is not pa...
torch.mobile_optimizer#torch.utils.mobile_optimizer.optimize_for_mobile
torch.utils.model_zoo Moved to torch.hub. torch.utils.model_zoo.load_url(url, model_dir=None, map_location=None, progress=True, check_hash=False, file_name=None) Loads the Torch serialized object at the given URL. If downloaded file is a zip file, it will be automatically decompressed. If the object is already pres...
torch.model_zoo
torch.utils.model_zoo.load_url(url, model_dir=None, map_location=None, progress=True, check_hash=False, file_name=None) Loads the Torch serialized object at the given URL. If downloaded file is a zip file, it will be automatically decompressed. If the object is already present in model_dir, it's deserialized and retu...
torch.model_zoo#torch.utils.model_zoo.load_url
torch.utils.tensorboard Before going further, more details on TensorBoard can be found at https://www.tensorflow.org/tensorboard/ Once you've installed TensorBoard, these utilities let you log PyTorch models and metrics into a directory for visualization within the TensorBoard UI. Scalars, images, histograms, graphs, a...
torch.tensorboard
class torch.utils.tensorboard.writer.SummaryWriter(log_dir=None, comment='', purge_step=None, max_queue=10, flush_secs=120, filename_suffix='') [source] Writes entries directly to event files in the log_dir to be consumed by TensorBoard. The SummaryWriter class provides a high-level API to create an event file in a g...
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter
add_audio(tag, snd_tensor, global_step=None, sample_rate=44100, walltime=None) [source] Add audio data to summary. Parameters tag (string) – Data identifier snd_tensor (torch.Tensor) – Sound data global_step (int) – Global step value to record sample_rate (int) – sample rate in Hz walltime (float) – Optional ...
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_audio
add_custom_scalars(layout) [source] Create special chart by collecting charts tags in 'scalars'. Note that this function can only be called once for each SummaryWriter() object. Because it only provides metadata to tensorboard, the function can be called before or after the training loop. Parameters layout (dict) –...
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_custom_scalars
add_embedding(mat, metadata=None, label_img=None, global_step=None, tag='default', metadata_header=None) [source] Add embedding projector data to summary. Parameters mat (torch.Tensor or numpy.array) – A matrix which each row is the feature vector of the data point metadata (list) – A list of labels, each elemen...
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_embedding
add_figure(tag, figure, global_step=None, close=True, walltime=None) [source] Render matplotlib figure into an image and add it to summary. Note that this requires the matplotlib package. Parameters tag (string) – Data identifier figure (matplotlib.pyplot.figure) – Figure or a list of figures global_step (int) ...
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_figure
add_graph(model, input_to_model=None, verbose=False) [source] Add graph data to summary. Parameters model (torch.nn.Module) – Model to draw. input_to_model (torch.Tensor or list of torch.Tensor) – A variable or a tuple of variables to be fed. verbose (bool) – Whether to print graph structure in console.
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_graph
add_histogram(tag, values, global_step=None, bins='tensorflow', walltime=None, max_bins=None) [source] Add histogram to summary. Parameters tag (string) – Data identifier values (torch.Tensor, numpy.array, or string/blobname) – Values to build histogram global_step (int) – Global step value to record bins (str...
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_histogram
add_hparams(hparam_dict, metric_dict, hparam_domain_discrete=None, run_name=None) [source] Add a set of hyperparameters to be compared in TensorBoard. Parameters hparam_dict (dict) – Each key-value pair in the dictionary is the name of the hyperparameter and its corresponding value. The type of the value can be...
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_hparams
add_image(tag, img_tensor, global_step=None, walltime=None, dataformats='CHW') [source] Add image data to summary. Note that this requires the pillow package. Parameters tag (string) – Data identifier img_tensor (torch.Tensor, numpy.array, or string/blobname) – Image data global_step (int) – Global step value t...
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_image
add_images(tag, img_tensor, global_step=None, walltime=None, dataformats='NCHW') [source] Add batched image data to summary. Note that this requires the pillow package. Parameters tag (string) – Data identifier img_tensor (torch.Tensor, numpy.array, or string/blobname) – Image data global_step (int) – Global st...
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_images
add_mesh(tag, vertices, colors=None, faces=None, config_dict=None, global_step=None, walltime=None) [source] Add meshes or 3D point clouds to TensorBoard. The visualization is based on Three.js, so it allows users to interact with the rendered object. Besides the basic definitions such as vertices, faces, users can f...
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_mesh
add_pr_curve(tag, labels, predictions, global_step=None, num_thresholds=127, weights=None, walltime=None) [source] Adds precision recall curve. Plotting a precision-recall curve lets you understand your model's performance under different threshold settings. With this function, you provide the ground truth labeling (...
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_pr_curve
add_scalar(tag, scalar_value, global_step=None, walltime=None) [source] Add scalar data to summary. Parameters tag (string) – Data identifier scalar_value (float or string/blobname) – Value to save global_step (int) – Global step value to record walltime (float) – Optional override default walltime (time.time(...
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_scalar
add_scalars(main_tag, tag_scalar_dict, global_step=None, walltime=None) [source] Adds many scalar data to summary. Parameters main_tag (string) – The parent name for the tags tag_scalar_dict (dict) – Key-value pair storing the tag and corresponding values global_step (int) – Global step value to record walltim...
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_scalars
add_text(tag, text_string, global_step=None, walltime=None) [source] Add text data to summary. Parameters tag (string) – Data identifier text_string (string) – String to save global_step (int) – Global step value to record walltime (float) – Optional override default walltime (time.time()) seconds after epoch ...
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_text
add_video(tag, vid_tensor, global_step=None, fps=4, walltime=None) [source] Add video data to summary. Note that this requires the moviepy package. Parameters tag (string) – Data identifier vid_tensor (torch.Tensor) – Video data global_step (int) – Global step value to record fps (float or int) – Frames per se...
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.add_video
close() [source]
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.close
flush() [source] Flushes the event file to disk. Call this method to make sure that all pending events have been written to disk.
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.flush
__init__(log_dir=None, comment='', purge_step=None, max_queue=10, flush_secs=120, filename_suffix='') [source] Creates a SummaryWriter that will write out events and summaries to the event file. Parameters log_dir (string) – Save directory location. Default is runs/CURRENT_DATETIME_HOSTNAME, which changes after e...
torch.tensorboard#torch.utils.tensorboard.writer.SummaryWriter.__init__
torch.vander(x, N=None, increasing=False) → Tensor Generates a Vandermonde matrix. The columns of the output matrix are elementwise powers of the input vector: x^(N-1), x^(N-2), ..., x^0. If increasing is True, the order of the columns is reversed: x^0, x^1, ..., x^(N-1). Suc...
torch.generated.torch.vander#torch.vander
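The column-ordering convention above can be checked with a tiny sketch (input values illustrative):

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0])
m = torch.vander(x)                          # decreasing powers: x^2, x^1, x^0
mi = torch.vander(x, N=2, increasing=True)   # columns x^0, x^1
```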
torch.var(input, unbiased=True) → Tensor Returns the variance of all elements in the input tensor. If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel's correction will be used. Parameters input (Tensor) – the input tensor. unbiased (bool) – whether to use the u...
torch.generated.torch.var#torch.var
torch.var_mean(input, unbiased=True) -> (Tensor, Tensor) Returns the variance and mean of all elements in the input tensor. If unbiased is False, then the variance will be calculated via the biased estimator. Otherwise, Bessel's correction will be used. Parameters input (Tensor) – the input tensor. unbiased (boo...
torch.generated.torch.var_mean#torch.var_mean
torch.vdot(input, other, *, out=None) → Tensor Computes the dot product of two 1D tensors. The vdot(a, b) function handles complex numbers differently than dot(a, b). If the first argument is complex, the complex conjugate of the first argument is used for the calculation of the dot product. Note Unlike NumPy's vdot...
torch.generated.torch.vdot#torch.vdot
torch.view_as_complex(input) → Tensor Returns a view of input as a complex tensor. For an input complex tensor of size m1, m2, ..., mi, 2, this function returns a new complex tensor of size m1, m2, ..., mi where the last dimension of the input tensor is expected to represent the real and i...
torch.generated.torch.view_as_complex#torch.view_as_complex
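A minimal sketch of the round trip between the two views (values illustrative): the trailing dimension of size 2 holds (real, imag) pairs, and view_as_real recovers the original layout.

```python
import torch

t = torch.tensor([[1.0, 2.0],
                  [3.0, 4.0]])      # last dim of size 2: (real, imag) pairs
c = torch.view_as_complex(t)        # tensor([1.+2.j, 3.+4.j])
r = torch.view_as_real(c)           # round-trips back to the original shape
```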
torch.view_as_real(input) → Tensor Returns a view of input as a real tensor. For an input complex tensor of size m1, m2, ..., mi, this function returns a new real tensor of size m1, m2, ..., mi, 2, where the last dimension of size 2 represents the real and imaginary components of complex n...
torch.generated.torch.view_as_real#torch.view_as_real