prompts (stringlengths: 87–212) | description (stringlengths: 0–6.76k)
Given the following machine learning model name: squeeze-and-excitation networks, provide a description of the model
SENet pioneered channel attention. The core of SENet is a squeeze-and-excitation (SE) block which is used to collect global information, capture channel-wise relationships and improve representation ability. SE blocks are divided into two parts, a squeeze module and an excitation module. Global spatial information is ...
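The squeeze (global average pooling) and excitation (bottleneck MLP with a sigmoid gate) steps can be sketched in numpy; the weight matrices `w1`/`w2` and the reduction ratio `r` below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation over a (C, H, W) feature map.

    Squeeze: global average pooling collects per-channel statistics.
    Excitation: a two-layer bottleneck (ReLU then sigmoid) produces
    per-channel scales that reweight the input.
    """
    z = x.mean(axis=(1, 2))              # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)          # bottleneck, ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # excitation gates in (0, 1)
    return x * s[:, None, None]          # channel-wise rescaling

rng = np.random.default_rng(0)
c, r = 8, 2                              # channels, reduction ratio
x = rng.standard_normal((c, 4, 4))
w1 = rng.standard_normal((c // r, c))
w2 = rng.standard_normal((c, c // r))
y = se_block(x, w1, w2)
```

Because the gates come from a sigmoid, every channel scale lies in $(0, 1)$: the block reweights channels by attenuation rather than amplification.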
Given the following machine learning model name: Gated Transformer-XL, provide a description of the model
**Gated Transformer-XL**, or **GTrXL**, is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based architecture for reinforcement learning. It introduces architectural modifications that improve the stability and learning speed of the original Transformer and XL variant. Changes include: - Pl...
Given the following machine learning model name: Demon ADAM, provide a description of the model
**Demon Adam** is a stochastic optimizer where the [Demon](https://paperswithcode.com/method/demon) momentum rule is applied to the [Adam](https://paperswithcode.com/method/adam) optimizer. $$ \beta\_{t} = \beta\_{init}\cdot\frac{\left(1-\frac{t}{T}\right)}{\left(1-\beta\_{init}\right) + \beta\_{init}\left(1-\frac{t...
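The truncated formula matches the Demon decay rule, which shrinks the momentum parameter from $\beta\_{init}$ at $t=0$ down to $0$ at the final iteration $T$. A minimal sketch, assuming this reading of the schedule:

```python
def demon_beta(t, T, beta_init=0.9):
    """Demon momentum decay: beta interpolates from beta_init at t=0
    down to 0 at t=T (the total number of iterations)."""
    frac = 1.0 - t / T
    return beta_init * frac / ((1.0 - beta_init) + beta_init * frac)

# The schedule decreases monotonically over a run of T = 100 steps.
betas = [demon_beta(t, 100) for t in range(101)]
```

In Demon Adam this decayed $\beta\_{t}$ simply replaces Adam's fixed first-moment coefficient $\beta\_{1}$.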
Given the following machine learning model name: Instances-Pixels Balance Index, provide a description of the model
In a given dataset for semantic image segmentation, the number of samples per class should be the same, so that no classifier is biased towards the majority class (including the background). It is very difficult, if not impossible, to achieve a perfect balance between the several classes of objects of a datas...
Given the following machine learning model name: NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video, provide a description of the model
**NeuralRecon** is a framework for real-time 3D scene reconstruction from a monocular video. Unlike previous methods that estimate single-view depth maps separately on each key-frame and fuse them later, NeuralRecon proposes to directly reconstruct local surfaces represented as sparse TSDF volumes for each video fragme...
Given the following machine learning model name: Chimera, provide a description of the model
**Chimera** is a pipeline model parallelism scheme which combines bidirectional pipelines for efficiently training large-scale models. The key idea of Chimera is to combine two pipelines in different directions (down and up pipelines). Denote $N$ as the number of micro-batches executed by each worker within a train...
Given the following machine learning model name: WGAN-GP Loss, provide a description of the model
**Wasserstein Gradient Penalty Loss**, or **WGAN-GP Loss**, is a loss used for generative adversarial networks that augments the Wasserstein loss with a gradient norm penalty for random samples $\mathbf{\hat{x}} \sim \mathbb{P}\_{\hat{\mathbf{x}}}$ to achieve Lipschitz continuity: $$ L = \mathbb{E}\_{\mathbf{\hat{x}...
Given the following machine learning model name: Wide Residual Block, provide a description of the model
A **Wide Residual Block** is a type of [residual block](https://paperswithcode.com/method/residual-block) that utilises two conv 3x3 layers (with [dropout](https://paperswithcode.com/method/dropout)). This is wider than other variants of residual blocks (for instance [bottleneck residual blocks](https://paperswithcode....
Given the following machine learning model name: ProphetNet, provide a description of the model
**ProphetNet** is a sequence-to-sequence pre-training model that introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of optimizing one-step-ahead prediction in the traditional sequence-to-sequence model, the ProphetNet is optimized by $...
Given the following machine learning model name: Noisy Linear Layer, provide a description of the model
A **Noisy Linear Layer** is a [linear layer](https://paperswithcode.com/method/linear-layer) with parametric noise added to the weights. This induced stochasticity can be used in reinforcement learning networks for the agent's policy to aid efficient exploration. The parameters of the noise are learned with gradient de...
Given the following machine learning model name: PAUSE, provide a description of the model
**PAUSE**, or **Positive and Annealed Unlabeled Sentence Embedding**, is an approach for learning sentence embeddings from a partially labeled dataset. It is based on a dual encoder schema that is widely adopted in supervised sentence embedding training. Each individual sample $\mathbf{x}$ contains a pair of hypothesis...
Given the following machine learning model name: Self-Adversarial Negative Sampling, provide a description of the model
**Self-Adversarial Negative Sampling** is a negative sampling technique used for methods like [word embeddings](https://paperswithcode.com/methods/category/word-embeddings) and [knowledge graph embeddings](https://paperswithcode.com/methods/category/graph-embeddings). The traditional negative sampling loss from word2ve...
Given the following machine learning model name: Contrastive Video Representation Learning, provide a description of the model
**Contrastive Video Representation Learning**, or **CVRL**, is a self-supervised contrastive learning framework for learning spatiotemporal visual representations from unlabeled videos. Representations are learned using a contrastive loss, where two clips from the same short video are pulled together in the embedding s...
Given the following machine learning model name: Routing Transformer, provide a description of the model
The **Routing Transformer** is a [Transformer](https://paperswithcode.com/method/transformer) that endows self-attention with a sparse routing module based on online k-means. Each attention module considers a clustering of the space: the current timestep only attends to context belonging to the same cluster. In other w...
Given the following machine learning model name: Efficient Spatial Pyramid, provide a description of the model
An **Efficient Spatial Pyramid (ESP)** is an image model block based on a factorization principle that decomposes a standard [convolution](https://paperswithcode.com/method/convolution) into two steps: (1) point-wise convolutions and (2) spatial pyramid of dilated convolutions. The point-wise convolutions help in reduc...
Given the following machine learning model name: MLFPN, provide a description of the model
**Multi-Level Feature Pyramid Network**, or **MLFPN**, is a feature pyramid block used in object detection models, notably [M2Det](https://paperswithcode.com/method/m2det). We first fuse multi-level features (i.e. multiple layers) extracted by a backbone as a base feature, and then feed it into a block of alternating j...
Given the following machine learning model name: ADAHESSIAN, provide a description of the model
AdaHessian achieves new state-of-the-art results by a large margin compared to other adaptive optimization methods, including variants of [ADAM](https://paperswithcode.com/method/adam). In particular, we perform extensive tests on CV, NLP, and recommendation system tasks and find that AdaHessian: (i) achieves 1.80%/...
Given the following machine learning model name: Heterogeneous Molecular Graph Neural Network, provide a description of the model
As they carry great potential for modeling complex interactions, graph neural network (GNN)-based methods have been widely used to predict quantum mechanical properties of molecules. Most of the existing methods treat molecules as molecular graphs in which atoms are modeled as nodes. They characterize each atom's chemi...
Given the following machine learning model name: DVD-GAN, provide a description of the model
**DVD-GAN** is a generative adversarial network for video generation built upon the [BigGAN](https://paperswithcode.com/method/biggan) architecture. DVD-GAN uses two discriminators: a Spatial Discriminator $\mathcal{D}\_{S}$ and a Temporal Discriminator $\mathcal{D}\_{T}$. $\mathcal{D}\_{S}$ critiques single frame ...
Given the following machine learning model name: InstaBoost, provide a description of the model
**InstaBoost** is a data augmentation technique for instance segmentation that utilises existing instance mask annotations. Intuitively, in a small neighborhood of $(x_0, y_0, 1, 0)$, the probability map $P(x, y, s, r)$ should be high-valued, since images are usually continuous and redundant at the pixel level. Based on ...
Given the following machine learning model name: Source Hypothesis Transfer, provide a description of the model
**Source Hypothesis Transfer**, or **SHOT**, is a representation learning framework for unsupervised domain adaptation. SHOT freezes the classifier module (hypothesis) of the source model and learns the target-specific feature extraction module by exploiting both information maximization and self-supervised pseudo-labe...
Given the following machine learning model name: Memory Network, provide a description of the model
A **Memory Network** provides a memory component that can be read from and written to with the inference capabilities of a neural network model. The motivation is that many neural networks lack a long-term memory component, and their existing memory component encoded by states and weights is too small and not compartme...
Given the following machine learning model name: PULSE, provide a description of the model
**PULSE** is a self-supervised photo upsampling algorithm. Instead of starting with the LR image and slowly adding detail, PULSE traverses the high-resolution natural image manifold, searching for images that downscale to the original LR image. This is formalized through the downscaling loss, which guides exploration t...
Given the following machine learning model name: Conditional Batch Normalization, provide a description of the model
**Conditional Batch Normalization (CBN)** is a class-conditional variant of [batch normalization](https://paperswithcode.com/method/batch-normalization). The key idea is to predict the $\gamma$ and $\beta$ of the batch normalization from an embedding - e.g. a language embedding in VQA. CBN enables the linguistic embedd...
Given the following machine learning model name: Energy Based Process, provide a description of the model
**Energy Based Processes** extend energy based models to exchangeable data while allowing neural network parameterizations of the energy function. They extend the previously separate stochastic process and latent variable model perspectives in a common framework. The result is a generalization of [Gaussian processes](h...
Given the following machine learning model name: Principal Components Analysis, provide a description of the model
**Principal Components Analysis (PCA)** is an unsupervised method primarily used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance matrix of the data and performing eigenvalue decompositio...
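The SVD route can be sketched in a few lines of numpy (centering the design matrix, then reading the principal directions off the right singular vectors):

```python
import numpy as np

def pca(X, k):
    """PCA via SVD of the centered design matrix.

    Returns the top-k principal directions and the projected data.
    """
    Xc = X - X.mean(axis=0)              # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                  # rows: principal directions
    return components, Xc @ components.T

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 5))
comps, proj = pca(X, 2)
```

The rows of `Vt` are orthonormal, so the recovered directions form an orthonormal basis of the top-$k$ principal subspace.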
Given the following machine learning model name: Fast-OCR, provide a description of the model
**Fast-OCR** is a new lightweight detection network that incorporates features from existing models focused on the speed/accuracy trade-off, such as [YOLOv2](https://paperswithcode.com/method/yolov2), [CR-NET](https://paperswithcode.com/method/cr-net), and Fast-[YOLOv4](https://paperswithcode.com/method/yolov4).
Given the following machine learning model name: Latent Diffusion Model, provide a description of the model
**Latent Diffusion Models** are diffusion models applied to latent spaces, which are normally built with (variational) autoencoders.
Given the following machine learning model name: Protagonist Antagonist Induced Regret Environment Design, provide a description of the model
**Protagonist Antagonist Induced Regret Environment Design**, or **PAIRED**, is an adversarial method for approximate minimax regret to generate environments for reinforcement learning. It introduces an antagonist which is allied with the environment generating adversary. The primary agent we are trying to train is the...
Given the following machine learning model name: DV3 Convolution Block, provide a description of the model
**DV3 Convolution Block** is a convolutional block used for the [Deep Voice 3](https://paperswithcode.com/method/deep-voice-3) text-to-speech architecture. It consists of a 1-D [convolution](https://paperswithcode.com/method/convolution) with a gated linear unit and a [residual connection](https://paperswithcode.com/me...
Given the following machine learning model name: k-Sparse Autoencoder, provide a description of the model
**k-Sparse Autoencoders** are autoencoders with a linear activation function, where in the hidden layers only the $k$ highest activities are kept. This achieves exact sparsity in the hidden representation. Backpropagation only goes through the top $k$ activated units. This can be achieved with a [ReLU](https://paperswith...
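The forward pass of the encoder can be sketched in numpy (the weight matrix `W` and layer sizes below are illustrative, not from the paper):

```python
import numpy as np

def k_sparse_hidden(x, W, k):
    """Linear encoder followed by a top-k support: only the k largest
    activations are kept; the rest are zeroed (exact sparsity)."""
    h = W @ x                    # linear activation
    idx = np.argsort(h)[:-k]    # indices of everything but the top k
    h[idx] = 0.0
    return h

rng = np.random.default_rng(2)
W = rng.standard_normal((16, 8))
x = rng.standard_normal(8)
h = k_sparse_hidden(x, W, 3)
```

During backpropagation, gradients flow only through the surviving `k` units, since the zeroed units contribute nothing to the output.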
Given the following machine learning model name: Self-Attention Network, provide a description of the model
**Self-Attention Network** (**SANet**) proposes two variations of self-attention used for image recognition: 1) pairwise self-attention which generalizes standard [dot-product attention](https://paperswithcode.com/method/dot-product-attention) and is fundamentally a set operator, and 2) patchwise self-attention which i...
Given the following machine learning model name: PP-YOLOv2, provide a description of the model
**PP-YOLOv2** is an object detector that extends upon [PP-YOLO](https://www.paperswithcode.com/method/pp-yolo) with several refinements: - A [Path Aggregation Network](https://paperswithcode.com/method/pafpn) is included for the FPN to compose bottom-up paths. - [Mish Activation functions](https://paperswithcode.co...
Given the following machine learning model name: Local SGD, provide a description of the model
**Local SGD** is a distributed training technique that runs [SGD](https://paperswithcode.com/method/sgd) independently in parallel on different workers and averages the sequences only once in a while.
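A minimal sketch of one communication round, assuming a shared gradient oracle `grads_fn` (illustrative; real implementations shard the data across workers and each worker computes gradients on its own shard):

```python
import numpy as np

def local_sgd_round(workers, grads_fn, lr, local_steps):
    """One communication round of Local SGD: each worker takes
    `local_steps` independent SGD steps, then all models are averaged."""
    for w in workers:
        for _ in range(local_steps):
            w -= lr * grads_fn(w)        # local, communication-free step
    avg = np.mean(workers, axis=0)       # single synchronization
    return [avg.copy() for _ in workers]

# Toy objective 0.5 * ||w||^2, whose gradient is w itself.
workers = [np.array([4.0]), np.array([-2.0])]
workers = local_sgd_round(workers, lambda w: w, lr=0.1, local_steps=5)
```

Compared with synchronous SGD, communication happens once per `local_steps` updates instead of once per update, which is the source of the efficiency gain.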
Given the following machine learning model name: Adversarial Color Enhancement, provide a description of the model
**Adversarial Color Enhancement** is an approach to generating unrestricted adversarial images by optimizing a color filter via gradient descent.
Given the following machine learning model name: Deep Extreme Cut, provide a description of the model
**DEXTR**, or **Deep Extreme Cut**, obtains an object segmentation from its four extreme points: the left-most, right-most, top, and bottom pixels. The annotated extreme points are given as a guiding signal to the input of the network. To this end, we create a [heatmap](https://paperswithcode.com/method/heatmap) with a...
Given the following machine learning model name: Network On Network, provide a description of the model
**Network On Network (NON)** is a practical tabular data classification model based on deep neural networks that provides accurate predictions. Various deep methods have been proposed and promising progress has been made. However, most of them use operations such as neural networks and factorization machines to fuse the embeddings o...
Given the following machine learning model name: Center Pooling, provide a description of the model
**Center Pooling** is a pooling technique for object detection that aims to capture richer and more recognizable visual patterns. The geometric centers of objects do not necessarily convey very recognizable visual patterns (e.g., the human head contains strong visual patterns, but the center keypoint is often in the mi...
Given the following machine learning model name: TernaryBERT, provide a description of the model
**TernaryBERT** is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based model which ternarizes the weights of a pretrained [BERT](https://paperswithcode.com/method/bert) model to $\{-1,0,+1\}$, with different granularities for word embedding and weights in the Transformer layer. Instead of di...
Given the following machine learning model name: AdaSqrt, provide a description of the model
**AdaSqrt** is a stochastic optimization technique that is motivated by the observation that methods like [Adagrad](https://paperswithcode.com/method/adagrad) and [Adam](https://paperswithcode.com/method/adam) can be viewed as relaxations of [Natural Gradient Descent](https://paperswithcode.com/method/natural-gradient-...
Given the following machine learning model name: Hierarchical Style Disentanglement, provide a description of the model
**Hierarchical Style Disentanglement**, or **HiSD**, aims to disentangle different styles in image-to-image translation models. It organizes the labels into a hierarchical structure, where independent tags, exclusive attributes, and disentangled styles are allocated from top to bottom. To make the styles identified to...
Given the following machine learning model name: Mixup, provide a description of the model
**Mixup** is a data augmentation technique that generates a weighted combination of random image pairs from the training data. Given two images and their ground truth labels: $\left(x\_{i}, y\_{i}\right), \left(x\_{j}, y\_{j}\right)$, a synthetic training example $\left(\hat{x}, \hat{y}\right)$ is generated as: $$ \...
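The interpolation above can be sketched in numpy, assuming one-hot labels and the usual $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$ sampling:

```python
import numpy as np

def mixup(x_i, y_i, x_j, y_j, alpha=0.2, rng=None):
    """Mixup: a convex combination of two examples and their one-hot
    labels, with the mixing weight drawn from Beta(alpha, alpha)."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x_i + (1 - lam) * x_j, lam * y_i + (1 - lam) * y_j

rng = np.random.default_rng(3)
x_i, x_j = np.ones((4, 4)), np.zeros((4, 4))
y_i, y_j = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_hat, y_hat = mixup(x_i, y_i, x_j, y_j, rng=rng)
```

The mixed label stays a valid probability distribution (its entries sum to 1), so standard cross-entropy training applies unchanged.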
Given the following machine learning model name: Semantic Cross Attention, provide a description of the model
**Semantic Cross Attention (SCA)** is based on cross attention, which we restrict with respect to a semantic mask. The goal of SCA is two-fold, depending on which tensor serves as the query and which as the key: either it gives the feature map information from a semantically restricted set of latents or, respectively, it allows...
Given the following machine learning model name: Grid Sensitive, provide a description of the model
**Grid Sensitive** is a trick for object detection introduced by [YOLOv4](https://paperswithcode.com/method/yolov4). When we decode the coordinate of the bounding box center $x$ and $y$, in original [YOLOv3](https://paperswithcode.com/method/yolov3), we can get them by $$ \begin{aligned} &x=s \cdot\left(g\_{x}+\si...
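Assuming the standard formulation (a scale factor $\alpha$ slightly above 1, e.g. the $1.05$ used in PP-YOLO), the decoding of one center coordinate can be sketched as:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def decode_center(p, g, s, alpha=1.05):
    """Grid-Sensitive decoding of a box-center coordinate.

    Plain YOLOv3 uses s * (g + sigmoid(p)), which can never reach the
    grid-cell borders exactly. Scaling the sigmoid by alpha > 1 and
    re-centering lets the prediction cover (and slightly overshoot)
    the full cell, so centers on grid lines become easy to predict.
    """
    return s * (g + alpha * sigmoid(p) - (alpha - 1) / 2)
```

With `p = 0` the center sits exactly mid-cell, while extreme logits push the decoded coordinate just past the cell borders, which the original sigmoid formulation could only approach asymptotically.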
Given the following machine learning model name: YOLOv1, provide a description of the model
**YOLOv1** is a single-stage object detection model. Object detection is framed as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection...
Given the following machine learning model name: Point-wise Spatial Attention, provide a description of the model
**Point-wise Spatial Attention (PSA)** is a [semantic segmentation](https://paperswithcode.com/task/semantic-segmentation) module. The goal is to capture contextual information, especially in the long range, by aggregating information. Through the PSA module, information aggregation is performed as a kind of information f...
Given the following machine learning model name: U-Net Generative Adversarial Network, provide a description of the model
In contrast to typical GANs, a U-Net GAN uses a segmentation network as the discriminator. This segmentation network predicts two classes: real and fake. In doing so, the discriminator gives the generator region-specific feedback. This discriminator design also enables a [CutMix](https://paperswithcode.com/method/cutm...
Given the following machine learning model name: Probabilistic Anchor Assignment, provide a description of the model
**Probabilistic anchor assignment (PAA)** adaptively separates a set of anchors into positive and negative samples for a GT box according to the learning status of the model associated with it. To do so we first define a score of a detected bounding box that reflects both the classification and localization qualities. ...
Given the following machine learning model name: Region-based Fully Convolutional Network, provide a description of the model
**Region-based Fully Convolutional Networks**, or **R-FCNs**, are a type of region-based object detector. In contrast to previous region-based object detectors such as Fast/[Faster R-CNN](https://paperswithcode.com/method/faster-r-cnn) that apply a costly per-region subnetwork hundreds of times, R-FCN is fully convolut...
Given the following machine learning model name: Cascade R-CNN, provide a description of the model
**Cascade R-CNN** is an object detection architecture that seeks to address problems with degrading performance with increased IoU thresholds (due to overfitting during training and inference-time mismatch between IoUs for which detector is optimal and the inputs). It is a multi-stage extension of the [R-CNN](https://p...
Given the following machine learning model name: Packed Levitated Markers, provide a description of the model
**Packed Levitated Markers**, or **PL-Marker**, is a span representation approach for [named entity recognition](https://paperswithcode.com/task/named-entity-recognition-ner) that considers the dependencies between spans (pairs) by strategically packing the markers in the encoder. A pair of Levitated Markers, emphasizi...
Given the following machine learning model name: Beta-VAE, provide a description of the model
**Beta-VAE** is a type of variational autoencoder that seeks to discover disentangled latent factors. It modifies [VAEs](https://paperswithcode.com/method/vae) with an adjustable hyperparameter $\beta$ that balances latent channel capacity and independence constraints with reconstruction accuracy. The idea is to maximi...
Given the following machine learning model name: Local Mixup, provide a description of the model
Given the following machine learning model name: Gated Recurrent Unit, provide a description of the model
A **Gated Recurrent Unit**, or **GRU**, is a type of recurrent neural network. It is similar to an [LSTM](https://paperswithcode.com/method/lstm), but only has two gates - a reset gate and an update gate - and notably lacks an output gate. Fewer parameters means GRUs are generally easier/faster to train than their LSTM...
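A single step of the cell can be sketched in numpy (weight shapes are illustrative, and gate conventions vary slightly between references):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step with only two gates.

    z (update) decides how much of the old state to keep;
    r (reset) decides how much of the old state feeds the candidate.
    """
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde          # interpolated new state

rng = np.random.default_rng(4)
d = 3
mats = [rng.standard_normal((d, d)) for _ in range(6)]
h1 = gru_cell(rng.standard_normal(d), np.zeros(d), *mats)
```

The state update is a convex interpolation between the previous state and the candidate, which is what lets the GRU carry information over many steps without a separate output gate.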
Given the following machine learning model name: GA-PID/NN-PID, provide a description of the model
The main control tasks in autonomous vehicles are steering (lateral) and speed (longitudinal) control. PID controllers are widely used in the industry because of their simplicity and good performance, but they are difficult to tune and need additional adaptation to control nonlinear systems with varying parameters. In ...
Given the following machine learning model name: VisTR, provide a description of the model
**VisTR** is a [Transformer](https://paperswithcode.com/method/transformer) based video instance segmentation model. It views video instance segmentation as a direct end-to-end parallel sequence decoding/prediction problem. Given a video clip consisting of multiple image frames as input, VisTR outputs the sequence of m...
Given the following machine learning model name: Compact Global Descriptor, provide a description of the model
A **Compact Global Descriptor** is an image model block for modelling interactions between positions across different dimensions (e.g., channels, frames). This descriptor enables subsequent convolutions to access the informative global features. It is a form of attention.
Given the following machine learning model name: Multi-scale Progressive Fusion Network, provide a description of the model
**Multi-scale Progressive Fusion Network (MSPFN)** is a neural network for single image deraining. It aims to exploit the correlated information of rain streaks across scales. Specifically, we first generate the Gaussian pyramid rain images using Gaussian kernels to down-sa...
Given the following machine learning model name: VoiceFilter-Lite, provide a description of the model
**VoiceFilter-Lite** is a single-channel source separation model that runs on the device to preserve only the speech signals from a target user, as part of a streaming speech recognition system. In this architecture, the voice filtering model operates as a frame-by-frame frontend signal processor to enhance the feature...
Given the following machine learning model name: Modularity preserving NMF, provide a description of the model
Given the following machine learning model name: Inception Module, provide a description of the model
An **Inception Module** is an image model block that aims to approximate an optimal local sparse structure in a CNN. Put simply, it allows for us to use multiple types of filter size, instead of being restricted to a single filter size, in a single image block, which we then concatenate and pass onto the next layer.
Given the following machine learning model name: PLATO-2, provide a description of the model
Given the following machine learning model name: Neural Tangent Kernel, provide a description of the model
Given the following machine learning model name: Manifold Mixup, provide a description of the model
**Manifold Mixup** is a regularization method that encourages neural networks to predict less confidently on interpolations of hidden representations. It leverages semantic interpolations as an additional training signal, obtaining neural networks with smoother decision boundaries at multiple levels of representation. ...
Given the following machine learning model name: node2vec, provide a description of the model
**node2vec** is a framework for learning graph embeddings for nodes in graphs. Node2vec maximizes a likelihood objective over mappings which preserve neighbourhood distances in higher dimensional spaces. From an algorithm design perspective, node2vec exploits the freedom to define neighbourhoods for nodes and provide a...
Given the following machine learning model name: OSA (identity mapping + eSE), provide a description of the model
**One-Shot Aggregation with an Identity Mapping and eSE** is an image model block that extends [one-shot aggregation](https://paperswithcode.com/method/one-shot-aggregation) with a [residual connection](https://paperswithcode.com/method/residual-connection) and [effective squeeze-and-excitation block](https://paperswit...
Given the following machine learning model name: Teacher-Tutor-Student Knowledge Distillation, provide a description of the model
**Teacher-Tutor-Student Knowledge Distillation** is a method for image virtual try-on models. It treats fake images produced by the parser-based method as "tutor knowledge", where the artifacts can be corrected by real "teacher knowledge", which is extracted from the real person images in a self-supervised way. Other t...
Given the following machine learning model name: mT0, provide a description of the model
**mT0** is a multitask prompted finetuning (MTF) variant of mT5.
Given the following machine learning model name: Dynamic Graph Event Detection, provide a description of the model
Given the following machine learning model name: CPM-2, provide a description of the model
**CPM-2** is an 11-billion-parameter pre-trained language model based on a standard Transformer architecture consisting of a bidirectional encoder and a unidirectional decoder. The model is pre-trained on WuDaoCorpus which contains 2.3TB cleaned Chinese data as well as 300GB cleaned English data. The pre-training proce...
Given the following machine learning model name: Noise2Fast, provide a description of the model
**Noise2Fast** is a model for single image blind denoising. It is similar to masking based methods -- filling in the pixel gaps -- in that the network is blind to many of the input pixels during training. The method is inspired by Neighbor2Neighbor, where the neural network learns a mapping between adjacent pixels. Noi...
Given the following machine learning model name: Context Aggregated Bi-lateral Network for Semantic Segmentation, provide a description of the model
With the increasing demand for autonomous systems, pixel-wise semantic segmentation for visual scene understanding needs to be not only accurate but also efficient for potential real-time applications. In this paper, we propose Context Aggregation Network, a dual-branch convolutional neural network, with significantly lo...
Given the following machine learning model name: Adaptive Content Generating and Preserving Network, provide a description of the model
**ACGPN**, or **Adaptive Content Generating and Preserving Network**, is a [generative adversarial network](https://www.paperswithcode.com/method/category/generative-adversarial-network) for virtual try-on clothing applications. In Step I, the Semantic Generation Module (SGM) takes the target clothing image $\mathc...
Given the following machine learning model name: Self-Supervised Temporal Domain Adaptation, provide a description of the model
**Self-Supervised Temporal Domain Adaptation (SSTDA)** is a method for action segmentation with self-supervised temporal domain adaptation. It contains two self-supervised auxiliary tasks (binary and sequential domain prediction) to jointly align cross-domain feature spaces embedded with local and global temporal dynam...
Given the following machine learning model name: Simulation as Augmentation, provide a description of the model
**SimAug**, or **Simulation as Augmentation**, is a data augmentation method for trajectory prediction. It augments the representation such that it is robust to the variances in semantic scenes and camera views. First, to deal with the gap between real and synthetic semantic scene, it represents each training trajecto...
Given the following machine learning model name: Parsing Incrementally for Constrained Auto-Regressive Decoding, provide a description of the model
Given the following machine learning model name: HetPipe, provide a description of the model
**HetPipe** is a hybrid parallel method that integrates pipelined model parallelism (PMP) with data parallelism (DP). In HetPipe, a group of multiple GPUs, called a virtual worker, processes minibatches in a pipelined manner, and multiple such virtual workers employ data parallelism for higher performance.
Given the following machine learning model name: RoBERTa, provide a description of the model
**RoBERTa** is an extension of [BERT](https://paperswithcode.com/method/bert) with changes to the pretraining procedure. The modifications include: - training the model longer, with bigger batches, over more data - removing the next sentence prediction objective - training on longer sequences - dynamically chang...
Given the following machine learning model name: Pointwise Convolution, provide a description of the model
**Pointwise Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) that uses a 1x1 kernel: a kernel that iterates through every single point. This kernel has a depth of however many channels the input image has. It can be used in conjunction with [depthwise convolutions](https://papersw...
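Because the kernel is 1x1, a pointwise convolution reduces to a matrix multiply over the channel dimension at every spatial position; a minimal numpy sketch:

```python
import numpy as np

def pointwise_conv(x, W):
    """1x1 convolution over a (C_in, H, W) feature map: at each
    spatial position, mix the input channels with the same (C_out,
    C_in) weight matrix."""
    return np.einsum('oc,chw->ohw', W, x)

rng = np.random.default_rng(5)
x = rng.standard_normal((3, 8, 8))   # 3 input channels
W = rng.standard_normal((16, 3))     # project to 16 output channels
y = pointwise_conv(x, W)
```

Checking a single position confirms the equivalence: the output channel vector there is just `W` applied to the input channel vector.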
Given the following machine learning model name: GAN Least Squares Loss, provide a description of the model
**GAN Least Squares Loss** is a least squares loss function for generative adversarial networks. Minimizing this objective function is equivalent to minimizing the Pearson $\chi^{2}$ divergence. The objective function (here for [LSGAN](https://paperswithcode.com/method/lsgan)) can be defined as: $$ \min\_{D}V\_{LS}\...
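Assuming the standard LSGAN objective with target $1$ for real samples and $0$ for fake ones, the truncated losses can be sketched as:

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    """LSGAN discriminator loss: push scores on real data toward 1
    and scores on generated data toward 0."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    """LSGAN generator loss: push scores on generated data toward 1."""
    return 0.5 * np.mean((d_fake - 1.0) ** 2)
```

Unlike the sigmoid cross-entropy loss, these quadratic penalties keep gradients informative even for samples the discriminator classifies confidently, which is tied to the Pearson $\chi^{2}$ minimization mentioned above.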
Given the following machine learning model name: TimeSformer, provide a description of the model
**TimeSformer** is a [convolution](https://paperswithcode.com/method/convolution)-free approach to video classification built exclusively on self-attention over space and time. It adapts the standard [Transformer](https://paperswithcode.com/method/transformer) architecture to video by enabling spatiotemporal feature le...
Given the following machine learning model name: SimCSE, provide a description of the model
**SimCSE** is a contrastive learning framework for generating sentence embeddings. It utilizes an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard [dropout](https://paperswithcode.com/method/dropout) used as noise. The authors find that dropout acts a...
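The unsupervised objective is InfoNCE over two dropout-noised encodings of the same batch. A hedged NumPy sketch of just the loss (the encoder and its dropout masks are assumed to have produced `z1` and `z2` already; the temperature value is illustrative):

```python
import numpy as np

def simcse_loss(z1, z2, tau=0.05):
    """Unsupervised SimCSE objective sketch. z1 and z2 are embeddings of the
    SAME batch of sentences from two forward passes with independent dropout
    masks; each sentence's two views form the positive pair, and the other
    sentences in the batch serve as in-batch negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                    # (N, N) cosine-similarity logits
    logexp = np.log(np.exp(sim).sum(axis=1))
    return np.mean(logexp - np.diag(sim))    # cross-entropy; diagonal = positives
```

When the two views of each sentence match and differ from all other sentences, the diagonal dominates and the loss approaches zero.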
Given the following machine learning model name: 2D Discrete Wavelet Transform, provide a description of the model
Given the following machine learning model name: NesT, provide a description of the model
**NesT** stacks canonical transformer layers to conduct local self-attention on every image block independently, and then "nests" them hierarchically. Coupling of processed information between spatially adjacent blocks is achieved through a proposed block aggregation between every two hierarchies. The overall hierarchi...
Given the following machine learning model name: DeepViT, provide a description of the model
**DeepViT** is a type of [vision transformer](https://paperswithcode.com/method/vision-transformer) that replaces the self-attention layer within the [transformer](https://paperswithcode.com/method/transformer) block with a [Re-attention module](https://paperswithcode.com/method/re-attention-module) to address the issu...
Given the following machine learning model name: UCTransNet, provide a description of the model
**UCTransNet** is an end-to-end deep learning network for semantic segmentation that takes [U-Net](https://paperswithcode.com/method/u-net) as the main structure of the network. The original skip connections of U-Net are replaced by CTrans consisting of two components: [Channel-wise Cross fusion Transformer](https://pa...
Given the following machine learning model name: One-Shot Aggregation, provide a description of the model
**One-Shot Aggregation** is an image model block that is an alternative to [Dense Blocks](https://paperswithcode.com/method/dense-block), by aggregating intermediate features. It is proposed as part of the [VoVNet](https://paperswithcode.com/method/vovnet) architecture. Each [convolution](https://paperswithcode.com/met...
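The contrast with a dense block is that layers are chained sequentially and all intermediate features are concatenated only once, at the end. A minimal NumPy sketch, where `conv_stub` is a hypothetical stand-in for a real 3x3 convolution + activation:

```python
import numpy as np

def conv_stub(x, w):
    # Stand-in for a convolution layer: channel-mixing linear map + ReLU.
    return np.maximum(0, np.einsum('oc,chw->ohw', w, x))

def osa_block(x, weights):
    """One-shot aggregation: each layer consumes only its predecessor's
    output, and all intermediate feature maps are concatenated exactly
    once, at the end of the block."""
    feats = [x]
    h = x
    for w in weights:
        h = conv_stub(h, w)                  # sequential chain of layers
        feats.append(h)
    return np.concatenate(feats, axis=0)     # the one-shot aggregation

x = np.random.randn(4, 2, 2)
weights = [np.random.randn(4, 4) for _ in range(3)]
out = osa_block(x, weights)
```

Because each layer's input width stays constant, the block avoids the growing input channels (and the resulting memory-access cost) of dense connectivity.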
Given the following machine learning model name: True Online TD Lambda, provide a description of the model
**True Online $TD\left(\lambda\right)$** approximates the ideal online $\lambda$-return algorithm: it inverts this ideal forward-view algorithm to produce an efficient backward-view algorithm based on eligibility traces, using dutch traces rather than accumulating traces. Source: [Sutton and Seijen](h...
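A single update with linear function approximation $V(s) = \mathbf{w}^\top \mathbf{x}(s)$ can be sketched as below (following the standard dutch-trace update from Sutton and van Seijen; step-size and discount values are illustrative):

```python
import numpy as np

def true_online_td_lambda_step(w, z, x, x_next, reward, v_old,
                               alpha=0.1, gamma=0.99, lam=0.9):
    """One step of true online TD(lambda) with linear values V(s) = w.x(s).
    z is the dutch trace; v_old carries the previous step's value estimate.
    Returns the updated (w, z, v_old)."""
    v = w.dot(x)
    v_next = w.dot(x_next)
    delta = reward + gamma * v_next - v
    # Dutch trace: note the correction term absent from accumulating traces.
    z = gamma * lam * z + (1.0 - alpha * gamma * lam * z.dot(x)) * x
    w = w + alpha * (delta + v - v_old) * z - alpha * (v - v_old) * x
    return w, z, v_next   # v_next becomes v_old for the next step
```

The extra `(v - v_old)` terms are what make the backward view exactly match the online $\lambda$-return algorithm, rather than only approximately as in classic $TD(\lambda)$.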
Given the following machine learning model name: UNiversal Image-TExt Representation Learning, provide a description of the model
**UNITER**, or **UNiversal Image-TExt Representation**, is a large-scale pre-trained model for joint multimodal embedding. It is pre-trained using four image-text datasets: COCO, Visual Genome, Conceptual Captions, and SBU Captions. It can power heterogeneous downstream V+L tasks with joint multimodal embeddings. UNITER t...
Given the following machine learning model name: Stochastic Weight Averaging, provide a description of the model
**Stochastic Weight Averaging** is an optimization procedure that averages multiple points along the trajectory of [SGD](https://paperswithcode.com/method/sgd), with a cyclical or constant learning rate. On the one hand it averages weights, but it also has the property that, with a cyclical or constant learning rate, S...
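The averaging itself is just a running mean over weight snapshots collected along the SGD trajectory (e.g. one per learning-rate cycle). A minimal NumPy sketch:

```python
import numpy as np

def swa_update(w_swa, w_new, n_models):
    """Fold one more weight snapshot into the running SWA average.
    n_models: number of snapshots already averaged into w_swa."""
    return (w_swa * n_models + w_new) / (n_models + 1)

# Snapshots collected at the end of successive learning-rate cycles
# (illustrative values; in practice these are full model weight vectors).
snapshots = [np.full(3, v) for v in (1.0, 2.0, 3.0)]
w_swa = snapshots[0]
for i, w in enumerate(snapshots[1:], start=1):
    w_swa = swa_update(w_swa, w, i)
```

In practice the batch-normalization statistics must be recomputed for the averaged weights with one extra pass over the training data, since the running statistics of the individual snapshots no longer match.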
Given the following machine learning model name: efficient channel attention, provide a description of the model
An ECA block has a similar formulation to an SE block, including a squeeze module for aggregating global spatial information and an efficient excitation module for modeling cross-channel interaction. Instead of indirect correspondence, an ECA block only considers direct interaction between each channel and its k-nearest n...
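The local cross-channel interaction amounts to a 1D convolution of size k over the channel descriptor. A hedged NumPy sketch — the real block learns the k shared weights, whereas here a fixed averaging kernel stands in for them:

```python
import numpy as np

def eca_block(x, k=3):
    """Efficient Channel Attention sketch.
    x: feature map of shape (C, H, W). Squeeze by global average pooling,
    model local cross-channel interaction with a 1D convolution of kernel
    size k (each channel interacts with its k nearest neighbours), then
    sigmoid-gate the channels."""
    c = x.shape[0]
    s = x.mean(axis=(1, 2))               # squeeze: (C,)
    kernel = np.ones(k) / k               # stub for the k learned weights
    pad = k // 2
    s_pad = np.pad(s, pad, mode='edge')
    attn = np.array([kernel.dot(s_pad[i:i + k]) for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-attn))    # sigmoid excitation
    return x * gate[:, None, None]
```

Compared with SE's two fully-connected layers, the excitation here costs only k parameters and avoids the dimensionality reduction that SE applies to the channel descriptor.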
Given the following machine learning model name: 3DSSD, provide a description of the model
**3DSSD** is a point-based 3D single-stage object detector. In this paradigm, all upsampling layers and the refinement stage, which are indispensable in all existing point-based methods, are abandoned to reduce the large computation cost. The authors propose a fusion sampling strategy in the downsampling process ...
Given the following machine learning model name: Distance Shrinking with Angular Marginalizing Loss, provide a description of the model
Given the following machine learning model name: Locally-Grouped Self-Attention, provide a description of the model
**Locally-Grouped Self-Attention**, or **LSA**, is a local attention mechanism used in the [Twins-SVT](https://paperswithcode.com/method/twins-svt) architecture. Motivated by the group design in depthwise convolutions for efficient inference, it first equally divides the 2D feature ...
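The mechanism can be illustrated as plain self-attention restricted to non-overlapping sub-windows. A simplified NumPy sketch — queries, keys and values are taken equal to the input for brevity, whereas the real block uses learned projections and multiple heads:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def locally_grouped_attention(x, win):
    """LSA sketch: split the (H, W, C) feature map into non-overlapping
    win x win sub-windows and run self-attention inside each window only,
    so the cost grows linearly with the number of windows."""
    h, w, c = x.shape
    out = np.empty_like(x)
    for i in range(0, h, win):
        for j in range(0, w, win):
            blk = x[i:i+win, j:j+win].reshape(-1, c)   # tokens in one window
            attn = softmax(blk @ blk.T / np.sqrt(c))
            out[i:i+win, j:j+win] = (attn @ blk).reshape(win, win, c)
    return out
```

Since no information crosses window boundaries here, Twins-SVT pairs LSA with a complementary global sub-sampled attention to propagate information between windows.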
Given the following machine learning model name: GreedyNAS-B, provide a description of the model
**GreedyNAS-B** is a convolutional neural network discovered using the [GreedyNAS](https://paperswithcode.com/method/greedynas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method. The basic building blocks used are inverted residual blocks (from [MobileNetV2](https://paper...
Given the following machine learning model name: Fast Minimum-Norm Attack, provide a description of the model
**Fast Minimum-Norm Attack**, or **FMN**, is a type of adversarial attack that works with different $\ell_{p}$-norm perturbation models ($p=0,1,2,\infty$), is robust to hyperparameter choices, does not require adversarial starting points, and converges within a few lightweight steps. It works by iteratively finding the s...
Given the following machine learning model name: PowerSGD, provide a description of the model
**PowerSGD** is a distributed optimization technique that computes a low-rank approximation of the gradient using a generalized power iteration (known as subspace iteration). The approximation is computationally light-weight, avoiding any prohibitively expensive Singular Value Decomposition. To improve the quality of t...
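One compression step of the generalized power iteration can be sketched in NumPy as below (the distributed all-reduce of the two factors is omitted; only the single-worker linear algebra is shown):

```python
import numpy as np

def powersgd_compress(grad, q):
    """One generalized power-iteration (subspace iteration) step producing
    a rank-r approximation of the gradient, as in PowerSGD.
    grad: (n, m) gradient reshaped into a matrix; q: (m, r) factor reused
    from the previous step (warm start)."""
    p = grad @ q                 # (n, r) -- first factor, all-reduced by workers
    p, _ = np.linalg.qr(p)       # orthonormalize cheaply, no SVD required
    q = grad.T @ p               # (m, r) -- second all-reduced factor
    return p, q                  # decompressed gradient is p @ q.T
```

Reusing `q` across iterations is the warm-start trick that improves approximation quality: consecutive gradients are similar, so the subspace found at one step is a good starting point for the next.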
Given the following machine learning model name: Bridge-net, provide a description of the model
**Bridge-net** is an audio model block used in the [ClariNet](https://paperswithcode.com/method/clarinet) text-to-speech architecture. Bridge-net maps frame-level hidden representation to sample-level through several [convolution](https://paperswithcode.com/method/convolution) blocks and [transposed convolution](https:...
Given the following machine learning model name: Noisy Student, provide a description of the model
**Noisy Student Training** is a semi-supervised learning approach. It extends the idea of self-training and distillation with the use of equal-or-larger student models and noise added to the student during learning. It has three main steps: 1. train a teacher model on labeled images 2. use the teacher to generate...
Given the following machine learning model name: Distributed Distributional DDPG, provide a description of the model
**D4PG**, or **Distributed Distributional DDPG**, is a policy gradient algorithm that extends upon the [DDPG](https://paperswithcode.com/method/ddpg). The improvements include distributional updates to the DDPG algorithm, combined with the use of multiple distributed workers all writing into the same replay table. Th...