prompts (stringlengths 87–212)
description (stringlengths 0–6.76k)
Given the following machine learning model name: 1cycle learning rate scheduling policy, provide a description of the model
Given the following machine learning model name: TSRUc, provide a description of the model
**TSRUc**, or **Transformation-based Spatial Recurrent Unit c**, is a modification of a [ConvGRU](https://paperswithcode.com/method/cgru) used in the [TriVD-GAN](https://paperswithcode.com/method/trivd-gan) architecture for video generation. Instead of computing the reset gate $r$ and resetting $h\_{t-1}$, the TSRUc...
Given the following machine learning model name: Coordinate attention, provide a description of the model
Hou et al. proposed coordinate attention, a novel attention mechanism which embeds positional information into channel attention, so that the network can focus on large important regions at little computational cost. The coordinate attention mechanism has two consecutive steps, coordinate information embedding ...
Given the following machine learning model name: Hierarchical Information Threading, provide a description of the model
An unsupervised approach for identifying Hierarchical Information Threads by analysing the network of related articles in a collection. In particular, HINT leverages article timestamps and the 5W1H questions to identify related articles about an event or discussion. HINT then constructs a network representation of the ...
Given the following machine learning model name: Adaptive Instance Normalization, provide a description of the model
**Adaptive Instance Normalization** is a normalization method that aligns the mean and variance of the content features with those of the style features. [Instance Normalization](https://paperswithcode.com/method/instance-normalization) normalizes the input to a single style specified by the affine parameters. Adap...
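The mean/variance alignment described above can be sketched in a few lines of NumPy (a minimal illustration; the function name, array shapes, and `eps` are assumptions, not from the source):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization: align the per-channel mean and
    std of `content` features with those of `style` features.
    Arrays are (C, H, W); statistics are computed per channel."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mean) / (c_std + eps) + s_mean

rng = np.random.default_rng(0)
content = rng.normal(2.0, 3.0, (8, 16, 16))
style = rng.normal(-1.0, 0.5, (8, 16, 16))
out = adain(content, style)
# the output's per-channel statistics now match the style's
```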
Given the following machine learning model name: POMO, provide a description of the model
Given the following machine learning model name: Spectral Dropout, provide a description of the model
**Spectral Dropout** is a regularization method that performs dropout in the frequency domain: network activations are transformed with the discrete cosine transform, weak and noisy spectral coefficients are dropped, and the result is transformed back. This regularizes training while yielding more compact feature representations.
Given the following machine learning model name: Graph-to-Tree MWP Solver, provide a description of the model
Given the following machine learning model name: A Dynamic Multi-Scale Voxel Flow Network, provide a description of the model
Given the following machine learning model name: Parrot, provide a description of the model
**Parrot** is an imitation learning approach to automatically learn cache access patterns by leveraging Belady’s optimal policy. Belady’s optimal policy is an oracle policy that computes the theoretically optimal cache eviction decision based on knowledge of future cache accesses, which Parrot approximates with a polic...
Given the following machine learning model name: Hierarchical Variational Autoencoder, provide a description of the model
Given the following machine learning model name: Taylor Expansion Policy Optimization, provide a description of the model
**TayPO**, or **Taylor Expansion Policy Optimization**, refers to a set of algorithms that apply the $k$-th order Taylor expansions for policy optimization. This generalizes prior work, including [TRPO](https://paperswithcode.com/method/trpo) as a special case. It can be thought of unifying ideas from trust-region poli...
Given the following machine learning model name: Ghost Bottleneck, provide a description of the model
A **Ghost BottleNeck** is a skip connection block, similar to the basic [residual block](https://paperswithcode.com/method/residual-block) in [ResNet](https://paperswithcode.com/method/resnet) in which several convolutional layers and shortcuts are integrated, but stacks [Ghost Modules](https://paperswithcode.com/metho...
Given the following machine learning model name: Inception-v3, provide a description of the model
**Inception-v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an auxiliary classifier to propagate label information lower down t...
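Of the listed improvements, label smoothing is the easiest to make concrete: the one-hot target is mixed with a uniform distribution over all classes. A minimal sketch (function name and `eps` value are illustrative):

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """Label smoothing: mix the one-hot target with a uniform
    distribution, so the true class gets 1 - eps + eps/K and every
    other class gets eps/K."""
    one_hot = np.eye(num_classes)[labels]
    return (1.0 - eps) * one_hot + eps / num_classes

targets = smooth_labels(np.array([2, 0]), num_classes=4, eps=0.1)
# each row still sums to 1
```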
Given the following machine learning model name: Strided EESP, provide a description of the model
A **Strided EESP** unit is based on the [EESP Unit](https://paperswithcode.com/method/eesp) but is modified to learn representations more efficiently at multiple scales. Depth-wise dilated convolutions are given strides, an [average pooling](https://paperswithcode.com/method/average-pooling) operation is added instead ...
Given the following machine learning model name: SESAME Discriminator, provide a description of the model
Extends [PatchGAN](https://paperswithcode.com/method/patchgan) discriminator for the task of layout2image generation. The discriminator is comprised of two processing streams: one for the RGB image and one for its semantics, which are fused together at the later stages of the discriminator.
Given the following machine learning model name: Active Convolution, provide a description of the model
An **Active Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) which does not have a fixed shape of the receptive field, and can be used to take more diverse forms of receptive fields for convolutions. Its shape can be learned through backpropagation during training. It can be seen ...
Given the following machine learning model name: Mixing Adam and SGD, provide a description of the model
This optimizer mixes [ADAM](https://paperswithcode.com/method/adam) and [SGD](https://paperswithcode.com/method/sgd), creating the **MAS** (Mixing Adam and SGD) optimizer.
Given the following machine learning model name: Mechanism Transfer, provide a description of the model
**Mechanism Transfer** is a meta-distributional scenario for few-shot domain adaptation in which a data generating mechanism is invariant across domains. This transfer assumption can accommodate nonparametric shifts resulting in apparently different distributions while providing a solid statistical basis for domain ada...
Given the following machine learning model name: UNet++, provide a description of the model
UNet++ is an architecture for semantic segmentation based on the [U-Net](https://paperswithcode.com/method/u-net). Through the use of densely connected nested decoder sub-networks, it enhances extracted feature processing and was reported by its authors to outperform the U-Net in [Electron Microscopy (EM)](https://imag...
Given the following machine learning model name: Projection Discriminator, provide a description of the model
A **Projection Discriminator** is a type of discriminator for generative adversarial networks. It is motivated by a probabilistic model in which the distribution of the conditional variable $\textbf{y}$ given $\textbf{x}$ is discrete or uni-modal continuous distributions. If we look at the original solution for the ...
Given the following machine learning model name: Average Pooling, provide a description of the model
**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not...
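A minimal NumPy sketch of the operation described above (non-overlapping windows; the function name and the 2×2 window size are illustrative):

```python
import numpy as np

def average_pool(x, size=2):
    """Non-overlapping average pooling over a 2-D feature map.
    Assumes H and W are divisible by `size`."""
    h, w = x.shape
    return x.reshape(h // size, size, w // size, size).mean(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
pooled = average_pool(fmap)  # downsampled (2, 2) map of patch means
```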
Given the following machine learning model name: FastSpeech 2, provide a description of the model
**FastSpeech2** is a text-to-speech model that aims to improve upon FastSpeech by better solving the one-to-many mapping problem in TTS, i.e., multiple speech variations corresponding to the same text. It attempts to solve this problem by 1) directly training the model with ground-truth target instead of the simplified...
Given the following machine learning model name: 1-bit Adam, provide a description of the model
**1-bit Adam** is a [stochastic optimization](https://paperswithcode.com/methods/category/stochastic-optimization) technique that is a variant of [ADAM](https://paperswithcode.com/method/adam) with error-compensated 1-bit compression, based on the finding that Adam's variance term becomes stable at an early stage. First va...
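The error-compensated 1-bit compression at the heart of the method can be sketched as follows (an illustrative simplification of one compression step, not the full 1-bit Adam communication protocol; names are assumptions):

```python
import numpy as np

def one_bit_compress(grad, error):
    """Error-compensated 1-bit compression: add the residual carried
    over from the previous step, transmit only signs scaled by the mean
    magnitude, and keep the new residual locally for the next step."""
    corrected = grad + error
    scale = np.abs(corrected).mean()
    compressed = scale * np.sign(corrected)
    new_error = corrected - compressed
    return compressed, new_error

g = np.array([0.5, -0.2, 0.1, -0.8])
c, e = one_bit_compress(g, np.zeros_like(g))
# c carries only 1 bit of direction per coordinate (plus one shared scale);
# the residual e is fed back into the next step, so no information is lost
```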
Given the following machine learning model name: DeBERTa, provide a description of the model
**DeBERTa** is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based neural language model that aims to improve the [BERT](https://paperswithcode.com/method/bert) and [RoBERTa](https://paperswithcode.com/method/roberta) models with two techniques: a [disentangled attention mechanism](https://p...
Given the following machine learning model name: SkipInit, provide a description of the model
**SkipInit** is a method that aims to allow [normalization](https://paperswithcode.com/methods/category/normalization)-free training of neural networks by downscaling [residual branches](https://paperswithcode.com/method/residual-block) at initialization. This is achieved by including a learnable scalar multiplier at ...
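The idea reduces to a single learnable scalar on the residual branch, initialized to zero; a minimal sketch (`branch_fn` stands in for the block's convolutional layers and is an assumption):

```python
import numpy as np

def skipinit_residual_block(x, branch_fn, alpha):
    """Residual block with a learnable scalar multiplier on the branch.
    SkipInit sets alpha = 0 at initialization, so every block starts
    as the identity and training needs no normalization layers."""
    return x + alpha * branch_fn(x)

x = np.array([1.0, 2.0, 3.0])
branch = lambda t: t ** 2  # placeholder for the residual branch
out = skipinit_residual_block(x, branch, alpha=0.0)
# with alpha = 0 the block reduces to the identity mapping
```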
Given the following machine learning model name: GRLIA, provide a description of the model
**GRLIA** is an incident aggregation framework for online service systems based on graph representation learning over the cascading graph of cloud failures. A representation vector is learned for each unique type of incident in an unsupervised and unified manner, which is able to simultaneously encode the topological a...
Given the following machine learning model name: Recurrent Event Network, provide a description of the model
Recurrent Event Network (RE-NET) is an autoregressive architecture for predicting future interactions. The occurrence of a fact (event) is modeled as a probability distribution conditioned on temporal sequences of past knowledge graphs. RE-NET employs a recurrent event encoder to encode past facts and uses a neighborho...
Given the following machine learning model name: RTMDet: An Empirical Study of Designing Real-Time Object Detectors, provide a description of the model
**RTMDet** is a family of efficient real-time object detectors. Its basic building block uses large-kernel depth-wise convolutions in the backbone and neck, and soft labels are used when computing matching costs in the dynamic label assignment. With few modifications, RTMDet extends to instance segmentation and rotated object detection.
Given the following machine learning model name: MDETR, provide a description of the model
**MDETR** is an end-to-end modulated detector that detects objects in an image conditioned on a raw text query, like a caption or a question. It utilizes a [transformer](https://paperswithcode.com/method/transformer)-based architecture to reason jointly over text and image by fusing the two modalities at an early stage...
Given the following machine learning model name: Spatially Separable Self-Attention, provide a description of the model
**Spatially Separable Self-Attention**, or **SSSA**, is an [attention module](https://paperswithcode.com/methods/category/attention-modules) used in the [Twins-SVT](https://paperswithcode.com/method/twins-svt) architecture that aims to reduce the computational complexity of [vision transformers](https://paperswithcode....
Given the following machine learning model name: Anycost GAN, provide a description of the model
**Anycost GAN** is a type of generative adversarial network for image synthesis and editing. Given an input image, we project it into the latent space with encoder $E$ and backward optimization. We can modify the latent code with user input to edit the image. During editing, a sub-generator of small cost is used for fa...
Given the following machine learning model name: Concrete Dropout, provide a description of the model
**Concrete Dropout** is a dropout variant that replaces the discrete Bernoulli dropout masks with a continuous relaxation (the Concrete distribution), so that dropout probabilities can be optimized by gradient descent rather than grid search. This yields well-calibrated uncertainty estimates while allowing the dropout rate of each layer to adapt during training.
Given the following machine learning model name: Rainbow DQN, provide a description of the model
**Rainbow DQN** is an extended [DQN](https://paperswithcode.com/method/dqn) that combines several improvements into a single learner. Specifically: - It uses [Double Q-Learning](https://paperswithcode.com/method/double-q-learning) to tackle overestimation bias. - It uses [Prioritized Experience Replay](https://pape...
Given the following machine learning model name: Highway networks, provide a description of the model
There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture des...
Given the following machine learning model name: Spatio-Temporal Attention LSTM, provide a description of the model
In human action recognition, each type of action generally only depends on a few specific kinematic joints. Furthermore, over time, multiple actions may be performed. Motivated by these observations, Song et al. proposed a joint spatial and temporal attention network based on LSTM, to adaptively find discrimina...
Given the following machine learning model name: DeepIR, provide a description of the model
**DeepIR**, or **Deep InfraRed image processing**, is a thermal image processing framework for recovering high quality images from a very small set of images captured with camera motion. Enhancement is achieved by noting that camera motion, which is usually a hinderance, can be exploited to our advantage to separate a ...
Given the following machine learning model name: Fragmentation, provide a description of the model
Given a pattern $P$ that is more complicated than the base patterns, we fragment $P$ into simpler patterns whose exact counts are known. The subgraph GNN proposed earlier looks into subgraphs of the host graph. We have seen that this technique is scalable on large graphs. Also, we have seen that subgraph GNN ...
Given the following machine learning model name: Rational Activation Function, provide a description of the model
**Rational Activation Functions** are activation functions parameterized as ratios of polynomials, whose coefficients are learned during training.
Given the following machine learning model name: Evolved Sign Momentum, provide a description of the model
**Lion** (Evolved Sign Momentum) is an optimizer discovered through symbolic program search. It is more memory-efficient than most adaptive optimizers because it only keeps track of the momentum. Lion's update is produced by the sign function.
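The sign-based update can be sketched as follows (a minimal sketch of the published update rule; hyperparameter values are illustrative):

```python
import numpy as np

def lion_step(param, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion step: the update direction is the sign of an
    interpolation between the momentum and the current gradient,
    and only the momentum buffer `m` is stored."""
    update = np.sign(beta1 * m + (1 - beta1) * grad)
    param = param - lr * (update + wd * param)
    m = beta2 * m + (1 - beta2) * grad
    return param, m

p, m = np.array([1.0, -1.0]), np.zeros(2)
p, m = lion_step(p, np.array([0.3, -0.7]), m, lr=0.1)
# every coordinate moves by exactly lr in the sign direction
```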
Given the following machine learning model name: Connectionist Temporal Classification Loss, provide a description of the model
A **Connectionist Temporal Classification Loss**, or **CTC Loss**, is designed for tasks where we need alignment between sequences, but where that alignment is difficult - e.g. aligning each character to its location in an audio file. It calculates a loss between a continuous (unsegmented) time series and a target sequ...
Given the following machine learning model name: PEGASUS, provide a description of the model
**PEGASUS** proposes a transformer-based model for abstractive summarization. It uses a special self-supervised pre-training objective called gap-sentences generation (GSG) that's designed to perform well on summarization-related downstream tasks. As reported in the paper, "both GSG and MLM are applied simultaneously t...
Given the following machine learning model name: ShuffleNet v2, provide a description of the model
**ShuffleNet v2** is a convolutional neural network optimized for a direct metric (speed) rather than indirect metrics like FLOPs. It builds upon [ShuffleNet v1](https://paperswithcode.com/method/shufflenet), which utilised pointwise group convolutions, bottleneck-like structures, and a [channel shuffle](https://papers...
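The channel shuffle operation it inherits from ShuffleNet v1 is a reshape-transpose-reshape that lets information flow across channel groups; a minimal NumPy sketch (names and shapes are illustrative):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Channel shuffle: view the C channels as (groups, C // groups),
    transpose the two axes, and flatten back, interleaving channels
    from different groups."""
    c, h, w = x.shape
    return (x.reshape(groups, c // groups, h, w)
             .transpose(1, 0, 2, 3)
             .reshape(c, h, w))

# label each channel with its index to watch the permutation
x = np.arange(4)[:, None, None] * np.ones((4, 2, 2))
y = channel_shuffle(x, groups=2)
# channel order [0, 1, 2, 3] becomes [0, 2, 1, 3]
```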
Given the following machine learning model name: NormFormer, provide a description of the model
**NormFormer** is a type of [Pre-LN](https://paperswithcode.com/method/layer-normalization) transformer that adds three normalization operations to each layer: a Layer Norm after self attention, head-wise scaling of self-attention outputs, and a Layer Norm after the first [fully connected layer](https://paperswithcode....
Given the following machine learning model name: Message Passing Neural Network, provide a description of the model
There are at least eight notable examples of models from the literature that can be described using the **Message Passing Neural Networks** (**MPNN**) framework. For simplicity we describe MPNNs which operate on undirected graphs $G$ with node features $x_{v}$ and edge features $e_{vw}$. It is trivial to extend the for...
Given the following machine learning model name: Video Language Graph Matching Network, provide a description of the model
VLG-Net leverages recent advances in Graph Neural Networks (GNNs) and introduces a novel multi-modality graph-based fusion method for the task of natural language video grounding.
Given the following machine learning model name: Temporal Jittering, provide a description of the model
**Temporal Jittering** is a method used in deep learning for video, where we sample multiple training clips from each video with random start times at every epoch.
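The sampling step is simple to sketch (function and parameter names are illustrative; the returned pairs are frame indices):

```python
import random

def sample_clips(video_len, clip_len, clips_per_video, rng=random):
    """Temporal jittering: draw several clips per video with random
    start times, re-drawn every epoch. Returns (start, end) pairs."""
    starts = [rng.randrange(0, video_len - clip_len + 1)
              for _ in range(clips_per_video)]
    return [(s, s + clip_len) for s in starts]

clips = sample_clips(video_len=300, clip_len=32, clips_per_video=4)
# 4 clips of 32 frames, each starting at a random in-bounds offset
```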
Given the following machine learning model name: Gravity, provide a description of the model
Gravity is a kinematic approach to optimization based on gradients.
Given the following machine learning model name: LayoutLMv2, provide a description of the model
**LayoutLMv2** is an architecture and pre-training method for document understanding. The model is pre-trained with a great number of unlabeled scanned document images from the IIT-CDIP dataset, where some images in the text-image pairs are randomly replaced with another document image to make the model learn whether t...
Given the following machine learning model name: Multiplex Molecular Graph Neural Network, provide a description of the model
The **Multiplex Molecular Graph Neural Network (MXMNet)** is an approach for the representation learning of molecules. The molecular interactions are divided into two categories: local and global. Then a two-layer multiplex graph $G = \\{ G_{l}, G_{g} \\}$ is constructed for a molecule. In $G$, the local layer $G_{l}$ ...
Given the following machine learning model name: StyleSwin: Transformer-based GAN for High-resolution Image Generation, provide a description of the model
Despite the tantalizing success in a broad range of vision tasks, transformers have not yet demonstrated on-par ability with ConvNets in high-resolution image generative modeling. In this paper, we seek to explore using pure transformers to build a generative adversarial network for high-resolution image synthesis. To this end...
Given the following machine learning model name: Primer, provide a description of the model
**Primer** is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based architecture that improves upon the [Transformer](https://paperswithcode.com/method/transformer) architecture with two improvements found through [neural architecture search](https://paperswithcode.com/methods/category/neural-...
Given the following machine learning model name: State-Aware Tracker, provide a description of the model
**State-Aware Tracker** is a pipeline for semi-supervised video object segmentation. It takes each target object as a tracklet, which not only makes the pipeline more efficient but also filters distractors to facilitate target modeling. For more stable and robust performance over video sequences, SAT gets awareness for...
Given the following machine learning model name: BigGAN, provide a description of the model
**BigGAN** is a type of generative adversarial network that was designed for scaling generation to high-resolution, high-fidelity images. It includes a number of incremental changes and innovations. The baseline and incremental changes are: - Using [SAGAN](https://paperswithcode.com/method/sagan) as a baseline with ...
Given the following machine learning model name: Sandwich Transformer, provide a description of the model
A **Sandwich Transformer** is a variant of a [Transformer](https://paperswithcode.com/method/transformer) that reorders sublayers in the architecture to achieve better performance. The reordering is based on the authors' analysis that models with more self-attention toward the bottom and more feedforward sublayers tow...
Given the following machine learning model name: Long Short-Term Memory, provide a description of the model
An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components,...
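The additive cell update can be made concrete with a single-step NumPy sketch (weights are random placeholders; the stacked gate layout is one common convention, an assumption here):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W, U, b hold the stacked input, forget, output
    and candidate parameters; the additive update `c = f*c + i*g` is
    the component that keeps gradients from vanishing."""
    n = h.shape[0]
    z = W @ x + U @ h + b          # all four pre-activations at once
    i = sigmoid(z[:n])             # input gate
    f = sigmoid(z[n:2 * n])        # forget gate
    o = sigmoid(z[2 * n:3 * n])    # output gate
    g = np.tanh(z[3 * n:])         # candidate cell state
    c = f * c + i * g              # additive cell update
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
n, d = 3, 2
h, c = np.zeros(n), np.zeros(n)
h, c = lstm_step(rng.normal(size=d), h, c,
                 rng.normal(size=(4 * n, d)),
                 rng.normal(size=(4 * n, n)),
                 np.zeros(4 * n))
```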
Given the following machine learning model name: Batch Transformer, provide a description of the model
The **Batch Transformer** learns to explore relationships between samples in a batch via transformer networks.
Given the following machine learning model name: Single-path NAS, provide a description of the model
**Single-Path NAS** is a convolutional neural network architecture discovered through the Single-Path [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) approach. The NAS utilises a single-path search space. Specifically, compared to previous differentiable NAS methods, Single-P...
Given the following machine learning model name: BiDet, provide a description of the model
**BiDet** is a binarized neural network learning method for efficient object detection. Conventional network binarization methods directly quantize the weights and activations in one-stage or two-stage detectors with constrained representational capacity, so that the information redundancy in the networks causes numero...
Given the following machine learning model name: Knowledge Graph Refiner, provide a description of the model
Given the following machine learning model name: Asymmetrical Bi-RNN, provide a description of the model
An aspect of Bi-RNNs that could be undesirable is the architecture's symmetry in both time directions. Bi-RNNs are often used in natural language processing, where the order of the words is almost exclusively determined by grammatical rules and not by temporal sequentiality. However, in some cases, the data has a ...
Given the following machine learning model name: Sticker Response Selector, provide a description of the model
**Sticker Response Selector**, or **SRS**, is a model for multi-turn dialog that automatically selects a sticker response. SRS first employs a convolutional based sticker image encoder and a self-attention based multi-turn dialog encoder to obtain the representation of stickers and utterances. Next, deep interaction ne...
Given the following machine learning model name: ORB-Simultaneous localization and mapping, provide a description of the model
ORB-SLAM2 is a complete SLAM system for monocular, stereo and RGB-D cameras, including map reuse, loop closing and relocalization capabilities. The system works in real-time on standard CPUs in a wide variety of environments from small hand-held indoor sequences, to drones flying in industrial environments and cars dr...
Given the following machine learning model name: AccoMontage, provide a description of the model
**AccoMontage** is a model for accompaniment arrangement, a type of music generation task involving intertwined constraints of melody, harmony, texture, and music structure. AccoMontage generates piano accompaniments for folk/pop songs based on a lead sheet (i.e. a melody with chord progression). It first retrieves phr...
Given the following machine learning model name: ENet Initial Block, provide a description of the model
The **ENet Initial Block** is an image model block used in the [ENet](https://paperswithcode.com/method/enet) semantic segmentation architecture. [Max Pooling](https://paperswithcode.com/method/max-pooling) is performed with non-overlapping 2 × 2 windows, and the [convolution](https://paperswithcode.com/method/convolut...
Given the following machine learning model name: LAMB, provide a description of the model
**LAMB** is a layerwise adaptive large batch optimization technique. It provides a strategy for adapting the learning rate in large batch settings. LAMB uses [Adam](https://paperswithcode.com/method/adam) as the base algorithm and then forms an update as: $$r\_{t} = \frac{m\_{t}}{\sqrt{v\_{t}} + \epsilon}$$ $$x\_...
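Building on the Adam-style ratio $r_t$ above, one simplified LAMB step rescales the update per layer by the trust ratio $\|x\| / \|\text{update}\|$; a sketch (bias correction omitted for brevity, hyperparameters illustrative):

```python
import numpy as np

def lamb_step(x, grad, m, v, lr=1e-3, b1=0.9, b2=0.999, eps=1e-6, wd=0.01):
    """One simplified LAMB update: Adam-style direction r, plus decoupled
    weight decay, rescaled by the layerwise trust ratio."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    r = m / (np.sqrt(v) + eps)
    update = r + wd * x
    trust = np.linalg.norm(x) / max(np.linalg.norm(update), 1e-12)
    x = x - lr * trust * update
    return x, m, v

x, m, v = lamb_step(np.ones(4), np.full(4, 0.5), np.zeros(4), np.zeros(4))
# the trust ratio normalizes the step so each layer moves on the
# scale of lr * ||x||, regardless of the raw gradient magnitude
```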
Given the following machine learning model name: Slot Attention, provide a description of the model
**Slot Attention** is an architectural component that interfaces with perceptual representations such as the output of a convolutional neural network and produces a set of task-dependent abstract representations which we call slots. These slots are exchangeable and can bind to any object in the input by specializing th...
Given the following machine learning model name: Random Ensemble Mixture, provide a description of the model
Random Ensemble Mixture (REM) is an easy to implement extension of [DQN](https://paperswithcode.com/method/dqn) inspired by [Dropout](https://paperswithcode.com/method/dropout). The key intuition behind REM is that if one has access to multiple estimates of Q-values, then a weighted combination of the Q-value estimates...
Given the following machine learning model name: Pix2Pix, provide a description of the model
**Pix2Pix** is a conditional image-to-image translation architecture that uses a conditional [GAN](https://paperswithcode.com/method/gan) objective combined with a reconstruction loss. The conditional GAN objective for observed images $x$, output images $y$ and the random noise vector $z$ is: $$ \mathcal{L}\_{cGAN}\...
Given the following machine learning model name: BLIP: Bootstrapping Language-Image Pre-training, provide a description of the model
Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-te...
Given the following machine learning model name: Handwritten OCR augmentation, provide a description of the model
We introduce a universal, language-agnostic augmentation method for handwritten images that can be applied to handwriting in any language. Four augmentation methods for handwritten images are provided: ThickOCR, ThinOCR, Elongate OCR, Line...
Given the following machine learning model name: Phase Gradient Heap Integration, provide a description of the model
Z. Průša, P. Balazs and P. L. Søndergaard, "A Noniterative Method for Reconstruction of Phase From STFT Magnitude," in IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 5, pp. 1154-1164, May 2017, doi: 10.1109/TASLP.2017.2678166. Abstract: A noniterative method for the reconstruction of the...
Given the following machine learning model name: Neighborhood Attention, provide a description of the model
Neighborhood Attention is a restricted self attention pattern in which each token's receptive field is limited to its nearest neighboring pixels. It was proposed in [Neighborhood Attention Transformer](https://paperswithcode.com/paper/neighborhood-attention-transformer) as an alternative to other local attention mechan...
Given the following machine learning model name: Triplet Loss, provide a description of the model
The goal of **Triplet loss**, in the context of Siamese Networks, is to maximize the joint probability among all score-pairs i.e. the product of all probabilities. By using its negative logarithm, we can get the loss formulation as follows: $$ L\_{t}\left(\mathcal{V}\_{p}, \mathcal{V}\_{n}\right)=-\frac{1}{M N} \su...
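The description above gives a probabilistic formulation; for contrast, the widely used margin-based variant of the triplet loss is easy to sketch (a different but related formulation; names and the margin value are illustrative):

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    """Margin-based triplet loss: pull the anchor toward the positive
    and push it from the negative until the distance gap exceeds
    `margin`; beyond that the triplet contributes zero loss."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

a = np.array([0.0, 0.0])
loss = triplet_margin_loss(a, np.array([0.1, 0.0]), np.array([1.0, 0.0]))
# the margin is already satisfied, so this triplet's loss is 0
```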
Given the following machine learning model name: Spatially Separable Convolution, provide a description of the model
A **Spatially Separable Convolution** decomposes a [convolution](https://paperswithcode.com/method/convolution) into two separate operations. In regular convolution, if we have a 3 x 3 kernel then we directly convolve this with the image. We can divide a 3 x 3 kernel into a 3 x 1 kernel and a 1 x 3 kernel. Then, in spa...
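The decomposition is exact whenever the 3 x 3 kernel is rank-1 (an outer product of a 3 x 1 and a 1 x 3 kernel); a NumPy sketch verifying the equivalence (the Sobel-style kernel choice is illustrative):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 'valid' 2-D cross-correlation."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

col = np.array([[1.0], [2.0], [1.0]])   # 3x1 kernel
row = np.array([[1.0, 0.0, -1.0]])      # 1x3 kernel
full = col @ row                        # rank-1 3x3 kernel (Sobel-like)

img = np.random.default_rng(0).normal(size=(6, 6))
direct = conv2d_valid(img, full)                       # one 3x3 pass
separable = conv2d_valid(conv2d_valid(img, col), row)  # 3x1 then 1x3
# both produce the same output, but the separable version needs
# 6 multiplications per position instead of 9
```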
Given the following machine learning model name: DBlock, provide a description of the model
**DBlock** is a residual based block used in the discriminator of the [GAN-TTS](https://paperswithcode.com/method/gan-tts) architecture. They are similar to the [GBlocks](https://paperswithcode.com/method/gblock) used in the generator, but without batch normalisation.
Given the following machine learning model name: Log-time and Log-space Extreme Classification, provide a description of the model
**LTLS** is a technique for multiclass and multilabel prediction that can perform training and inference in logarithmic time and space. LTLS embeds large classification problems into simple structured prediction problems and relies on efficient dynamic programming algorithms for inference. It tackles extreme multi-clas...
Given the following machine learning model name: Automatic Structured Variational Inference, provide a description of the model
**Automatic Structured Variational Inference (ASVI)** is a fully automated method for constructing structured variational families, inspired by the closed-form update in conjugate Bayesian models. These convex-update families incorporate the forward pass of the input probabilistic program and can therefore capture comp...
Given the following machine learning model name: DeepLabv2, provide a description of the model
**DeepLabv2** is an architecture for semantic segmentation that builds on [DeepLab](https://paperswithcode.com/method/deeplab) with an atrous [spatial pyramid pooling](https://paperswithcode.com/method/spatial-pyramid-pooling) scheme. Here we have parallel dilated convolutions with different rates applied in the input f...
Given the following machine learning model name: SGDW, provide a description of the model
**SGDW** is a stochastic optimization technique that decouples [weight decay](https://paperswithcode.com/method/weight-decay) from the gradient update: $$ g\_{t} = \nabla{f\_{t}}\left(\theta\_{t-1}\right) + \lambda\theta\_{t-1}$$ $$ m\_{t} = \beta\_{1}m\_{t-1} + \eta\_{t}\alpha{g}\_{t}$$ $$ \theta\_{t} = \th...
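Following the equations above, the decoupled update applies the weight-decay term directly to the parameters rather than folding it into the gradient; a sketch (hyperparameter values are illustrative):

```python
import numpy as np

def sgdw_step(theta, grad, m, alpha=0.1, beta1=0.9, wd=1e-4, eta=1.0):
    """One SGDW step: momentum accumulates the raw gradient, and the
    weight decay eta * wd * theta is applied directly to the
    parameters, decoupled from the gradient update."""
    m = beta1 * m + eta * alpha * grad
    theta = theta - m - eta * wd * theta
    return theta, m

theta, m = np.array([1.0, -2.0]), np.zeros(2)
theta, m = sgdw_step(theta, np.array([0.5, 0.5]), m)
# each parameter shrinks toward zero by wd * |theta| per step,
# independently of its gradient magnitude
```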
Given the following machine learning model name: AdaRNN, provide a description of the model
**AdaRNN** is an adaptive [RNN](https://paperswithcode.com/methods/category/recurrent-neural-networks) that learns an adaptive model through two modules: [Temporal Distribution Characterization](https://paperswithcode.com/method/temporal-distribution-characterization) (TDC) and [Temporal Distribution Matching](https://...
Given the following machine learning model name: Gaussian Process, provide a description of the model
**Gaussian Processes** are non-parametric models for approximating functions. They rely upon a measure of similarity between points (the kernel function) to predict the value for an unseen point from training data. The models are fully probabilistic so uncertainty bounds are baked in with the model. Image Source: Ga...
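The kernel-based prediction with built-in uncertainty can be sketched as noise-free GP regression with an RBF kernel (a minimal sketch; function names, the kernel choice, and the jitter value are illustrative):

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel: similarity decays with distance."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(x_train, y_train, x_test, jitter=1e-6):
    """GP regression posterior mean and variance (1-D inputs)."""
    K = rbf_kernel(x_train, x_train) + jitter * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train)
    K_ss = rbf_kernel(x_test, x_test)
    mean = K_s @ np.linalg.solve(K, y_train)
    var = np.diag(K_ss - K_s @ np.linalg.solve(K, K_s.T))
    return mean, var

x = np.array([-1.0, 0.0, 1.0])
mean, var = gp_predict(x, np.sin(x), np.array([0.0, 2.0]))
# at a training point the prediction matches the data and the variance
# is near zero; far from the data the variance grows toward the prior
```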
Given the following machine learning model name: Parallel Feature Pyramid Network, provide a description of the model
Given the following machine learning model name: ZoomNet, provide a description of the model
**ZoomNet** is a 2D human whole-body pose estimation technique. It aims to localize dense landmarks on the entire human body including face, hands, body, and feet. ZoomNet follows the top-down paradigm. Given a human bounding box of each person, ZoomNet first localizes the easy-to-detect body keypoints and estimates th...
Given the following machine learning model name: CornerNet-Saccade, provide a description of the model
**CornerNet-Saccade** is an extension of [CornerNet](https://paperswithcode.com/method/cornernet) with an attention mechanism similar to saccades in human vision. It starts with a downsized full image and generates an attention map, which is then zoomed in on and processed further by the model. This differs from the or...
Given the following machine learning model name: spatial transformer networks, provide a description of the model
Spatial transformer networks use an explicit procedure to learn invariance to translation, scaling, rotation, and other, more general warps, making the network pay attention to the most relevant regions. The STN was the first attention mechanism to explicitly predict important regions and provide a deep neural network with ...
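The core of an STN's sampler can be sketched as an affine grid plus sampling; this NumPy illustration uses nearest-neighbor lookup for simplicity, whereas the actual module uses differentiable bilinear sampling so the transform parameters can be learned:

```python
import numpy as np

def affine_grid(theta, H, W):
    """Map each output pixel to a source location via a 2x3 affine matrix,
    using normalized coordinates in [-1, 1]."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])  # (3, H*W)
    src = theta @ coords                                          # (2, H*W)
    return src[0].reshape(H, W), src[1].reshape(H, W)

def sample_nearest(img, theta):
    """Warp `img` with the affine transform (nearest-neighbor sampling)."""
    H, W = img.shape
    sx, sy = affine_grid(theta, H, W)
    ix = np.clip(np.round((sx + 1) * (W - 1) / 2), 0, W - 1).astype(int)
    iy = np.clip(np.round((sy + 1) * (H - 1) / 2), 0, H - 1).astype(int)
    return img[iy, ix]

img = np.arange(16.0).reshape(4, 4)
identity = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
out = sample_nearest(img, identity)   # identity transform reproduces the image
```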
Given the following machine learning model name: SlowMo, provide a description of the model
**Slow Momentum** (SlowMo) is a distributed optimization method where workers periodically synchronize and perform a momentum update, after multiple iterations of a base optimization algorithm. Periodically, after taking some number $\tau$ of base algorithm steps, workers average their parameters using ALLREDUCE and p...
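A simplified single-process sketch of one SlowMo round (hedged: the workers, ALLREDUCE, and hyperparameter names here are illustrative, and the base optimizer is plain SGD rather than the paper's more general choices):

```python
import numpy as np

def slowmo_round(x, grads_fn, workers=4, tau=5, base_lr=0.1,
                 slow_lr=1.0, slow_beta=0.5, u=None):
    """One SlowMo round: each worker takes `tau` base-SGD steps from the
    shared point `x`, the results are averaged (standing in for ALLREDUCE),
    and a slow momentum update is applied to the shared parameters."""
    if u is None:
        u = np.zeros_like(x)
    locals_ = []
    for w in range(workers):
        xi = x.copy()
        for _ in range(tau):
            xi -= base_lr * grads_fn(xi, w)    # base optimizer: plain SGD
        locals_.append(xi)
    x_avg = np.mean(locals_, axis=0)           # exact averaging of worker params
    u = slow_beta * u + (x - x_avg) / slow_lr  # slow momentum buffer
    x = x - slow_lr * u                        # slow (outer) update
    return x, u

# Usage on a toy quadratic: every worker's gradient is just its parameters.
x_new, u_new = slowmo_round(np.ones(2), lambda xi, w: xi)
```

With `slow_beta=0` the round reduces to plain parameter averaging (local SGD), which is the degenerate case the momentum term improves on.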
Given the following machine learning model name: TResNet, provide a description of the model
A **TResNet** is a variant of [ResNet](https://paperswithcode.com/method/resnet) that aims to boost accuracy while maintaining GPU training and inference efficiency. TResNets contain several design tricks, including a SpaceToDepth stem, [Anti-Alias downsampling](https://paperswithcode.com/method/anti-alias-downsampling), ...
Given the following machine learning model name: MixText, provide a description of the model
**MixText** is a semi-supervised learning method for text classification, which uses a new data augmentation method called TMix. TMix creates a large amount of augmented training samples by interpolating text in hidden space. The technique leverages advances in data augmentation to guess low-entropy labels for unlabele...
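The TMix interpolation step can be sketched as mixup applied to hidden representations (a minimal illustration; in MixText the mixing happens at an intermediate encoder layer, and `alpha` is a hyperparameter of the Beta distribution):

```python
import numpy as np

def tmix(h_a, h_b, y_a, y_b, alpha=0.75, rng=None):
    """TMix-style interpolation: mix two hidden representations and their
    label distributions with a Beta-sampled coefficient, as in mixup but in
    the model's hidden space rather than on raw inputs."""
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1 - lam)         # keep the mix closer to the first example
    h = lam * h_a + (1 - lam) * h_b
    y = lam * y_a + (1 - lam) * y_b
    return h, y

h, y = tmix(np.ones(4), np.zeros(4), np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```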
Given the following machine learning model name: Criss-Cross Network, provide a description of the model
**Criss-Cross Network** (**CCNet**) aims to obtain full-image contextual information in an effective and efficient way. Concretely, for each pixel, a novel criss-cross attention module harvests the contextual information of all the pixels on its criss-cross path. By taking a further recurrent operation, each pixel can...
Given the following machine learning model name: Generative Adversarial Imitation Learning, provide a description of the model
**Generative Adversarial Imitation Learning** is a general framework for directly extracting a policy from data, as if the policy were obtained by reinforcement learning following inverse reinforcement learning.
Given the following machine learning model name: Adaptive Smooth Optimizer, provide a description of the model
**AdaSmooth** is a stochastic optimization technique that provides per-dimension learning rates for [SGD](https://paperswithcode.com/method/sgd). It is an extension of [Adagrad](https://paperswithcode.com/method/adagrad) and [AdaDelta](https://paperswithcode.com/method/adadelta) that seeks to reduce its aggressi...
Given the following machine learning model name: Random Scaling, provide a description of the model
**Random Scaling** is a type of image data augmentation where we randomly change the scale of the image within a specified range.
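A minimal NumPy sketch of the idea (nearest-neighbor resizing for brevity; real augmentation pipelines typically use bilinear interpolation and a library resize):

```python
import numpy as np

def random_scale(img, scale_range=(0.8, 1.2), rng=None):
    """Rescale an image by a factor drawn uniformly from `scale_range`,
    using nearest-neighbor index selection as the resize."""
    if rng is None:
        rng = np.random.default_rng(0)
    s = rng.uniform(*scale_range)
    h, w = img.shape[:2]
    nh, nw = max(1, int(round(h * s))), max(1, int(round(w * s)))
    ri = (np.arange(nh) * h / nh).astype(int)   # source row for each output row
    ci = (np.arange(nw) * w / nw).astype(int)   # source column for each output column
    return img[ri[:, None], ci]

img = np.arange(100.0).reshape(10, 10)
out = random_scale(img)   # output side lengths land within 80-120% of 10
```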
Given the following machine learning model name: Temporal Adaptive Module, provide a description of the model
TAM is designed to capture complex temporal relationships both efficiently and flexibly. It adopts an adaptive kernel instead of self-attention to capture global contextual information, with lower time complexity than GLTR. TAM has two branches: a local branch and a global branch. Given the input feature map $...
Given the following machine learning model name: Image Scale Augmentation, provide a description of the model
Image Scale Augmentation is an augmentation technique where we randomly pick the short side of an image within a dimension range. One use case of this augmentation technique is in object detection tasks.
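The size computation can be sketched as follows (the range 640-800 is an illustrative choice, not a value from the text; the aspect ratio is preserved):

```python
import numpy as np

def random_short_side(h, w, short_range=(640, 800), rng=None):
    """Pick a target short side uniformly from `short_range` and return the
    new (height, width) that preserves the aspect ratio."""
    if rng is None:
        rng = np.random.default_rng(0)
    target = int(rng.integers(short_range[0], short_range[1] + 1))
    scale = target / min(h, w)
    return int(round(h * scale)), int(round(w * scale))

# Usage: a 480x640 image gets its short side (480) rescaled into [640, 800].
nh, nw = random_short_side(480, 640)
```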
Given the following machine learning model name: Padé Activation Units, provide a description of the model
A parametrized, learnable activation function based on the Padé approximant.
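A NumPy sketch of the "safe" PAU variant, a rational function $P(x)/Q(x)$ whose denominator is kept positive via an absolute value so the activation has no poles (`a` and `b` stand in for the learnable coefficients):

```python
import numpy as np

def pau(x, a, b):
    """Safe Padé Activation Unit: P(x) / (1 + |b1*x + ... + bn*x^n|)."""
    num = sum(aj * x ** j for j, aj in enumerate(a))                  # P(x)
    den = 1.0 + np.abs(sum(bk * x ** (k + 1) for k, bk in enumerate(b)))
    return num / den

x = np.linspace(-3.0, 3.0, 7)
y = pau(x, [0.0, 1.0], [])   # with these coefficients the unit is the identity
```

In practice the coefficients are trained jointly with the network, letting each unit approximate (and then refine) standard activations such as ReLU or Leaky ReLU.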
Given the following machine learning model name: Heatmap, provide a description of the model
Given the following machine learning model name: Continuously Differentiable Exponential Linear Units, provide a description of the model
Exponential Linear Units (ELUs) are a useful rectifier for constructing deep learning architectures, as they may speed up and otherwise improve learning by virtue of not having vanishing gradients and by having mean activations near zero. However, the ELU activation as parametrized in [1] is not continuously differentiab...
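CELU reparametrizes the negative branch so the derivative is continuous at zero for any $\alpha > 0$; a minimal NumPy version:

```python
import numpy as np

def celu(x, alpha=1.0):
    """CELU(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1)).
    The left derivative at 0 is exp(0) = 1, matching the right derivative,
    so the function is continuously differentiable for any alpha > 0."""
    return np.maximum(0.0, x) + np.minimum(0.0, alpha * (np.exp(x / alpha) - 1.0))

y = celu(np.array([-1.0, 0.0, 2.0]))   # negative inputs saturate toward -alpha
```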
Given the following machine learning model name: Channel Attention Module, provide a description of the model
A **Channel Attention Module** is a module for channel-based attention in convolutional neural networks. We produce a channel attention map by exploiting the inter-channel relationship of features. As each channel of a feature map is considered as a feature detector, channel attention focuses on ‘what’ is meaningful gi...
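A NumPy sketch of the module as used in CBAM (weight shapes and the reduction ratio of 4 are illustrative): spatial information is squeezed with both average- and max-pooling, each descriptor passes through a shared two-layer MLP, and the summed result is squashed into per-channel weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w1, w2):
    """Channel attention over a (C, H, W) feature map: shared MLP over
    average- and max-pooled channel descriptors, sigmoid-gated rescaling."""
    c = feat.shape[0]
    avg = feat.reshape(c, -1).mean(axis=1)           # (C,) average-pooled descriptor
    mx = feat.reshape(c, -1).max(axis=1)             # (C,) max-pooled descriptor
    mlp = lambda v: w2 @ np.maximum(0.0, w1 @ v)     # shared bottleneck MLP with ReLU
    weights = sigmoid(mlp(avg) + mlp(mx))            # (C,) attention weights in (0, 1)
    return feat * weights[:, None, None]             # rescale each channel

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))                              # (C, H, W)
w1, w2 = rng.standard_normal((2, 8)), rng.standard_normal((8, 2))  # reduction ratio 4
out = channel_attention(feat, w1, w2)
```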
Given the following machine learning model name: Boundary-Aware Segmentation Network, provide a description of the model
**BASNet**, or **Boundary-Aware Segmentation Network**, is an image segmentation architecture that combines a predict-refine architecture with a hybrid loss for highly accurate image segmentation. The predict-refine architecture consists...