Given the following machine learning model name: Polynomial, provide a description of the model
Given the following machine learning model name: Unigram Segmentation, provide a description of the model
**Unigram Segmentation** is a subword segmentation algorithm based on a unigram language model. It provides multiple segmentations with probabilities. The language model allows for emulating the noise generated during the segmentation of actual data. The unigram language model makes an assumption that each subword o...
Given the following machine learning model name: Fast Focal Detection Network, provide a description of the model
**F2DNet** (Fast Focal Detection Network) is a novel two-stage object detection architecture that eliminates the redundancy of classical two-stage detectors by replacing the region proposal network with a focal detection network and the bounding box head with a fast suppression head.
Given the following machine learning model name: SwiGLU, provide a description of the model
**SwiGLU** is an activation function which is a variant of [GLU](https://paperswithcode.com/method/glu). The definition is as follows: $$ \text{SwiGLU}\left(x, W, V, b, c, \beta\right) = \text{Swish}\_{\beta}\left(xW + b\right) \otimes \left(xV + c\right) $$
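The definition above translates directly to code. A minimal NumPy sketch, where the input shapes, zero biases, and the Swish implementation are illustrative assumptions rather than details from the source:

```python
import numpy as np

def swish(x, beta=1.0):
    # Swish_beta(x) = x * sigmoid(beta * x)
    return x / (1.0 + np.exp(-beta * x))

def swiglu(x, W, V, b, c, beta=1.0):
    # SwiGLU(x, W, V, b, c, beta) = Swish_beta(xW + b) ⊗ (xV + c),
    # where ⊗ is the elementwise product
    return swish(x @ W + b, beta) * (x @ V + c)

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4))    # batch of 2, feature dim 4 (illustrative)
W = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
b, c = np.zeros(8), np.zeros(8)
out = swiglu(x, W, V, b, c)        # shape (2, 8)
```

As in other GLU variants, one linear projection is passed through the nonlinearity and used to gate the other.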
Given the following machine learning model name: Channel & Spatial attention, provide a description of the model
Channel & spatial attention combines the advantages of channel attention and spatial attention. It adaptively selects both important objects and regions.
Given the following machine learning model name: R-CNN, provide a description of the model
**R-CNN**, or **Regions with CNN Features**, is an object detection model that applies high-capacity CNNs to bottom-up region proposals in order to localize and segment objects. It uses [selective search](https://paperswithcode.com/method/selective-search) to identify a number of bounding-box object region candidates (“re...
Given the following machine learning model name: Keypoint Pose Encoding, provide a description of the model
Given the following machine learning model name: Skip-gram Word2Vec, provide a description of the model
**Skip-gram Word2Vec** is an architecture for computing word embeddings. Instead of using surrounding words to predict the center word, as with CBOW Word2Vec, Skip-gram Word2Vec uses the central word to predict the surrounding words. The skip-gram objective function sums the log probabilities of the surrounding $n$ ...
Given the following machine learning model name: Unbiased Online Recurrent Optimization, provide a description of the model
Given the following machine learning model name: Transformer-XL, provide a description of the model
**Transformer-XL** (meaning extra long) is a [Transformer](https://paperswithcode.com/method/transformer) architecture that introduces the notion of recurrence to the deep self-attention network. Instead of computing the hidden states from scratch for each new segment, Transformer-XL reuses the hidden states obtained i...
Given the following machine learning model name: Submanifold Convolution, provide a description of the model
**Submanifold Convolution (SC)** is a spatially sparse [convolution](https://paperswithcode.com/method/convolution) operation used for tasks with sparse data like semantic segmentation of 3D point clouds. An SC convolution computes the set of active sites in the same way as a regular convolution: it looks for the prese...
Given the following machine learning model name: Group Normalization, provide a description of the model
**Group Normalization** is a normalization layer that divides channels into groups and normalizes the features within each group. GN does not exploit the batch dimension, and its computation is independent of batch sizes. In the case where the group size is 1, it is equivalent to [Instance Normalization](https://papers...
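The grouping-then-normalizing computation can be sketched in a few lines of NumPy. The NCHW layout, affine parameters, and epsilon value here are illustrative assumptions:

```python
import numpy as np

def group_norm(x, num_groups, gamma=None, beta=None, eps=1e-5):
    # x: (N, C, H, W). Channels are split into groups and normalized
    # within each group; no batch dimension is used in the statistics.
    N, C, H, W = x.shape
    g = x.reshape(N, num_groups, C // num_groups, H, W)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    g = (g - mean) / np.sqrt(var + eps)
    out = g.reshape(N, C, H, W)
    if gamma is not None and beta is not None:
        # optional per-channel affine transform
        out = out * gamma.reshape(1, C, 1, 1) + beta.reshape(1, C, 1, 1)
    return out

x = np.random.default_rng(0).standard_normal((2, 6, 4, 4))
y = group_norm(x, num_groups=3)   # each group of 2 channels is normalized
```

Setting `num_groups=C` recovers Instance Normalization, and `num_groups=1` recovers Layer Normalization, matching the description above.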
Given the following machine learning model name: Deep Graph Infomax, provide a description of the model
**Deep Graph Infomax** (DGI) is a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs—both derived using established graph convolutional network arc...
Given the following machine learning model name: Ghost Module, provide a description of the model
A **Ghost Module** is an image block for convolutional neural networks that aims to generate more features by using fewer parameters. Specifically, an ordinary convolutional layer in deep neural networks is split into two parts. The first part involves ordinary convolutions but their total number is controlled. Given th...
Given the following machine learning model name: GreedyNAS, provide a description of the model
**GreedyNAS** is a one-shot [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method. Previous methods held the assumption that a supernet should give a reasonable ranking over all paths. They thus treat all paths equally, and spare much effort to train paths. However, it is har...
Given the following machine learning model name: Gated Graph Sequence Neural Networks, provide a description of the model
Gated Graph Sequence Neural Networks (GGS-NNs) are a graph-based neural network model. GGS-NNs modify Graph Neural Networks (Scarselli et al., 2009) to use gated recurrent units and modern optimization techniques, and then extend them to output sequences. Source: [Li et al.](https://arxiv.org/pdf/1511.05493v4.pdf)
Given the following machine learning model name: Tacotron, provide a description of the model
**Tacotron** is an end-to-end generative text-to-speech model that takes a character sequence as input and outputs the corresponding spectrogram. The backbone of Tacotron is a seq2seq model with attention. The Figure depicts the model, which includes an encoder, an attention-based decoder, and a post-processing net. At...
Given the following machine learning model name: Minibatch Discrimination, provide a description of the model
**Minibatch Discrimination** is a discriminative technique for generative adversarial networks where we discriminate between whole minibatches of samples rather than between individual samples. This is intended to avoid collapse of the generator.
Given the following machine learning model name: Multi-Head Attention, provide a description of the model
**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequ...
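A minimal NumPy sketch of the parallel-heads computation described above: the queries, keys, and values are projected, split into heads, attended over independently, then concatenated and linearly transformed. The dimensions, random weights, and absence of masking are simplifying assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, num_heads):
    # X: (seq_len, d_model); each W*: (d_model, d_model)
    L, d_model = X.shape
    d_head = d_model // num_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv

    def split(M):  # (L, d_model) -> (num_heads, L, d_head)
        return M.reshape(L, num_heads, d_head).transpose(1, 0, 2)

    Q, K, V = split(Q), split(K), split(V)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)  # (h, L, L)
    heads = softmax(scores) @ V                          # h independent heads
    concat = heads.transpose(1, 0, 2).reshape(L, d_model)  # concatenate
    return concat @ Wo                                   # final linear projection

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
Wq, Wk, Wv, Wo = (rng.standard_normal((8, 8)) * 0.1 for _ in range(4))
out = multi_head_attention(X, Wq, Wk, Wv, Wo, num_heads=2)  # shape (5, 8)
```

Each head sees a `d_head`-dimensional slice of the projections, which is what lets different heads attend to different aspects of the sequence.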
Given the following machine learning model name: Targeted Dropout, provide a description of the model
**Targeted Dropout** is a regularization strategy for making neural networks robust to post-hoc pruning: dropout is applied preferentially to the weights or units deemed least important (e.g. those with the smallest magnitude), so that the trained network tolerates their subsequent removal.
Given the following machine learning model name: Path Length Regularization, provide a description of the model
**Path Length Regularization** is a type of regularization for [generative adversarial networks](https://paperswithcode.com/methods/category/generative-adversarial-networks) that encourages good conditioning in the mapping from latent codes to images. The idea is to encourage that a fixed-size step in the latent space ...
Given the following machine learning model name: Triplet Attention, provide a description of the model
Triplet attention comprises three branches, each responsible for capturing cross-dimensional interaction between the spatial dimensions and the channel dimension of the input. Given an input tensor with shape (C × H × W), each branch is responsible for aggregating cross-dimensional interactive features between either the spatial dimens...
Given the following machine learning model name: Deep-MAC, provide a description of the model
**Deep-MAC**, or **Deep Mask-heads Above CenterNet**, is a type of anchor-free instance segmentation model based on [CenterNet](https://paperswithcode.com/method/centernet). The motivation for this new architecture is that boxes are much cheaper to annotate than masks, so the authors address the “partially supervised”...
Given the following machine learning model name: Funnel Transformer, provide a description of the model
**Funnel Transformer** is a type of [Transformer](https://paperswithcode.com/methods/category/transformers) that gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. By re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, the ...
Given the following machine learning model name: Feedback Alignment, provide a description of the model
Given the following machine learning model name: Deactivable Skip Connection, provide a description of the model
A **Deactivable Skip Connection** is a type of skip connection which, instead of concatenating the encoder features (red) and decoder features (blue), as with [standard skip connections](https://paperswithcode.com/methods/category/skip-connections), it instead fuses the encoder features with part of the decoder featur...
Given the following machine learning model name: VQSVD, provide a description of the model
**Variational Quantum Singular Value Decomposition (VQSVD)** is a variational quantum algorithm for singular value decomposition. By exploiting the variational principles for singular values and the Ky Fan Theorem, a novel loss function is designed such that two quantum neural networks (or parameterized quantum circuit...
Given the following machine learning model name: Position-Wise Feed-Forward Layer, provide a description of the model
**Position-Wise Feed-Forward Layer** is a type of [feedforward layer](https://www.paperswithcode.com/method/category/feedforwad-networks) consisting of two [dense layers](https://www.paperswithcode.com/method/dense-connections) that applies to the last dimension, which means the same dense layers are used for each posi...
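The key property is that the same two dense layers are applied independently at every sequence position. A small NumPy sketch, where the shapes and the ReLU activation are illustrative assumptions:

```python
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    # FFN(x) = max(0, x W1 + b1) W2 + b2, applied along the last dimension,
    # so every position shares the same weights.
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
seq_len, d_model, d_ff = 5, 8, 32
x = rng.standard_normal((seq_len, d_model))
W1, b1 = rng.standard_normal((d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.standard_normal((d_ff, d_model)), np.zeros(d_model)
y = position_wise_ffn(x, W1, b1, W2, b2)   # shape (seq_len, d_model)
```

Because positions do not interact here, applying the layer to a single position in isolation gives the same result as applying it to the whole sequence.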
Given the following machine learning model name: Progressively Growing GAN, provide a description of the model
**ProGAN**, or **Progressively Growing GAN**, is a generative adversarial network that utilises a progressively growing training approach. The idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses...
Given the following machine learning model name: Height-driven Attention Network, provide a description of the model
**Height-driven Attention Network**, or **HANet**, is a general add-on module for improving semantic segmentation for urban-scene images. It emphasizes informative features or classes selectively according to the vertical position of a pixel. The pixel-wise class distributions are significantly different from each othe...
Given the following machine learning model name: Neural Probabilistic Language Model, provide a description of the model
A **Neural Probabilistic Language Model** is an early language modelling architecture. It involves a feedforward architecture that takes in input vector representations (i.e. word embeddings) of the previous $n$ words, which are looked up in a table $C$. The word embeddings are concatenated and fed into a hidden laye...
Given the following machine learning model name: Multiscale Vision Transformer, provide a description of the model
**Multiscale Vision Transformer**, or **MViT**, is a [transformer](https://paperswithcode.com/method/transformer) architecture for modeling visual data such as images and videos. Unlike conventional transformers, which maintain a constant channel capacity and resolution throughout the network, Multiscale Transformers h...
Given the following machine learning model name: FiLM Module, provide a description of the model
The **Feature-wise linear modulation** (**FiLM**) module combines information from both noisy waveform and input mel-spectrogram. It is used in the [WaveGrad](https://paperswithcode.com/method/wavegrad) model. The authors also added iteration index $n$ which indicates the noise level of the input waveform by using the ...
Given the following machine learning model name: FastGCN, provide a description of the model
FastGCN is a fast improvement of the GCN model recently proposed by Kipf & Welling (2016a) for learning graph embeddings. It generalizes transductive training to an inductive manner and also addresses the memory bottleneck issue of GCN caused by recursive expansion of neighborhoods. The crucial ingredient is a sampling...
Given the following machine learning model name: Global and Sliding Window Attention, provide a description of the model
**Global and Sliding Window Attention** is an attention pattern for attention-based models. It is motivated by the fact that non-sparse attention in the original [Transformer](https://paperswithcode.com/method/transformer) formulation has a [self-attention component](https://paperswithcode.com/method/scaled) with $O\le...
Given the following machine learning model name: CodeGen, provide a description of the model
**CodeGen** is a family of autoregressive transformers with next-token prediction language modeling as the learning objective, trained on a natural language corpus and programming language data curated from GitHub.
Given the following machine learning model name: AdaGPR, provide a description of the model
**AdaGPR** is an adaptive, layer-wise graph [convolution](https://paperswithcode.com/method/convolution) model. AdaGPR applies adaptive generalized Pageranks at each layer of a [GCNII](https://paperswithcode.com/method/gcnii) model by learning to predict the coefficients of generalized Pageranks using sparse solvers.
Given the following machine learning model name: Context-aware Visual Attention-based (CoVA) webpage object detection pipeline, provide a description of the model
Context-Aware Visual Attention-based end-to-end pipeline for Webpage Object Detection (_CoVA_) aims to learn function _f_ to predict labels _y = [$y_1, y_2, ..., y_N$]_ for a webpage containing _N_ elements. The input to CoVA consists of: 1. a screenshot of a webpage, 2. list of bounding boxes _[x, y, w, h]_ of the w...
Given the following machine learning model name: BS-Net, provide a description of the model
**BS-Net** is an architecture for COVID-19 severity prediction based on clinical data from different modalities. The architecture comprises 1) a shared multi-task feature extraction backbone, 2) a lung segmentation branch, 3) an original registration mechanism that acts as a “multi-resolution feature alignment” block o...
Given the following machine learning model name: Review-guided Answer Helpfulness Prediction, provide a description of the model
**Review-guided Answer Helpfulness Prediction** (RAHP) is a textual inference model for identifying helpful answers in e-commerce. It not only considers the interactions between QA pairs, but also investigates the opinion coherence between the answer and crowds' opinions reflected in the reviews, which is another impor...
Given the following machine learning model name: Radial Basis Function, provide a description of the model
Given the following machine learning model name: AggMo, provide a description of the model
**Aggregated Momentum (AggMo)** is a variant of the [classical momentum](https://paperswithcode.com/method/sgd-with-momentum) stochastic optimizer which maintains several velocity vectors with different $\beta$ parameters. AggMo averages the velocity vectors when updating the parameters. It resolves the problem of choo...
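The multiple-velocity update can be sketched on a toy quadratic. The learning rate and objective below are illustrative; the $\beta$ values follow the commonly used exponential family $\beta_i = 1 - 0.1^{i}$:

```python
import numpy as np

def aggmo_step(w, grad, velocities, betas, lr):
    # Maintain one velocity per damping coefficient beta, update each with
    # the same gradient, then step with the AVERAGE of all velocities.
    for i, beta in enumerate(betas):
        velocities[i] = beta * velocities[i] - grad
    return w + lr * np.mean(velocities, axis=0)

# toy quadratic f(w) = 0.5 * ||w||^2, so grad(w) = w
betas = [0.0, 0.9, 0.99]
w = np.array([5.0, -3.0])
velocities = [np.zeros_like(w) for _ in betas]
for _ in range(50):
    w = aggmo_step(w, w, velocities, betas, lr=0.001)
# ||w|| has decreased toward the minimum at the origin
```

The low-$\beta$ velocities damp the oscillations that a single high-$\beta$ velocity would cause, while the high-$\beta$ velocity preserves fast progress along low-curvature directions.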
Given the following machine learning model name: Dilated Bottleneck Block, provide a description of the model
**Dilated Bottleneck Block** is an image model block used in the [DetNet](https://paperswithcode.com/method/detnet) convolutional neural network architecture. It employs a bottleneck structure with dilated convolutions to efficiently enlarge the receptive field.
Given the following machine learning model name: GraphSAGE, provide a description of the model
GraphSAGE is a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Image from: [Inductive Representation Learning on Large Graphs](https://arxiv.org/pdf/1706.02216v4.pdf)
Given the following machine learning model name: DistilBERT, provide a description of the model
**DistilBERT** is a small, fast, cheap and light [Transformer](https://paperswithcode.com/method/transformer) model based on the [BERT](https://paperswithcode.com/method/bert) architecture. Knowledge distillation is performed during the pre-training phase to reduce the size of a BERT model by 40%. To leverage the indu...
Given the following machine learning model name: nnFormer, provide a description of the model
**nnFormer**, or **not-another transFormer**, is a semantic segmentation model with an interleaved architecture based on empirical combination of self-attention and [convolution](https://paperswithcode.com/method/convolution). Firstly, a light-weight convolutional embedding layer is used ahead of [transformer](ht...
Given the following machine learning model name: Counterfactuals Explanations, provide a description of the model
Given the following machine learning model name: Strip Pooling Network, provide a description of the model
Spatial pooling usually operates on a small region which limits its capability to capture long-range dependencies and focus on distant regions. To overcome this, Hou et al. proposed strip pooling, a novel pooling method capable of encoding long-range context in either horizontal or vertical spatial domains. Strip...
Given the following machine learning model name: Wasserstein GAN (Gradient Penalty), provide a description of the model
**Wasserstein GAN + Gradient Penalty**, or **WGAN-GP**, is a generative adversarial network that uses the Wasserstein loss formulation plus a gradient norm penalty to achieve Lipschitz continuity. The original [WGAN](https://paperswithcode.com/method/wgan) uses weight clipping to achieve 1-Lipschitz functions, but t...
Given the following machine learning model name: Generalized additive models, provide a description of the model
Given the following machine learning model name: Varifocal Loss, provide a description of the model
**Varifocal Loss** is a loss function for training a dense object detector to predict the IACS, inspired by [focal loss](https://paperswithcode.com/method/focal-loss). Unlike the focal loss that deals with positives and negatives equally, Varifocal Loss treats them asymmetrically. $$ VFL\left(p, q\right) = −q\left(q...
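The asymmetric treatment can be sketched as below. Since the formula above is cut off, the branch for negatives ($q = 0$) and the default $\alpha, \gamma$ values are assumptions based on the focal-loss-style form, not taken from this text:

```python
import numpy as np

def varifocal_loss(p, q, alpha=0.75, gamma=2.0):
    # p: predicted IoU-aware classification score (IACS), q: target score.
    # Positives (q > 0) are weighted by the target score q itself;
    # negatives (q == 0) are down-weighted by the focal factor alpha * p**gamma.
    p = np.clip(p, 1e-7, 1 - 1e-7)
    bce = -(q * np.log(p) + (1 - q) * np.log(1 - p))
    weight = np.where(q > 0, q, alpha * p ** gamma)
    return weight * bce
```

So, unlike focal loss, easy positives are not down-weighted; only the contribution of negatives is suppressed.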
Given the following machine learning model name: Learnable graph convolutional layer, provide a description of the model
Learnable graph convolutional layer (LGCL) automatically selects a fixed number of neighboring nodes for each feature based on value ranking in order to transform graph data into grid-like structures in 1-D format, thereby enabling the use of regular convolutional operations on generic graphs. Description and image ...
Given the following machine learning model name: SCNet, provide a description of the model
**Sample Consistency Network (SCNet)** is a method for instance segmentation which ensures the IoU distribution of the samples at training time are as close to that at inference time. To this end, only the outputs of the last box stage are used for mask predictions at both training and inference. The Figure shows the I...
Given the following machine learning model name: Euclidean Norm Regularization, provide a description of the model
**Euclidean Norm Regularization** is a regularization step used in [generative adversarial networks](https://paperswithcode.com/methods/category/generative-adversarial-networks), and is typically added to both the generator and discriminator losses: $$ R\_{z} = w\_{r} \cdot ||\Delta{z}||^{2}\_{2} $$ where the sca...
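The formula translates directly to code (function and variable names are illustrative):

```python
import numpy as np

def euclidean_norm_reg(delta_z, w_r):
    # R_z = w_r * ||delta_z||_2^2
    return w_r * np.sum(delta_z ** 2)

euclidean_norm_reg(np.array([3.0, 4.0]), w_r=0.5)  # 0.5 * 25 = 12.5
```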
Given the following machine learning model name: T5, provide a description of the model
**T5**, or **Text-to-Text Transfer Transformer**, is a [Transformer](https://paperswithcode.com/method/transformer) based architecture that uses a text-to-text approach. Every task – including translation, question answering, and classification – is cast as feeding the model text as input and training it to generate so...
Given the following machine learning model name: Rectified Linear Unit N, provide a description of the model
The **Rectified Linear Unit N**, or **ReLUN**, is a modification of the **[ReLU6](https://paperswithcode.com/method/relu6)** activation function that has a trainable parameter $n$. $$\text{ReLUN}(x) = \min(\max(0, x), n)$$
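The activation is a one-liner; a NumPy sketch with a fixed $n$ for illustration (in the original formulation $n$ is trainable):

```python
import numpy as np

def relun(x, n):
    # ReLUN(x) = min(max(0, x), n)
    return np.minimum(np.maximum(0.0, x), n)

relun(np.array([-2.0, 0.5, 3.0, 10.0]), n=6.0)  # [0., 0.5, 3., 6.]
```

With $n = 6$ this reduces exactly to ReLU6.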
Given the following machine learning model name: TuckER with Relation Prediction, provide a description of the model
**TuckER with Relation Prediction** is a TuckER model trained with a relation prediction objective on top of the 1vsAll loss.
Given the following machine learning model name: Hyper-parameter optimization, provide a description of the model
In machine learning, a hyperparameter is a parameter whose value is used to control the learning process, and hyperparameter optimization (HPO) is the problem of choosing a set of optimal hyperparameters for a learning algorithm.
Given the following machine learning model name: Random Search, provide a description of the model
**Random Search** replaces the exhaustive enumeration of all combinations by selecting them randomly. This can be simply applied to the discrete setting described above, but also generalizes to continuous and mixed spaces. It can outperform Grid search, especially when only a small number of hyperparameters affects the...
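A minimal sketch of random search over a mixed discrete/continuous space. The search space and the toy objective below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_config(rng):
    # sample one point from a mixed hyperparameter space
    return {
        "lr": 10 ** rng.uniform(-4, -1),              # continuous, log scale
        "batch_size": int(rng.choice([16, 32, 64])),  # discrete
        "momentum": rng.uniform(0.5, 0.99),           # continuous
    }

def toy_objective(cfg):
    # hypothetical validation loss, lowest near lr=1e-2 and momentum=0.9
    return (np.log10(cfg["lr"]) + 2) ** 2 + (cfg["momentum"] - 0.9) ** 2

best_cfg, best_loss = None, float("inf")
for _ in range(50):
    cfg = sample_config(rng)
    loss = toy_objective(cfg)
    if loss < best_loss:
        best_cfg, best_loss = cfg, loss
```

Unlike grid search, every trial varies every hyperparameter, so the important dimensions (here, `lr`) get 50 distinct values rather than a handful of grid levels.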
Given the following machine learning model name: Spatial CNN with UNet based Encoder-decoder and ConvLSTM, provide a description of the model
Spatial CNN with UNet based Encoder-decoder and ConvLSTM
Given the following machine learning model name: Soft Nearest Neighbor Loss with Annealing Temperature, provide a description of the model
Given the following machine learning model name: ClipBERT, provide a description of the model
**ClipBERT** is a framework for end-to-end learning for video-and-language tasks, by employing sparse sampling, where only a single or a few sparsely sampled short clips from a video are used at each training step. Two aspects distinguish ClipBERT from previous work. First, in contrast to densely extracting video f...
Given the following machine learning model name: Neighborhood Contrastive Learning, provide a description of the model
Given the following machine learning model name: Barlow Twins, provide a description of the model
**Barlow Twins** is a self-supervised learning method that applies redundancy-reduction — a principle first proposed in neuroscience — to self-supervised learning. The objective function measures the cross-correlation matrix between the embeddings of two identical networks fed with distorted versions of a batch of samp...
Given the following machine learning model name: Symbolic Deep Learning, provide a description of the model
This is a general approach to convert a neural network into an analytic equation. The technique works as follows: 1. Encourage sparse latent representations 2. Apply symbolic regression to approximate the transformations between in/latent/out layers 3. Compose the symbolic expressions. In the [paper](https://ar...
Given the following machine learning model name: OASIS, provide a description of the model
OASIS is a [GAN](https://paperswithcode.com/method/gan)-based model to translate semantic label maps into realistic-looking images. The model builds on preceding work such as [Pix2Pix](https://paperswithcode.com/method/pix2pix) and SPADE. OASIS introduces the following innovations: 1. The method is not dependent o...
Given the following machine learning model name: None, provide a description of the model
Given the following machine learning model name: RESCAL, provide a description of the model
Given the following machine learning model name: Res2Net Block, provide a description of the model
A **Res2Net Block** is an image model block that constructs hierarchical residual-like connections within one single [residual block](https://paperswithcode.com/method/residual-block). It was proposed as part of the [Res2Net](https://paperswithcode.com/method/res2net) CNN architecture. The block represents multi-sc...
Given the following machine learning model name: Multi-Attention Network, provide a description of the model
Given the following machine learning model name: Cosine Normalization, provide a description of the model
Multi-layer neural networks traditionally use dot products between the output vector of previous layer and the incoming weight vector as the input to activation function. The result of dot product is unbounded. To bound dot product and decrease the variance, **Cosine Normalization** uses cosine similarity or centered ...
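The bounded-versus-unbounded contrast is easy to see in a small sketch (the vectors and epsilon are chosen for illustration):

```python
import numpy as np

def cosine_norm(w, x, eps=1e-8):
    # Replace the unbounded dot product w·x with cosine similarity,
    # which always lies in [-1, 1].
    return np.dot(w, x) / (np.linalg.norm(w) * np.linalg.norm(x) + eps)

w = np.array([100.0, 0.0])
x = np.array([3.0, 4.0])
dot = np.dot(w, x)       # 300.0 — grows with the weight magnitude
cos = cosine_norm(w, x)  # 0.6 — bounded regardless of magnitudes
```

Scaling `w` by any positive constant changes `dot` proportionally but leaves `cos` unchanged, which is the variance-reduction property the method relies on.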
Given the following machine learning model name: Spatially-Adaptive Normalization, provide a description of the model
**SPADE**, or **Spatially-Adaptive Normalization**, is a conditional normalization method for semantic image synthesis. Similar to [Batch Normalization](https://www.paperswithcode.com/method/batch-normalization), the activation is normalized in a channel-wise manner and then modulated with learned scale and bias. In t...
Given the following machine learning model name: RoIAlign, provide a description of the model
**Region of Interest Align**, or **RoIAlign**, is an operation for extracting a small feature map from each RoI in detection and segmentation based tasks. It removes the harsh quantization of [RoI Pool](https://paperswithcode.com/method/roi-pooling), properly *aligning* the extracted features with the input. To avoid a...
Given the following machine learning model name: Robust Predictable Control, provide a description of the model
**Robust Predictable Control**, or **RPC**, is an RL algorithm for learning policies that uses only a few bits of information. RPC brings together ideas from information bottlenecks, model-based RL, and bits-back coding. The main idea of RPC is that if the agent can accurately predict the future, then the agent will no...
Given the following machine learning model name: MDTVSFA, provide a description of the model
Given the following machine learning model name: MoGA-C, provide a description of the model
**MoGA-C** is a convolutional neural network optimized for mobile latency and discovered via Mobile GPU-Aware (MoGA) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search). The basic building blocks are MBConvs (inverted residual blocks) from [MobileNetV2](https://paperswithcode.com/me...
Given the following machine learning model name: Graph InfoClust, provide a description of the model
Given the following machine learning model name: Poly-CAM, provide a description of the model
Given the following machine learning model name: Attention Model, provide a description of the model
Given the following machine learning model name: Seq2Edits, provide a description of the model
**Seq2Edits** is an open-vocabulary approach to sequence editing for natural language processing (NLP) tasks with a high degree of overlap between input and output texts. In this approach, each sequence-to-sequence transduction is represented as a sequence of edit operations, where each operation either replaces an ent...
Given the following machine learning model name: EMQAP, provide a description of the model
**EMQAP**, or **E-Manual Question Answering Pipeline**, is an approach for answering questions pertaining to electronics devices. Built upon the pretrained [RoBERTa](https://paperswithcode.com/method/roberta), it harbors a supervised multi-task learning framework which efficiently performs the dual tasks of identifying...
Given the following machine learning model name: Concurrent Spatial and Channel Squeeze & Excitation (scSE), provide a description of the model
**scSE** combines the channel attention of the widely known [spatial squeeze and channel excitation (SE)](https://paperswithcode.com/method/squeeze-and-excitation-block) block and the spatial attention of the [channel squeeze and spatial excitation (sSE)](https://paperswithcode.com/method/channel-squeeze-and-spatial-excitation#...
Given the following machine learning model name: ARMA GNN, provide a description of the model
The ARMA GNN layer implements a rational graph filter with a recursive approximation.
Given the following machine learning model name: Bilateral Guided Aggregation Layer, provide a description of the model
**Bilateral Guided Aggregation Layer** is a feature fusion layer for semantic segmentation that aims to enhance mutual connections and fuse different types of feature representation. It was used in the [BiSeNet V2](https://paperswithcode.com/method/bisenet-v2) architecture. Specifically, within the BiSeNet implementati...
Given the following machine learning model name: Learning Cross-Modality Encoder Representations from Transformers, provide a description of the model
LXMERT is a model for learning vision-and-language cross-modality representations. It consists of a Transformer model with three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. The model takes two inputs: an image and its related sentence. The images are represented as a ...
Given the following machine learning model name: FastPitch, provide a description of the model
**FastPitch** is a fully-parallel text-to-speech model based on FastSpeech, conditioned on fundamental frequency contours. The architecture of FastPitch is shown in the Figure. It is based on FastSpeech and composed mainly of two feed-forward [Transformer](https://paperswithcode.com/method/transformer) (FFTr) stacks. T...
Given the following machine learning model name: DouZero, provide a description of the model
**DouZero** is an AI system for the card game DouDizhu that enhances traditional Monte-Carlo methods with deep neural networks, action encoding, and parallel actors. The [Q-network](https://paperswithcode.com/method/dqn) of DouZero consists of an [LSTM](https://paperswithcode.com/method/lstm) to encode historical actio...
Given the following machine learning model name: Local Importance-based Pooling, provide a description of the model
**Local Importance-based Pooling (LIP)** is a pooling layer that can enhance discriminative features during the downsampling procedure by learning adaptive importance weights based on inputs. By using a learnable network $G$ in $F$, the importance function is no longer limited to hand-crafted forms and is able to learn the ...
Given the following machine learning model name: Primal Wasserstein Imitation Learning, provide a description of the model
**Primal Wasserstein Imitation Learning**, or **PWIL**, is a method for imitation learning which ties to the primal form of the Wasserstein distance between the expert and the agent state-action distributions. The reward function is derived offline, as opposed to recent adversarial IL algorithms that learn a reward fun...
Given the following machine learning model name: DAMO-YOLO, provide a description of the model
Given the following machine learning model name: ReInfoSelect, provide a description of the model
**ReInfoSelect** is a reinforcement weak supervision selection method for information retrieval. It learns to select anchor-document pairs that best weakly supervise the neural ranker (action), using the ranking performance on a handful of relevance labels as the reward. Iteratively, for a batch of anchor-document pair...
Given the following machine learning model name: TD-VAE, provide a description of the model
**TD-VAE**, or **Temporal Difference VAE**, is a generative sequence model that learns representations containing explicit beliefs about states several steps into the future, and that can be rolled out directly without single-step transitions. TD-VAE is trained on pairs of temporally separated time points, using an ana...
Given the following machine learning model name: Balanced Feature Pyramid, provide a description of the model
**Balanced Feature Pyramid** is a feature pyramid module. It differs from approaches like [FPNs](https://paperswithcode.com/method/fpn) that integrate multi-level features using lateral connections. Instead the BFP strengthens the multi-level features using the same deeply integrated balanced semantic features. The pip...
Given the following machine learning model name: Skim and Intensive Reading Model, provide a description of the model
**Skim and Intensive Reading Model**, or **SIRM**, is a deep neural network for figuring out implied textual meaning. It consists of two main components, namely the skim reading component and intensive reading component. N-gram features are quickly extracted from the skim reading component, which is a combination of se...
Given the following machine learning model name: SpineNet, provide a description of the model
**SpineNet** is a convolutional neural network backbone with scale-permuted intermediate features and cross-scale connections that is learned on an object detection task by [Neural Architecture Search](https://paperswithcode.com/method/neural-architecture-search).
Given the following machine learning model name: Bottleneck Transformer Block, provide a description of the model
A **Bottleneck Transformer Block** is a block used in [Bottleneck Transformers](https://www.paperswithcode.com/method/bottleneck-transformer) that replaces the spatial 3 × 3 [convolution](https://paperswithcode.com/method/convolution) layer in a [Residual Block](https://paperswithcode.com/method/residual-block) with Mu...
Given the following machine learning model name: Contextual Word Vectors, provide a description of the model
**CoVe**, or **Contextualized Word Vectors**, uses a deep [LSTM](https://paperswithcode.com/method/lstm) encoder from an attentional sequence-to-sequence model trained for machine translation to contextualize word vectors. $\text{CoVe}$ word embeddings are therefore a function of the entire input sequence. These word e...
Given the following machine learning model name: RegionViT, provide a description of the model
**RegionViT** consists of two tokenization processes that convert an image into regional (upper path) and local tokens (lower path). Each tokenization is a convolution with different patch sizes, the patch size of regional tokens is $28^2$ while $4^2$ is used for local tokens with dimensions projected to $C$, which mea...
Given the following machine learning model name: Visual-Spatial-Graph Network, provide a description of the model
**Visual-Spatial-Graph Network** (VSGNet) is a network for human-object interaction detection. It extracts visual features from the image representing the human-object pair, refines the features with spatial configurations of the pair, and utilizes the structural connections between the pair via graph convolutions.