Given the following machine learning model name: Fastformer, provide a description of the model
**Fastformer** is a type of [Transformer](https://paperswithcode.com/method/transformer) which uses [additive attention](https://www.paperswithcode.com/method/additive-attention) as a building block. Instead of modeling the pair-wise interactions between tokens, [additive attention](https://paperswithcode.com/method/a...
Given the following machine learning model name: Thinned U-shape Module, provide a description of the model
**Thinned U-shape Module**, or **TUM**, is a feature extraction block used for object detection models. It was introduced as part of the [M2Det](https://paperswithcode.com/method/m2det) architecture. Different from [FPN](https://paperswithcode.com/method/fpn) and [RetinaNet](https://paperswithcode.com/method/retinanet)...
Given the following machine learning model name: Knowledge Enhanced Masked Language Model, provide a description of the model
Given the following machine learning model name: VEGA, provide a description of the model
**VEGA** is an AutoML framework that is compatible and optimized for multiple hardware platforms. It integrates various modules of AutoML, including [Neural Architecture Search](https://paperswithcode.com/method/neural-architecture-search) (NAS), Hyperparameter Optimization (HPO), Auto Data Augmentation, Model Compress...
Given the following machine learning model name: Detection Transformer, provide a description of the model
**DETR**, or **Detection Transformer**, is a set-based object detector using a [Transformer](https://paperswithcode.com/method/transformer) on top of a convolutional backbone. It uses a conventional CNN backbone to learn a 2D representation of an input image. The model flattens it and supplements it with a positional e...
Given the following machine learning model name: Early Dropout, provide a description of the model
Introduced by Hinton et al. in 2012, dropout has stood the test of time as a regularizer for preventing overfitting in neural networks. In this study, we demonstrate that dropout can also mitigate underfitting when used at the start of training. During the early phase, we find dropout reduces the directional variance o...
Given the following machine learning model name: Inception v2, provide a description of the model
**Inception v2** is the second generation of Inception convolutional neural network architectures which notably uses [batch normalization](https://paperswithcode.com/method/batch-normalization). Other changes include dropping [dropout](https://paperswithcode.com/method/dropout) and removing [local response normalizatio...
Given the following machine learning model name: Temporal attention, provide a description of the model
Temporal attention can be seen as a dynamic time selection mechanism determining when to pay attention, and is thus usually used for video processing.
Given the following machine learning model name: Neural Cache, provide a description of the model
A **Neural Cache**, or a **Continuous Cache**, is a module for language modelling which stores previous hidden states in memory cells. They are then used as keys to retrieve their corresponding word, that is the next word. There is no transformation applied to the storage during writing and reading. More formally it...
Given the following machine learning model name: MagFace, provide a description of the model
**MagFace** is a category of losses for face recognition that learn a universal feature embedding whose magnitude can measure the quality of a given face. Under the new loss, it can be proven that the magnitude of the feature embedding monotonically increases if the subject is more likely to be recognized. In addition,...
Given the following machine learning model name: ParaNet Convolution Block, provide a description of the model
A **ParaNet Convolution Block** is a convolutional block that appears in the encoder and decoder of the [ParaNet](https://paperswithcode.com/method/paranet) text-to-speech architecture. It consists of a 1-D [convolution](https://paperswithcode.com/method/convolution) with a gated linear unit ([GLU](https://paperswithco...
Given the following machine learning model name: ProxylessNAS, provide a description of the model
**ProxylessNAS** directly learns neural network architectures on the target task and target hardware without any proxy task. Additional contributions include: - Using a new path-level pruning perspective for [neural architecture search](https://paperswithcode.com/method/neural-architecture-search), showing a close c...
Given the following machine learning model name: MoGA-B, provide a description of the model
**MoGA-B** is a convolutional neural network optimized for mobile latency and discovered via Mobile GPU-Aware (MoGA) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search). The basic building blocks are MBConvs (inverted residual blocks) from [MobileNetV2](https://paperswithcode.com/me...
Given the following machine learning model name: Graph Neural Networks with Continual Learning, provide a description of the model
Although significant effort has been applied to fact-checking, the prevalence of fake news over social media, which has profound impact on justice, public trust and our society, remains a serious problem. In this work, we focus on propagation-based fake news detection, as recent studies have demonstrated that fake news...
Given the following machine learning model name: Embedding Dropout, provide a description of the model
**Embedding Dropout** is equivalent to performing [dropout](https://paperswithcode.com/method/dropout) on the embedding matrix at a word level, where the dropout is broadcast across all the word vector’s embedding. The remaining non-dropped-out word embeddings are scaled by $\frac{1}{1-p\_{e}}$ where $p\_{e}$ is the pr...
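As an illustrative sketch (NumPy setting; the function name and row-wise layout are assumptions), embedding dropout zeroes entire word vectors and rescales the survivors by $\frac{1}{1-p_{e}}$:

```python
import numpy as np

def embedding_dropout(emb, p_e=0.1, rng=None):
    # Drop entire word vectors (rows) with probability p_e, then rescale
    # the surviving rows by 1 / (1 - p_e) to preserve the expected value.
    rng = rng or np.random.default_rng()
    mask = (rng.random((emb.shape[0], 1)) >= p_e).astype(emb.dtype)
    return emb * mask / (1.0 - p_e)
```

Because the mask has shape `(vocab, 1)`, the same drop decision is broadcast across every dimension of a word's vector, which is what distinguishes this from element-wise dropout.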
Given the following machine learning model name: Neural Turing Machine, provide a description of the model
A **Neural Turing Machine** is a working memory neural network model. It couples a neural network architecture with external memory resources. The whole architecture is differentiable end-to-end with gradient descent. The models can infer tasks such as copying, sorting and associative recall. A Neural Turing Machine...
Given the following machine learning model name: MT-PET, provide a description of the model
**MT-PET** is a multi-task version of [Pattern Exploiting Training](https://arxiv.org/abs/2001.07676) (PET) for exaggeration detection, which leverages knowledge from complementary cloze-style QA tasks to improve few-shot learning. It defines pairs of complementary pattern-verbalizer pairs for a main task and auxiliary...
Given the following machine learning model name: Kaleido-BERT, provide a description of the model
**Kaleido-BERT** (CVPR 2021) is a pioneering work on pre-trained models (PTMs) for the e-commerce domain. It achieves SOTA performance compared with many models published in the general domain.
Given the following machine learning model name: ShuffleNet V2 Block, provide a description of the model
**ShuffleNet V2 Block** is an image model block used in the [ShuffleNet V2](https://paperswithcode.com/method/shufflenet-v2) architecture, where speed is the metric optimized for (instead of indirect ones like FLOPs). It utilizes a simple operator called channel split. At the beginning of each unit, the input of $c$ fe...
Given the following machine learning model name: Multi-source Sentiment Generative Adversarial Network, provide a description of the model
**Multi-source Sentiment Generative Adversarial Network** is a multi-source domain adaptation (MDA) method for visual sentiment classification. It is composed of three pipelines, i.e., image reconstruction, image translation, and cycle-reconstruction. To handle data from multiple source domains, it learns to find a uni...
Given the following machine learning model name: online deep learning, provide a description of the model
Deep Neural Networks (DNNs) are typically trained by backpropagation in a batch learning setting, which requires the entire training data to be made available prior to the learning task. This is not scalable for many real-world scenarios where new data arrives sequentially in a stream form. We aim to address an open ch...
Given the following machine learning model name: DynaBERT, provide a description of the model
**DynaBERT** is a [BERT](https://paperswithcode.com/method/bert)-variant which can flexibly adjust the size and latency by selecting adaptive width and depth. The training process of DynaBERT includes first training a width-adaptive BERT and then allowing both adaptive width and depth, by distilling knowledge from the ...
Given the following machine learning model name: AdaShift, provide a description of the model
**AdaShift** is a type of adaptive stochastic optimizer that decorrelates $v\_{t}$ and $g\_{t}$ in [Adam](https://paperswithcode.com/method/adam) by temporal shifting, i.e., using temporally shifted gradient $g\_{t-n}$ to calculate $v\_{t}$. The authors argue that an inappropriate correlation between gradient $g\_{t}$ ...
Given the following machine learning model name: Rotary Position Embedding, provide a description of the model
**Rotary Position Embedding**, or **RoPE**, is a type of position embedding which encodes absolute positional information with a rotation matrix and naturally incorporates explicit relative position dependency in the self-attention formulation. Notably, RoPE comes with valuable properties such as the flexibility of being expand ...
Given the following machine learning model name: Inception-ResNet-v2 Reduction-B, provide a description of the model
**Inception-ResNet-v2 Reduction-B** is an image model block used in the [Inception-ResNet-v2](https://paperswithcode.com/method/inception-resnet-v2) architecture.
Given the following machine learning model name: Directed Acyclic Graph Neural Network, provide a description of the model
A GNN for DAGs (directed acyclic graphs), which injects their topological order as an inductive bias via asynchronous message passing.
Given the following machine learning model name: Linear Regression, provide a description of the model
**Linear Regression** is a method for modelling a relationship between a dependent variable and independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the mean square error between the predicted values $\hat{y} = \textbf{X}\hat{\beta}$ and actual v...
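A minimal NumPy sketch of the least-squares fit described above (the data and variable names are illustrative); it minimizes $\lVert y - \mathbf{X}\beta \rVert^{2}$:

```python
import numpy as np

# Synthetic data: intercept column plus one feature.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
true_beta = np.array([2.0, 3.0])
y = X @ true_beta + 0.01 * rng.normal(size=100)

# Least-squares solution of min ||y - X beta||^2.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

`lstsq` solves the normal equations in a numerically stable way; with such small noise, `beta_hat` lands very close to `true_beta`.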
Given the following machine learning model name: An Easier Data Augmentation, provide a description of the model
**AEDA**, or **An Easier Data Augmentation**, is a type of data augmentation technique for text classification which includes only the insertion of various punctuation marks into the input sequence. AEDA preserves all the input information and does not mislead the network since it keeps the word order intact while chan...
Given the following machine learning model name: Reversible Residual Block, provide a description of the model
**Reversible Residual Blocks** are skip-connection blocks that learn *reversible* residual functions with reference to the layer inputs. It is proposed as part of the [RevNet](https://paperswithcode.com/method/revnet) CNN architecture. Units in each layer are partitioned into two groups, denoted $x\_{1}$ and $x\_{2}$; ...
Given the following machine learning model name: BP-Transformer, provide a description of the model
The **BP-Transformer (BPT)** is a type of [Transformer](https://paperswithcode.com/method/transformer) that is motivated by the need to find a better balance between capability and computational complexity for self-attention. The architecture partitions the input sequence into different multi-scale spans via binary par...
Given the following machine learning model name: Coresets, provide a description of the model
Given the following machine learning model name: Global Context Block, provide a description of the model
A **Global Context Block** is an image model block for global context modeling. The aim is to have both the benefits of the simplified [non-local block](https://paperswithcode.com/method/non-local-block) with effective modeling of long-range dependencies, and the [squeeze-excitation block](https://paperswithcode.com/me...
Given the following machine learning model name: Holographic Reduced Representation, provide a description of the model
**Holographic Reduced Representations** are a simple mechanism to represent an associative array of key-value pairs in a fixed-size vector. Each individual key-value pair is the same size as the entire associative array; the array is represented by the sum of the pairs. Concretely, consider a complex vector key $r = (a...
Given the following machine learning model name: Random Synthesized Attention, provide a description of the model
**Random Synthesized Attention** is a form of synthesized attention where the attention weights are not conditioned on any input tokens. Instead, the attention weights are initialized to random values. It was introduced with the [Synthesizer](https://paperswithcode.com/method/synthesizer) architecture. Random Synthesiz...
Given the following machine learning model name: CORAD: Correlation-Aware Compression of Massive Time Series using Sparse Dictionary Coding, provide a description of the model
Given the following machine learning model name: BezierAlign, provide a description of the model
**BezierAlign** is a feature sampling method for arbitrarily-shaped scene text recognition that exploits the parameterization of a compact Bezier curve bounding box. Unlike RoIAlign, the shape of the sampling grid of BezierAlign is not rectangular. Instead, each column of the arbitrarily-shaped grid is orthogonal to th...
Given the following machine learning model name: Highway Network, provide a description of the model
A **Highway Network** is an architecture designed to ease gradient-based training of very deep networks. They allow unimpeded information flow across several layers on "information highways". The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. ...
Given the following machine learning model name: Spatial Attention Module (ThunderNet), provide a description of the model
**Spatial Attention Module (SAM)** is a feature extraction module for object detection used in [ThunderNet](https://paperswithcode.com/method/thundernet). The ThunderNet SAM explicitly re-weights the feature map before RoI warping over the spatial dimensions. The key idea of SAM is to use the knowledge from [RPN](ht...
Given the following machine learning model name: Conditional DBlock, provide a description of the model
**Conditional DBlock** is a residual based block used in the discriminator of the [GAN-TTS](https://paperswithcode.com/method/gan-tts) architecture. They are similar to the [GBlocks](https://paperswithcode.com/method/gblock) used in the generator, but without [batch normalization](https://paperswithcode.com/method/batc...
Given the following machine learning model name: Adversarial Soft Advantage Fitting (ASAF), provide a description of the model
Given the following machine learning model name: ChebNet, provide a description of the model
ChebNet involves a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Description from: [Convolutional Neural Networks on Graphs with Fast Localized Spectral Filter...
Given the following machine learning model name: MacBERT, provide a description of the model
**MacBERT** is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based model for Chinese NLP that alters [RoBERTa](https://paperswithcode.com/method/roberta) in several ways, including a modified masking strategy. Instead of masking with the [MASK] token, which never appears in the fine-tuning stage...
Given the following machine learning model name: Causal Convolution, provide a description of the model
**Causal convolutions** are a type of [convolution](https://paperswithcode.com/method/convolution) used for temporal data which ensures the model cannot violate the ordering in which we model the data: the prediction $p(x_{t+1} | x_{1}, \ldots, x_{t})$ emitted by the model at timestep $t$ cannot depend on any of the fu...
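A minimal NumPy sketch of the idea (the function name is illustrative): left-padding the input by $k-1$ zeros guarantees that output $t$ depends only on inputs up to $t$:

```python
import numpy as np

def causal_conv1d(x, kernel):
    # Left-pad by (k - 1) zeros so output[t] = sum_i kernel[i] * x[t - i],
    # i.e. the prediction at step t never sees future inputs.
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])
    return np.array([padded[t:t + k] @ kernel[::-1] for t in range(len(x))])
```

For example, the kernel `[0, 1]` shifts the signal one step into the past, never the future.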
Given the following machine learning model name: DeeBERT, provide a description of the model
**DeeBERT** is a method for accelerating [BERT](https://paperswithcode.com/method/bert) inference. It inserts extra classification layers (which are referred to as off-ramps) between each [transformer](https://paperswithcode.com/method/transformer) layer of BERT. All transformer layers and off-ramps are jointly fine-tu...
Given the following machine learning model name: AutoDropout, provide a description of the model
**AutoDropout** automates the process of designing [dropout](https://paperswithcode.com/method/dropout) patterns using a [Transformer](https://paperswithcode.com/method/transformer) based controller. In this method, a controller learns to generate a dropout pattern at every channel and layer of a target network, such a...
Given the following machine learning model name: SimpleNet, provide a description of the model
**SimpleNet** is a convolutional neural network with 13 layers. The network employs a homogeneous design utilizing 3 × 3 kernels for convolutional layers and 2 × 2 kernels for pooling operations. The only layers which do not use 3 × 3 kernels are the 11th and 12th layers, which utilize 1 × 1 convolutional kernels. F...
Given the following machine learning model name: HyperGraph Self-Attention, provide a description of the model
An extension of self-attention to hypergraphs. Skeleton-based action recognition aims to recognize human actions given human joint coordinates with skeletal interconnections. By defining a graph with joints as vertices and their natural connections as edges, previous works successfully adopted Graph Convolutional networ...
Given the following machine learning model name: Non Maximum Suppression, provide a description of the model
**Non Maximum Suppression** is a computer vision method that selects a single entity out of many overlapping entities (for example bounding boxes in object detection). The criterion is usually to discard entities that are below a given probability bound. With the remaining entities we repeatedly pick the entity with the hig...
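The greedy procedure can be sketched as follows (a minimal NumPy version; the `(x1, y1, x2, y2)` box format and IoU threshold are illustrative assumptions):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy NMS: repeatedly keep the highest-scoring box and drop
    # remaining boxes whose IoU with it exceeds the threshold.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]
    return keep
```

Two heavily overlapping boxes collapse to the higher-scoring one, while a distant box survives.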
Given the following machine learning model name: Unsupervised Feature Loss, provide a description of the model
**UFLoss**, or **Unsupervised Feature Loss**, is a patch-based unsupervised learned feature loss for deep learning (DL) based reconstructions. The UFLoss provides instance-level discrimination by mapping similar instances to similar low-dimensional feature vectors using a pre-trained mapping network (UFLoss Network). T...
Given the following machine learning model name: TabTransformer, provide a description of the model
**TabTransformer** is a deep tabular data modeling architecture for supervised and semi-supervised learning. The TabTransformer is built upon self-attention based Transformers. The Transformer layers transform the embeddings of categorical features into robust contextual embeddings to achieve higher prediction accuracy...
Given the following machine learning model name: Gait Emotion Recognition, provide a description of the model
We present a novel classifier network called STEP to classify perceived human emotion from gaits, based on a Spatial Temporal Graph Convolutional Network (ST-[GCN](https://paperswithcode.com/method/gcn)) architecture. Given an RGB video of an individual walking, our formulation implicitly exploits the gait features to...
Given the following machine learning model name: Denoising Score Matching, provide a description of the model
Training a denoiser on signals yields a powerful prior over those signals, which can then be used to sample new examples of the signal.
Given the following machine learning model name: Model-Free Episodic Control, provide a description of the model
Non-parametric approximation of Q-values by storing all visited states and doing inference through k-Nearest Neighbors.
Given the following machine learning model name: Revision Network, provide a description of the model
**Revision Network** is a style transfer module that aims to revise the rough stylized image via generating residual details image $r_{c s}$, while the final stylized image is generated by combining $r\_{c s}$ and rough stylized image $\bar{x}\_{c s}$. This procedure ensures that the distribution of global style patter...
Given the following machine learning model name: Local Interpretable Model-Agnostic Explanations, provide a description of the model
**LIME**, or **Local Interpretable Model-Agnostic Explanations**, is an algorithm that can explain the predictions of any classifier or regressor in a faithful way, by approximating it locally with an interpretable model. It modifies a single data sample by tweaking the feature values and observes the resulting impact ...
Given the following machine learning model name: LayerScale, provide a description of the model
**LayerScale** is a method used for [vision transformer](https://paperswithcode.com/methods/category/vision-transformer) architectures to help improve training dynamics. It adds a learnable diagonal matrix on output of each residual block, initialized close to (but not at) 0. Adding this simple layer after each residua...
Given the following machine learning model name: Adaptive Robust Loss, provide a description of the model
The Robust Loss is a generalization of the Cauchy/Lorentzian, Geman-McClure, Welsch/Leclerc, generalized Charbonnier, Charbonnier/pseudo-Huber/L1-L2, and L2 loss functions. By introducing robustness as a continuous parameter, the loss function allows algorithms built around robust loss minimization to be generalized, w...
Given the following machine learning model name: Polynomial Rate Decay, provide a description of the model
**Polynomial Rate Decay** is a learning rate schedule in which the learning rate is decayed from its initial value to an end value following a polynomial function of the training step.
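For illustration, a minimal sketch of one common form of the schedule (the exact parameterization varies by implementation; the names and defaults here are assumptions):

```python
def poly_decay(step, total_steps, base_lr=0.1, end_lr=0.0, power=2.0):
    # Decay from base_lr to end_lr following (1 - step/total_steps)^power.
    frac = min(step, total_steps) / total_steps
    return (base_lr - end_lr) * (1.0 - frac) ** power + end_lr
```

With `power=1.0` this reduces to linear decay; larger powers decay faster early in training.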
Given the following machine learning model name: Unsupervised Deep Manifold Attributed Graph Embedding, provide a description of the model
Unsupervised attributed graph representation learning is challenging since both structural and feature information are required to be represented in the latent space. Existing methods concentrate on learning latent representation via reconstruction tasks, but cannot directly optimize representation and are prone to ove...
Given the following machine learning model name: Dropout, provide a description of the model
**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$). The idea is to prevent co-adaptation, wh...
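A minimal NumPy sketch of the train/test behaviour described above (function names are illustrative):

```python
import numpy as np

def dropout_train(x, p=0.5, rng=None):
    # Training: drop each unit independently with probability p.
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p
    return x * mask

def dropout_test(w, p=0.5):
    # Test time: keep all units but scale weights by p, matching the
    # expected activation seen during training.
    return w * p
```

Note this is the classic formulation from the description; many libraries instead use "inverted" dropout, rescaling by $1/(1-p)$ at training time so test-time weights are untouched.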
Given the following machine learning model name: ASGD Weight-Dropped LSTM, provide a description of the model
**ASGD Weight-Dropped LSTM**, or **AWD-LSTM**, is a type of recurrent neural network that employs [DropConnect](https://paperswithcode.com/method/dropconnect) for regularization, as well as [NT-ASGD](https://paperswithcode.com/method/nt-asgd) for optimization - non-monotonically triggered averaged [SGD](https://papersw...
Given the following machine learning model name: Gumbel Cross Entropy, provide a description of the model
The Gumbel activation function is defined using the cumulative Gumbel distribution and can be used to perform Gumbel regression. Gumbel activation is an alternative to the sigmoid or softmax activation functions and can be used to transform the unnormalised output of a model into a probability. Gumbel ac...
Given the following machine learning model name: Deformable Convolutional Networks, provide a description of the model
Deformable ConvNets do not learn an affine transformation. They divide convolution into two steps, firstly sampling features on a regular grid $ \mathcal{R} $ from the input feature map, then aggregating sampled features by weighted summation using a convolution kernel. The process can be written as: \begin{align} ...
Given the following machine learning model name: UL2, provide a description of the model
**UL2** is a unified framework for pretraining models that are universally effective across datasets and setups. UL2 uses Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated wi...
Given the following machine learning model name: Hard Swish, provide a description of the model
**Hard Swish** is a type of activation function based on [Swish](https://paperswithcode.com/method/swish), but replaces the computationally expensive sigmoid with a piecewise linear analogue: $$\text{h-swish}\left(x\right) = x\frac{\text{ReLU6}\left(x+3\right)}{6} $$
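The formula above translates directly into code; a minimal NumPy sketch:

```python
import numpy as np

def relu6(x):
    # ReLU capped at 6.
    return np.minimum(np.maximum(x, 0.0), 6.0)

def hard_swish(x):
    # h-swish(x) = x * ReLU6(x + 3) / 6
    return x * relu6(x + 3.0) / 6.0
```

The piecewise-linear gate makes the function identical to $x$ for $x \geq 3$ and exactly zero for $x \leq -3$, avoiding the sigmoid evaluation that Swish requires.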
Given the following machine learning model name: Short-Term Dense Concatenate, provide a description of the model
**STDC**, or **Short-Term Dense Concatenate**, is a module for semantic segmentation to extract deep features with scalable receptive field and multi-scale information. It aims to remove structural redundancy in the BiSeNet architecture; specifically, BiSeNet adds an extra path to encode spatial information which can be...
Given the following machine learning model name: SAGA, provide a description of the model
SAGA is a method in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is use...
Given the following machine learning model name: Focus, provide a description of the model
Given the following machine learning model name: Batch Nuclear-norm Maximization, provide a description of the model
**Batch Nuclear-norm Maximization** is an approach for aiding classification in label insufficient situations. It involves maximizing the nuclear-norm of the batch output matrix. The nuclear-norm of a matrix is an upper bound of the Frobenius-norm of the matrix. Maximizing nuclear-norm ensures large Frobenius-norm of t...
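As an illustrative sketch, the nuclear norm of a batch output matrix is simply the sum of its singular values (the function name is an assumption):

```python
import numpy as np

def nuclear_norm(A):
    # Nuclear norm = sum of singular values; it upper-bounds the
    # Frobenius norm since sum(s_i) >= sqrt(sum(s_i^2)).
    return np.linalg.svd(A, compute_uv=False).sum()
```

In the method described above, the negative of this quantity on the batch prediction matrix would be added to the loss, so that maximizing it encourages both confident (large Frobenius norm) and diverse (high rank) predictions.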
Given the following machine learning model name: Low-Rank Factorization-based Multi-Head Attention, provide a description of the model
**Low-Rank Factorization-based Multi-head Attention Mechanism**, or **LAMA**, is a type of attention module that uses low-rank factorization to reduce computational complexity. It uses low-rank bilinear pooling to construct a structured sentence representation that attends to multiple aspects of a sentence.
Given the following machine learning model name: TaxoExpan, provide a description of the model
**TaxoExpan** is a self-supervised taxonomy expansion framework. It automatically generates a set of <query concept, anchor concept> pairs from the existing taxonomy as training data. Using such self-supervision data, TaxoExpan learns a model to predict whether a query concept is the direct hyponym of an anchor concept...
Given the following machine learning model name: Attention-augmented Convolution, provide a description of the model
**Attention-augmented Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) with a two-dimensional relative self-attention mechanism that can replace convolutions as a stand-alone computational primitive for image classification. It employs [scaled-dot product attention](https://papers...
Given the following machine learning model name: OFA, provide a description of the model
In this work, we pursue a unified paradigm for multimodal pretraining to break the scaffolds of complex task/modality-specific customization. We propose OFA, a Task-Agnostic and Modality-Agnostic framework that supports Task Comprehensiveness. OFA unifies a diverse set of cross-modal and unimodal tasks, including image...
Given the following machine learning model name: Fire Module, provide a description of the model
A **Fire Module** is a building block for convolutional neural networks, notably used as part of [SqueezeNet](https://paperswithcode.com/method/squeezenet). A Fire module is comprised of: a squeeze [convolution](https://paperswithcode.com/method/convolution) layer (which has only 1x1 filters), feeding into an expand la...
Given the following machine learning model name: RoIPool, provide a description of the model
**Region of Interest Pooling**, or **RoIPool**, is an operation for extracting a small feature map (e.g., $7×7$) from each RoI in detection and segmentation based tasks. Features are extracted from each candidate box, and thereafter in models like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn), are then cla...
Given the following machine learning model name: TD Lambda, provide a description of the model
**TD($\lambda$)** is a generalisation of **TD($n$)** reinforcement learning algorithms, but it employs an [eligibility trace](https://paperswithcode.com/method/eligibility-trace) $\lambda$ and $\lambda$-weighted returns. The eligibility trace vector is initialized to zero at the beginning of the episode, an...
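A minimal tabular sketch of the update with accumulating traces (the state encoding, step size, and episode interface are illustrative assumptions):

```python
import numpy as np

def td_lambda_episode(states, rewards, V, alpha=0.1, gamma=0.9, lam=0.8):
    # Tabular TD(lambda): the trace vector z starts at zero each episode,
    # decays by gamma * lambda every step, and is bumped at visited states.
    z = np.zeros_like(V)
    for t in range(len(rewards)):
        s, s_next = states[t], states[t + 1]
        delta = rewards[t] + gamma * V[s_next] - V[s]  # TD error
        z *= gamma * lam                               # decay all traces
        z[s] += 1.0                                    # accumulate visit
        V = V + alpha * delta * z                      # credit all traced states
    return V
```

Setting `lam=0` recovers one-step TD(0), while `lam=1` approaches the Monte Carlo return.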
Given the following machine learning model name: Boost-GNN, provide a description of the model
**Boost-GNN** is an architecture that trains GBDT and GNN jointly to get the best of both worlds: the GBDT model deals with heterogeneous features, while GNN accounts for the graph structure. The model benefits from end-to-end optimization by allowing new trees to fit the gradient updates of GNN.
Given the following machine learning model name: Matrix Non-Maximum Suppression, provide a description of the model
**Matrix NMS**, or **Matrix Non-Maximum Suppression**, performs [non-maximum suppression](https://paperswithcode.com/method/non-maximum-suppression) with parallel matrix operations in one shot. It is motivated by [Soft-NMS](https://paperswithcode.com/method/soft-nms). Soft-NMS decays the other detection scores as a mo...
Given the following machine learning model name: Voxel Transformer, provide a description of the model
**VoTr** is a [Transformer](https://paperswithcode.com/method/transformer)-based 3D backbone for 3D object detection from point clouds. It contains a series of sparse and submanifold voxel modules. Submanifold voxel modules perform multi-head self-attention strictly on the non-empty voxels, while sparse voxel modules c...
Given the following machine learning model name: Gumbel Softmax, provide a description of the model
**Gumbel-Softmax** is a continuous distribution that has the property that it can be smoothly annealed into a categorical distribution, and whose parameter gradients can be easily computed via the reparameterization trick.
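A minimal NumPy sketch of drawing one sample (names and the temperature default are illustrative): add Gumbel noise to the logits and apply a temperature-scaled softmax.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    # Gumbel noise g = -log(-log(u)) with u ~ Uniform(0, 1); as tau -> 0
    # the sample approaches a one-hot categorical draw.
    rng = rng or np.random.default_rng()
    u = rng.random(logits.shape)
    g = -np.log(-np.log(u))
    z = (logits + g) / tau
    e = np.exp(z - z.max())  # stable softmax
    return e / e.sum()
```

Because the sample is a differentiable function of the logits, gradients can flow through it via the reparameterization trick.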
Given the following machine learning model name: Time-homogenuous Top-K Ranking, provide a description of the model
Given the following machine learning model name: Harm-Net, provide a description of the model
A **Harmonic Network**, or **Harm-Net**, is a type of convolutional neural network that replaces convolutional layers with "harmonic blocks" that use [Discrete Cosine Transform](https://paperswithcode.com/method/discrete-cosine-transform) (DCT) filters. These blocks can be useful in truncating high-frequency informati...
Given the following machine learning model name: Location Sensitive Attention, provide a description of the model
**Location Sensitive Attention** is an attention mechanism that extends the [additive attention mechanism](https://paperswithcode.com/method/additive-attention) to use cumulative attention weights from previous decoder time steps as an additional feature. This encourages the model to move forward consistently through t...
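The extension described above can be sketched in NumPy; this is an illustrative sketch, not the original implementation, and the weight names (`w_q`, `w_k`, `w_f`, `v`) and the 1-D convolution over cumulative alignments are assumptions about the parameterization:

```python
import numpy as np

def location_sensitive_scores(query, keys, cum_align, w_q, w_k, w_f, v,
                              conv_filters):
    """Additive attention scores extended with a location feature:
    cumulative alignments from previous decoder steps are convolved
    and added as an extra term inside the tanh (a sketch; names are
    illustrative)."""
    # Location features: causal 1-D convolution over cumulative alignments.
    f = np.array([
        sum(conv_filters[j] * cum_align[t - j] if t - j >= 0 else 0.0
            for j in range(len(conv_filters)))
        for t in range(len(cum_align))
    ])
    # Additive (Bahdanau-style) energies with the extra location term.
    e = np.array([
        v @ np.tanh(w_q @ query + w_k @ keys[t] + w_f * f[t])
        for t in range(len(keys))
    ])
    expe = np.exp(e - e.max())
    return expe / expe.sum()

rng = np.random.default_rng(0)
query = rng.normal(size=3)
keys = rng.normal(size=(5, 3))
cum_align = np.abs(rng.normal(size=5))   # summed weights from past steps
w_q, w_k = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
w_f, v = rng.normal(size=4), rng.normal(size=4)
scores = location_sensitive_scores(query, keys, cum_align,
                                   w_q, w_k, w_f, v,
                                   conv_filters=np.array([0.5, 0.3, 0.2]))
```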
Given the following machine learning model name: Optimal Transport Modeling, provide a description of the model
Given the following machine learning model name: Strain Elevation Tension Spring embedding, provide a description of the model
SETSe is a deterministic, physics-based graph embedding algorithm for weighted, feature-rich networks. It treats each edge as a spring and each node as a bead whose movement is constrained by the graph adjacency matrix, so that the nodes move in parallel planes, enforcing a minimum distance between neighboring nodes...
Given the following machine learning model name: DELG, provide a description of the model
**DELG** is a convolutional neural network for image retrieval that combines generalized mean pooling for global features and attentive selection for local features. The entire network can be learned end-to-end by carefully balancing the gradient flow between two heads – requiring only image-level labels. This allows f...
Given the following machine learning model name: Edge-augmented Graph Transformer, provide a description of the model
Transformer neural networks have achieved state-of-the-art results for unstructured data such as text and images but their adoption for graph-structured data has been limited. This is partly due to the difficulty of incorporating complex structural information in the basic transformer framework. We propose a simple yet...
Given the following machine learning model name: Batch Normalization, provide a description of the model
**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the networ...
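The normalization step described above can be written out directly; here is a minimal NumPy sketch of the training-time computation (running statistics and the backward pass are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch dimension to zero mean and
    unit variance, then apply a learned scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 4))   # a mini-batch of 64
out = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
```

After the step, each of the 4 features has (approximately) zero mean and unit variance across the batch, regardless of the input's original statistics.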
Given the following machine learning model name: Capsule Network, provide a description of the model
**Capsule Network** is a type of artificial neural network designed to better model hierarchical relationships. The approach is an attempt to more closely mimic biological neural organization.
Given the following machine learning model name: Quick Attention, provide a description of the model
\begin{equation} QA\left( x \right) = \sigma\left( f^{1 \times 1}\left( x \right) \right) + x \end{equation} **Quick Attention** takes in a feature map of size $W \times H \times C$ (Width × Height × Channels), creates two instances of the input feature map, then performs the $1 \times 1 \times C$ convolution on the first instance and calculates ...
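A minimal NumPy sketch of the formula above; treating the $1 \times 1$ convolution as a per-pixel channel-mixing matrix is an assumption about $f$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def quick_attention(x, w):
    """QA(x) = sigmoid(conv1x1(x)) + x.

    x: feature map of shape (H, W, C); w: (C, C) weights of a 1x1
    convolution, which acts as an independent channel mixing at each
    spatial position (bias omitted for brevity).
    """
    conv = x @ w                 # per-pixel channel mixing == 1x1 conv
    return sigmoid(conv) + x

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 16))
w = rng.normal(size=(16, 16))
out = quick_attention(x, w)
```

Because the sigmoid output lies strictly in (0, 1), the attention branch adds a bounded, gated offset on top of the identity skip connection `+ x`.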
Given the following machine learning model name: PipeTransformer, provide a description of the model
**PipeTransformer** is a method for automated elastic pipelining for efficient distributed training of [Transformer](https://paperswithcode.com/method/transformer) models. In PipeTransformer, an adaptive on-the-fly freeze algorithm is used that can identify and freeze some layers gradually during training, as well as a...
Given the following machine learning model name: ELMo, provide a description of the model
**Embeddings from Language Models**, or **ELMo**, is a type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Word vectors are learned functions of the intern...
Given the following machine learning model name: You Only Hypothesize Once, provide a description of the model
**You Only Hypothesize Once** is a local descriptor-based framework for the registration of two unaligned point clouds. The proposed descriptor achieves the rotation invariance by recent technologies of group equivariant feature learning, which brings more robustness to point density and noise. The descriptor in YOHO a...
Given the following machine learning model name: Field Embedded Factorization Machine, provide a description of the model
**Field Embedded Factorization Machine**, or **FEFM**, is a factorization machine variant. For each field pair, FEFM introduces symmetric matrix embeddings along with the usual feature vector embeddings that are present in FM. Like FM, $v\_{i}$ is the vector embedding of the $i^{th}$ feature. However, unlike Field-Awa...
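A sketch of a pairwise interaction term consistent with the description; the exact scoring function is an assumption, but the key ingredient is that the two feature embeddings interact through the symmetric matrix embedding of their field pair:

```python
import numpy as np

def fefm_interaction(v_i, v_j, w_field_pair):
    """Pairwise FEFM interaction (illustrative): feature embeddings v_i
    and v_j interact through the matrix embedding of their field pair,
    symmetrized so the term is order-independent."""
    w_sym = 0.5 * (w_field_pair + w_field_pair.T)   # enforce symmetry
    return float(v_i @ w_sym @ v_j)

rng = np.random.default_rng(0)
v_i, v_j = rng.normal(size=4), rng.normal(size=4)
w_pair = rng.normal(size=(4, 4))                    # one matrix per field pair
score_ij = fefm_interaction(v_i, v_j, w_pair)
score_ji = fefm_interaction(v_j, v_i, w_pair)
```

Symmetry of the matrix embedding makes the interaction independent of the order of the two features, matching the "symmetric matrix embeddings" in the description.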
Given the following machine learning model name: Dilated Causal Convolution, provide a description of the model
A **Dilated Causal Convolution** is a [causal convolution](https://paperswithcode.com/method/causal-convolution) where the filter is applied over an area larger than its length by skipping input values with a certain step. A dilated causal [convolution](https://paperswithcode.com/method/convolution) effectively allows ...
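The skipping behavior described above can be sketched directly; this is a minimal NumPy illustration of a 1-D dilated causal convolution (zero left-padding, so each output depends only on current and past inputs):

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation=1):
    """Causal convolution with dilation: output[t] depends only on
    x[t], x[t - d], x[t - 2d], ... (inputs before t=0 are zero-padded)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

x = np.arange(8.0)
w = np.array([1.0, 1.0])                 # kernel of size 2
out = dilated_causal_conv1d(x, w, dilation=2)   # out[t] = x[t] + x[t-2]
```

Stacking such layers with dilations 1, 2, 4, ... doubles the receptive field per layer while keeping the parameter count fixed.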
Given the following machine learning model name: WaveNet, provide a description of the model
**WaveNet** is an audio generative model based on the [PixelCNN](https://paperswithcode.com/method/pixelcnn) architecture. In order to deal with long-range temporal dependencies needed for raw audio generation, architectures are developed based on dilated causal convolutions, which exhibit very large receptive fields. ...
Given the following machine learning model name: GPT-4, provide a description of the model
**GPT-4** is a [Transformer](https://paperswithcode.com/method/transformer)-based model pre-trained to predict the next token in a document.
Given the following machine learning model name: Double Q-learning, provide a description of the model
**Double Q-learning** is an off-policy reinforcement learning algorithm that utilises double estimation to counteract overestimation problems with traditional Q-learning. The max operator in standard [Q-learning](https://paperswithcode.com/method/q-learning) and [DQN](https://paperswithcode.com/method/dqn) uses the...
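The double-estimation idea described above can be sketched as a single tabular update step; this is an illustrative NumPy sketch (state/action encoding and exploration policy are omitted):

```python
import numpy as np

def double_q_update(qa, qb, s, a, r, s_next, alpha=0.1, gamma=0.99,
                    rng=None):
    """One Double Q-learning step on tables qa and qb.

    With probability 1/2, table A selects the greedy next action but
    table B evaluates it (and vice versa), decoupling action selection
    from action evaluation to reduce the max-operator's overestimation.
    """
    if rng is None:
        rng = np.random.default_rng()
    if rng.uniform() < 0.5:
        a_star = int(np.argmax(qa[s_next]))
        qa[s, a] += alpha * (r + gamma * qb[s_next, a_star] - qa[s, a])
    else:
        b_star = int(np.argmax(qb[s_next]))
        qb[s, a] += alpha * (r + gamma * qa[s_next, b_star] - qb[s, a])
    return qa, qb

qa, qb = np.zeros((2, 2)), np.zeros((2, 2))
qa, qb = double_q_update(qa, qb, s=0, a=0, r=1.0, s_next=1,
                         rng=np.random.default_rng(0))
```

Exactly one of the two tables is updated per step, so from zero-initialized tables a single step with reward 1 moves one entry by `alpha * r = 0.1`.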
Given the following machine learning model name: NAS-FCOS, provide a description of the model
**NAS-FCOS** consists of two sub networks, an [FPN](https://paperswithcode.com/method/fpn) $f$ and a set of prediction heads $h$ which have shared structures. One notable difference with other FPN-based one-stage detectors is that our heads have partially shared weights. Only the last several layers of the predictions ...
Given the following machine learning model name: Adaptive Early-Learning Correction, provide a description of the model
**Adaptive Early-Learning Correction**, or **ADELE**, is a method for training semantic segmentation models from noisy annotations. It builds on the early-learning phenomenon: deep networks first fit the clean annotations before memorizing the annotation noise. ADELE adaptively detects when this transition occurs and corrects the noisy annotations using the model's own predictions, preventing memorization of the label noise.