prompts: stringlengths 87–212
description: stringlengths 0–6.76k
Given the following machine learning model name: Gated Linear Unit, provide a description of the model
A **Gated Linear Unit**, or **GLU**, computes: $$ \text{GLU}\left(a, b\right) = a\otimes \sigma\left(b\right) $$ It is used in natural language processing architectures, for example the [Gated CNN](https://paperswithcode.com/method/gated-convolution-network), because here $b$ is the gate that controls what informat...
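The gating computation above can be sketched in a few lines of numpy; the even split of one input vector into a value half $a$ and a gate half $b$ is an illustrative convention (as in Gated CNN feedforward blocks), not part of the GLU definition itself:

```python
import numpy as np

def glu(a, b):
    """Gated Linear Unit: a elementwise-scaled by a sigmoid gate on b."""
    return a * (1.0 / (1.0 + np.exp(-b)))

# Illustrative split of one input vector into value half `a` and gate half `b`.
x = np.array([2.0, -1.0, 0.5, 3.0])
a, b = x[:2], x[2:]
out = glu(a, b)        # same shape as a
```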
Given the following machine learning model name: Neural Tangent Transfer, provide a description of the model
**Neural Tangent Transfer**, or **NTT**, is a method for finding trainable sparse networks in a label-free manner. Specifically, NTT finds sparse networks whose training dynamics, as characterized by the neural tangent kernel, mimic those of dense networks in function space.
Given the following machine learning model name: Stochastic Depth, provide a description of the model
**Stochastic Depth** aims to shrink the depth of a network during training, while keeping it unchanged during testing. This is achieved by randomly dropping entire [ResBlocks](https://paperswithcode.com/method/residual-block) during training and bypassing their transformations through skip connections. Let $b\_{l}...
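The drop-or-keep behavior described above can be sketched as follows (a minimal sketch: `residual_fn` stands in for the ResBlock's transformation, and `survival_prob` is the per-block keep probability; at test time the branch is scaled by this probability rather than dropped):

```python
import numpy as np

def stochastic_depth_block(x, residual_fn, survival_prob, training, rng=None):
    """Skip the whole residual branch with prob 1 - survival_prob during
    training; at test time scale the branch by its survival probability."""
    if training:
        if rng is None:
            rng = np.random.default_rng()
        if rng.random() < survival_prob:
            return x + residual_fn(x)   # block kept: usual residual update
        return x                        # block dropped: identity skip only
    return x + survival_prob * residual_fn(x)
```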
Given the following machine learning model name: Global Local Attention Module, provide a description of the model
The Global Local Attention Module (GLAM) is an image model block that attends to the feature map's channels and spatial dimensions locally, and also attends to the feature map's channels and spatial dimensions globally. The locally attended feature maps, globally attended feature maps, and the original feature maps are...
Given the following machine learning model name: EfficientNetV2, provide a description of the model
**EfficientNetV2** is a type of convolutional neural network that has faster training speed and better parameter efficiency than [previous models](https://paperswithcode.com/method/efficientnet). To develop these models, the authors use a combination of training-aware [neural architecture search](https://paperswithcode.co...
Given the following machine learning model name: Vision-Language pretrained Model, provide a description of the model
VLMo is a unified vision-language pre-trained model that jointly learns a dual encoder and a fusion encoder with a modular Transformer network. A Mixture-of-Modality-Experts (MOME) transformer is introduced to encode different modalities which helps it to capture modality-specific information by modality experts, and a...
Given the following machine learning model name: Kernel Activation Function, provide a description of the model
A **Kernel Activation Function** is a non-parametric activation function defined as a one-dimensional kernel approximator: $$ f(s) = \sum_{i=1}^D \alpha_i \kappa( s, d_i) $$ where: 1. The dictionary of the kernel elements $d_0, \ldots, d_D$ is fixed by sampling the $x$-axis with a uniform step around 0. 2. Th...
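The kernel expansion above can be sketched in numpy; the Gaussian kernel $\kappa(s, d_i) = \exp(-\gamma (s - d_i)^2)$ is one common choice and is an assumption here, as is the dictionary size:

```python
import numpy as np

def kaf(s, alpha, d, gamma=1.0):
    """Kernel activation: weighted sum of Gaussian kernels centered at
    fixed dictionary points d (sampled with a uniform step around 0)."""
    k = np.exp(-gamma * (s[..., None] - d) ** 2)   # kappa(s, d_i)
    return k @ alpha                               # sum_i alpha_i * kappa(s, d_i)

d = np.linspace(-2.0, 2.0, 9)                  # fixed dictionary around 0
alpha = np.random.default_rng(0).normal(size=9) * 0.1   # trainable coefficients
y = kaf(np.array([0.0, 1.0]), alpha, d)
```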
Given the following machine learning model name: Position-Sensitive RoIAlign, provide a description of the model
**Position-Sensitive RoIAlign** is a position-sensitive version of [RoIAlign](https://paperswithcode.com/method/roi-align) - i.e. it performs selective alignment, allowing for the learning of position-sensitive region-of-interest alignment.
Given the following machine learning model name: Spatial and Channel-wise Attention-based Convolutional Neural Network, provide a description of the model
As CNN features are naturally spatial, channel-wise and multi-layer, Chen et al. proposed a novel spatial and channel-wise attention-based convolutional neural network (SCA-CNN). It was designed for the task of image captioning, and uses an encoder-decoder framework where a CNN first encodes an input image into a v...
Given the following machine learning model name: PoolFormer, provide a description of the model
PoolFormer is instantiated from MetaFormer by specifying the token mixer as an extremely simple operator, pooling. PoolFormer is used as a tool to verify the MetaFormer hypothesis "MetaFormer is actually what you need" (vs. "Attention is all you need").
Given the following machine learning model name: Accuracy-Robustness Area, provide a description of the model
In the space of adversarial perturbation against classifier accuracy, the ARA is the area between a classifier's curve and the straight line defined by a naive classifier's maximum accuracy. Intuitively, the ARA measures a combination of the classifier’s predictive power and its ability to overcome an adversary. Import...
Given the following machine learning model name: Visual Attention, provide a description of the model
Given the following machine learning model name: Frequency channel attention networks, provide a description of the model
FCANet contains a novel multi-spectral channel attention module. Given an input feature map $X \in \mathbb{R}^{C \times H \times W}$, multi-spectral channel attention first splits $X$ into many parts $x^{i} \in \mathbb{R}^{C' \times H \times W}$. Then it applies a 2D DCT to each part $x^{i}$. Note that a 2D DCT can use...
Given the following machine learning model name: TD-Gammon, provide a description of the model
**TD-Gammon** is a game-learning architecture for playing backgammon. It involves the use of a $TD\left(\lambda\right)$ learning algorithm and a feedforward neural network. Credit: [Temporal Difference Learning and TD-Gammon](https://cling.csd.uwo.ca/cs346a/extra/tdgammon.pdf)
Given the following machine learning model name: Dynamic Time Warping, provide a description of the model
Dynamic Time Warping (DTW) [1] is one of the best-known distance measures between pairs of time series. The main idea of DTW is to compute the distance from the matching of similar elements between time series. It uses the dynamic programming technique to find the optimal temporal matching between elements of two time...
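The dynamic-programming recurrence can be sketched directly for 1-D series (absolute difference as the local cost is an illustrative choice):

```python
import numpy as np

def dtw(x, y):
    """DTW distance between two 1-D series via dynamic programming."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # best of diagonal match, insertion, and deletion moves
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]
```

Note that a series matched against a stretched copy of itself (repeated elements) still has zero DTW distance, which is exactly the temporal-warping invariance described above.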
Given the following machine learning model name: CAMoE, provide a description of the model
**CAMoE** is a multi-stream Corpus Alignment network with single gate Mixture-of-Experts (MoE) for video-text retrieval. The CAMoE employs Mixture-of-Experts (MoE) to extract multi-perspective video representations, including action, entity, scene, etc., then align them with the corresponding part of the text. A [Dual ...
Given the following machine learning model name: Random Mutation Search, provide a description of the model
Given the following machine learning model name: GLOW, provide a description of the model
**GLOW** is a type of flow-based generative model that is based on an invertible $1 \times 1$ [convolution](https://paperswithcode.com/method/convolution). This builds on the flows introduced by [NICE](https://paperswithcode.com/method/nice) and [RealNVP](https://paperswithcode.com/method/realnvp). It consists of a ser...
Given the following machine learning model name: Adafactor, provide a description of the model
**Adafactor** is a stochastic optimization method based on [Adam](https://paperswithcode.com/method/adam) that reduces memory usage while retaining the empirical benefits of adaptivity. This is achieved through maintaining a factored representation of the squared gradient accumulator across training steps. Specifically...
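The factored accumulator can be sketched for a 2-D parameter; this is a simplified sketch (single step, no update-rule details) assuming the row-sum/column-sum factorization with a rank-1 reconstruction, which is exact whenever the accumulated squared gradient is itself rank one:

```python
import numpy as np

def factored_second_moment(R, C, G, beta2=0.999):
    """One step of a factored squared-gradient accumulator for a 2-D
    parameter: keep exponential moving averages of only the row sums R
    and column sums C of G^2, then reconstruct the full matrix."""
    G2 = G * G
    R = beta2 * R + (1 - beta2) * G2.sum(axis=1)   # per-row statistics
    C = beta2 * C + (1 - beta2) * G2.sum(axis=0)   # per-column statistics
    V = np.outer(R, C) / R.sum()                   # rank-1 reconstruction
    return R, C, V
```

Storing `R` and `C` costs O(n + m) memory instead of O(nm) for the full second-moment matrix, which is the source of the memory savings.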
Given the following machine learning model name: MoCo v3, provide a description of the model
**MoCo v3** aims to stabilize training of self-supervised ViTs. MoCo v3 is an incremental improvement of MoCo v1/2. Two crops are used for each image under random data augmentation. They are encoded by two encoders $f_q$ and $f_k$ with output vectors $q$ and $k$. $q$ behaves like a "query", where the goal of learning i...
Given the following machine learning model name: Enhanced Fusion Framework, provide a description of the model
The **Enhanced Fusion Framework** proposes three different ideas to improve the existing MI-based BCI frameworks. Image source: [Fumanal-Idocin et al.](https://arxiv.org/pdf/2101.06968v1.pdf)
Given the following machine learning model name: Adaptive Input Representations, provide a description of the model
**Adaptive Input Embeddings** extend the [adaptive softmax](https://paperswithcode.com/method/adaptive-softmax) to input word representations. The factorization assigns more capacity to frequent words and reduces the capacity for less frequent words with the benefit of reducing overfitting to rare words.
Given the following machine learning model name: Spatial-Channel Token Distillation, provide a description of the model
The **Spatial-Channel Token Distillation** method is proposed to improve the spatial and channel mixing from a novel knowledge distillation (KD) perspective. To be specific, we design a special KD mechanism for MLP-like Vision Models called Spatial-channel Token Distillation (STD), which improves the information mixing...
Given the following machine learning model name: Generic RoI Extractor, provide a description of the model
**GroIE** is an RoI extractor which intends to overcome the limitation of existing extractors which select only one (the best) layer from the [FPN](https://paperswithcode.com/method/fpn). The intuition is that all the layers of FPN retain useful information. Therefore, the proposed layer introduces non-local building ...
Given the following machine learning model name: Adaptive Parameter-wise Diagonal Quasi-Newton Method, provide a description of the model
Given the following machine learning model name: Parametric Exponential Linear Unit, provide a description of the model
**Parameterized Exponential Linear Units**, or **PELU**, is an activation function for neural networks. It involves learning a parameterization of [ELU](https://paperswithcode.com/method/elu) in order to learn the proper activation shape at each layer in a CNN. The PELU has two additional parameters over the ELU: ...
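A minimal numpy sketch of the PELU shape, assuming the common parameterization with two learned positive parameters $a$ (saturation value) and $b$ (decay/slope scale); with $a = b = 1$ it reduces to the standard ELU:

```python
import numpy as np

def pelu(x, a=1.0, b=1.0):
    """Parametric ELU sketch: (a/b)*x for x >= 0, a*(exp(x/b)-1) for x < 0,
    with a, b > 0 learned per layer."""
    return np.where(x >= 0, (a / b) * x, a * (np.exp(x / b) - 1.0))
```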
Given the following machine learning model name: Global Sub-Sampled Attention, provide a description of the model
**Global Sub-Sampled Attention**, or **GSA**, is an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) used in the [Twins-SVT](https://paperswithcode.com/method/twins-svt) architecture. A single representative is used to summarize the key information for each of $m \times...
Given the following machine learning model name: TaBERT, provide a description of the model
**TaBERT** is a pretrained language model (LM) that jointly learns representations for natural language sentences and (semi-)structured tables. TaBERT is trained on a large corpus of 26 million tables and their English contexts. In summary, TaBERT's process for learning representations for NL sentences is as follow...
Given the following machine learning model name: Poincaré Embeddings, provide a description of the model
**Poincaré Embeddings** learn hierarchical representations of symbolic data by embedding them into hyperbolic space -- or more precisely into an $n$-dimensional Poincaré ball. Due to the underlying hyperbolic geometry, this allows for learning of parsimonious representations of symbolic data by simultaneously capturing...
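The geodesic distance on the Poincaré ball, which these embeddings optimize over, has a closed form and can be sketched directly:

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance between two points inside the unit Poincare ball:
    arcosh(1 + 2||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))."""
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq / denom)
```

Distances blow up near the boundary of the ball, which is what lets points near the origin act as "parents" close to everything while leaves sit near the boundary.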
Given the following machine learning model name: Proxy Optimization for initial Proxies in Proxy Anchor Loss, provide a description of the model
Given the following machine learning model name: Panoptic-PolarNet, provide a description of the model
**Panoptic-PolarNet** is a point cloud segmentation framework for LiDAR point clouds. It learns both semantic segmentation and class-agnostic instance clustering in a single inference network using a polar Bird's Eye View (BEV) representation, enabling the authors to circumvent the issue of occlusion among instances in...
Given the following machine learning model name: Channel-wise Soft Attention, provide a description of the model
**Channel-wise Soft Attention** is an attention mechanism in computer vision that assigns "soft" attention weights for each channel $c$. In soft channel-wise attention, the alignment weights are learned and placed "softly" over each channel. This would contrast with hard attention which would only selects one channel t...
Given the following machine learning model name: DCN-V2, provide a description of the model
**DCN-V2** is an architecture for learning-to-rank that improves upon the original [DCN](http://paperswithcode.com/method/dcn) model. It first learns explicit feature interactions of the inputs (typically the embedding layer) through cross layers, and then combines with a deep network to learn complementary implicit in...
Given the following machine learning model name: Commute Times Layer, provide a description of the model
**TL;DR: CT-Layer is a GNN layer that rewires a graph in an inductive and parameter-free way according to the commute times distance (or effective resistance). We achieve this by learning a differentiable way to compute the CT-embedding of the graph.** ### Summary **CT-Layer** is able to learn the *Commute T...
Given the following machine learning model name: Massively multilingual probing based on Universal Dependencies, provide a description of the model
Given the following machine learning model name: Hue — Bi-Dimensional Empirical Mode Decomposition, provide a description of the model
Given the following machine learning model name: FashionCLIP, provide a description of the model
FashionCLIP is a fine-tuned CLIP model on fashion data (more than 800K pairs). It is the first foundation model for Fashion.
Given the following machine learning model name: Squared ReLU, provide a description of the model
**Squared ReLU** is an activation function used in the [Primer](https://paperswithcode.com/method/primer) architecture in the feedforward block of the [Transformer](https://paperswithcode.com/methods/category/transformers) layer. It is simply squared [ReLU](https://paperswithcode.com/method/relu) activations. The ef...
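Since the activation is simply the square of the ReLU, the sketch is one line:

```python
import numpy as np

def squared_relu(x):
    """Squared ReLU: max(x, 0) ** 2, as used in Primer's feedforward block."""
    return np.maximum(x, 0.0) ** 2
```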
Given the following machine learning model name: Balanced L1 Loss, provide a description of the model
**Balanced L1 Loss** is a loss function used for the object detection task. Classification and localization problems are solved simultaneously under the guidance of a multi-task loss since [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn), defined as: $$ L\_{p,u,t\_{u},v} = L\_{cls}\left(p, u\right) + \lam...
Given the following machine learning model name: Darknet-53, provide a description of the model
**Darknet-53** is a convolutional neural network that acts as a backbone for the [YOLOv3](https://paperswithcode.com/method/yolov3) object detection approach. The improvements upon its predecessor [Darknet-19](https://paperswithcode.com/method/darknet-19) include the use of residual connections, as well as more layers.
Given the following machine learning model name: A3C, provide a description of the model
**A3C**, **Asynchronous Advantage Actor Critic**, is a policy gradient algorithm in reinforcement learning that maintains a policy $\pi\left(a\_{t}\mid{s}\_{t}; \theta\right)$ and an estimate of the value function $V\left(s\_{t}; \theta\_{v}\right)$. It operates in the forward view and uses a mix of $n$-step returns t...
Given the following machine learning model name: FastSpeech 2s, provide a description of the model
**FastSpeech 2s** is a text-to-speech model that abandons mel-spectrograms as intermediate output completely and directly generates speech waveform from text during inference. In other words there is no cascaded mel-spectrogram generation (acoustic model) and waveform generation (vocoder). FastSpeech 2s generates wavef...
Given the following machine learning model name: Feedforward Network, provide a description of the model
A **Feedforward Network**, or a **Multilayer Perceptron (MLP)**, is a neural network with solely densely connected layers. This is the classic neural network architecture of the literature. It consists of inputs $x$ passed through units $h$ (of which there can be many layers) to predict a target $y$. Activation functio...
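The forward pass of such a network is just alternating affine maps and elementwise nonlinearities; a minimal sketch (tanh as the activation and a linear final layer are illustrative choices):

```python
import numpy as np

def mlp_forward(x, weights, biases, act=np.tanh):
    """Forward pass of a plain feedforward network: hidden layers apply
    an affine map followed by a nonlinearity; the last layer stays linear."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = act(h @ W + b)
    return h @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(4, 8)), rng.normal(size=(8, 1))]
bs = [np.zeros(8), np.zeros(1)]
y = mlp_forward(rng.normal(size=(2, 4)), Ws, bs)   # batch of 2, one output each
```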
Given the following machine learning model name: IoU-Net, provide a description of the model
**IoU-Net** is an object detection architecture that introduces localization confidence. IoU-Net learns to predict the IoU between each detected bounding box and the matched ground-truth. The network acquires this confidence of localization, which improves the NMS procedure by preserving accurately localized bounding b...
Given the following machine learning model name: Squeeze aggregated excitation network, provide a description of the model
This method introduces the aggregated dense block within the squeeze excitation block to enhance representation. The squeeze method compresses the input flow and sends it to excitation with dense layers to regain its shape. The paper introduces multiple dense layers stacked side by side, similar to ResNeXt. This learns...
Given the following machine learning model name: Elastic ResNeXt Block, provide a description of the model
An **Elastic ResNeXt Block** is a modification of the [ResNeXt Block](https://paperswithcode.com/method/resnext-block) that adds downsamplings and upsamplings in parallel branches at each layer. It is called "elastic" because each layer in the network is flexible in terms of choosing the best scale by a soft policy.
Given the following machine learning model name: Independent Component Analysis, provide a description of the model
**Independent component analysis** (ICA) is a statistical and computational technique for revealing hidden factors that underlie sets of random variables, measurements, or signals. ICA defines a generative model for the observed multivariate data, which is typically given as a large database of samples. In the mo...
Given the following machine learning model name: Contrastive BERT, provide a description of the model
**Contrastive BERT** is a reinforcement learning agent that combines a new contrastive loss and a hybrid [LSTM](https://paperswithcode.com/method/lstm)-[transformer](https://paperswithcode.com/method/transformer) architecture to tackle the challenge of improving data efficiency for RL. It uses bidirectional masked pred...
Given the following machine learning model name: LMOT: Efficient Light-Weight Detection and Tracking in Crowds, provide a description of the model
Rana Mostafa, Hoda Baraka and AbdelMoniem Bayoumi **LMOT**, i.e., Light-weight Multi-Object Tracker, performs joint pedestrian detection and tracking. LMOT introduces a simplified DLA-34 encoder network to extract detection features for the current image that are computationally efficient. Furthermore, we generate ...
Given the following machine learning model name: Enhanced-Multimodal Fuzzy Framework, provide a description of the model
A BCI MI framework to classify brain signals using a multimodal decision-making phase, with an additional differentiation of the signal.
Given the following machine learning model name: Characteristic Function Estimation for Discrete Probability Distributions, provide a description of the model
Given the following machine learning model name: SqueezeNet, provide a description of the model
**SqueezeNet** is a convolutional neural network that employs design strategies to reduce the number of parameters, notably with the use of fire modules that "squeeze" parameters using 1x1 convolutions.
Given the following machine learning model name: Cluster-GCN, provide a description of the model
Cluster-GCN is a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as the following: at each step, it samples a block of nodes that associate with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search ...
Given the following machine learning model name: Deformable Attention Module, provide a description of the model
**Deformable Attention Module** is an attention module used in the [Deformable DETR](https://paperswithcode.com/method/deformable-detr) architecture, which seeks to overcome an issue with base [Transformer attention](https://paperswithcode.com/method/scaled), namely that it looks over all possible spatial locations. Inspired by...
Given the following machine learning model name: Nearest-Neighbor Contrastive Learning of Visual Representations, provide a description of the model
Given the following machine learning model name: Co-Correcting, provide a description of the model
**Co-Correcting** is a noise-tolerant deep learning framework for medical image classification based on mutual learning and annotation correction. It consists of three modules: the dual-network architecture, the curriculum learning module, and the label correction module.
Given the following machine learning model name: VisuoSpatial Foresight, provide a description of the model
**VisuoSpatial Foresight** is a method for robotic fabric manipulation that leverages a combination of RGB and depth information to learn goal conditioned fabric manipulation policies for a variety of long horizon tasks.
Given the following machine learning model name: Weight Standardization, provide a description of the model
**Weight Standardization** is a normalization technique that smooths the loss landscape by standardizing the weights in convolutional layers. Different from the previous normalization methods that focus on *activations*, WS considers the smoothing effects of *weights* more than just length-direction decoupling. Theoret...
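The core operation can be sketched directly: standardize each output channel's fan-in weights to zero mean and unit variance (the `(out_ch, in_ch, kh, kw)` weight layout is an illustrative convention):

```python
import numpy as np

def weight_standardize(W, eps=1e-5):
    """Standardize a conv weight of shape (out_ch, in_ch, kh, kw):
    zero mean and unit variance over each output channel's fan-in."""
    flat = W.reshape(W.shape[0], -1)
    mean = flat.mean(axis=1, keepdims=True)
    std = flat.std(axis=1, keepdims=True)
    return ((flat - mean) / (std + eps)).reshape(W.shape)

W = np.random.default_rng(0).normal(size=(8, 4, 3, 3))
W_hat = weight_standardize(W)
```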
Given the following machine learning model name: Mobile DenseNet, provide a description of the model
Given the following machine learning model name: ECA-Net, provide a description of the model
An **ECA-Net** is a type of convolutional neural network that utilises an [Efficient Channel Attention](https://paperswithcode.com/method/efficient-channel-attention) module.
Given the following machine learning model name: Gradient-based optimization, provide a description of the model
GBO is a novel metaheuristic optimization algorithm. The GBO, inspired by the gradient-based Newton’s method, uses two main operators, the gradient search rule (GSR) and the local escaping operator (LEO), together with a set of vectors to explore the search space. The GSR employs the gradient-based method to enhance the exploration tende...
Given the following machine learning model name: InternVideo: General Video Foundation Models via Generative and Discriminative Learning, provide a description of the model
The foundation models have recently shown excellent performance on a variety of downstream tasks in computer vision. However, most existing vision foundation models simply focus on image-level pretraining and adaptation, which are limited for dynamic and complex video-level understanding tasks. To fill the gap, we presen...
Given the following machine learning model name: DetNAS, provide a description of the model
**DetNAS** is a [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) algorithm for the design of better backbones for object detection. It is based on the technique of one-shot supernet, which contains all possible networks in the search space. The supernet is trained under the typ...
Given the following machine learning model name: Variational Entanglement Detection, provide a description of the model
**Variational Entanglement Detection** is a variational quantum algorithm which uses criteria based on positive maps as a bridge and works as follows. Given an unknown target bipartite quantum state, it firstly decomposes the chosen positive map into a linear combination of NISQ implementable quantum operations. Then, ...
Given the following machine learning model name: Snapshot Ensembles: Train 1, get M for free, provide a description of the model
The overhead cost of training multiple deep neural networks can be very high in terms of training time, hardware, and computational resource requirements, and often acts as an obstacle to creating deep ensembles. To overcome these barriers, Huang et al. proposed a unique method to create ense...
Given the following machine learning model name: Contextualized Topic Models, provide a description of the model
Contextualized Topic Models are based on the Neural-ProdLDA variational autoencoding approach by Srivastava and Sutton (2017). This approach trains an encoding neural network to map pre-trained contextualized word embeddings (e.g., [BERT](https://paperswithcode.com/method/bert)) to latent representations. Those lat...
Given the following machine learning model name: Prioritized Sweeping, provide a description of the model
**Prioritized Sweeping** is a reinforcement learning technique for model-based algorithms that prioritizes updates according to a measure of urgency, and performs these updates first. A queue is maintained of every state-action pair whose estimated value would change nontrivially if updated, prioritized by the size of ...
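The queue-driven backup loop can be sketched for the tabular case; this is a simplified sketch in which the dict-of-dict Q table, the `model` map of observed `(reward, next_state)` transitions, and the `predecessors` map are illustrative data-structure choices:

```python
import heapq
import numpy as np

def prioritized_sweeping_step(Q, model, predecessors, s, a, r, s2,
                              alpha=0.5, gamma=0.9, theta=1e-4, n_updates=10):
    """One real transition (s, a) -> (r, s2), then prioritized planning
    backups drawn from a max-priority queue of state-action pairs."""
    model[(s, a)] = (r, s2)
    pq = []
    p = abs(r + gamma * max(Q[s2].values()) - Q[s][a])
    if p > theta:
        heapq.heappush(pq, (-p, s, a))          # negate: heapq is a min-heap
    for _ in range(n_updates):
        if not pq:
            break
        _, s_, a_ = heapq.heappop(pq)
        r_, s2_ = model[(s_, a_)]
        Q[s_][a_] += alpha * (r_ + gamma * max(Q[s2_].values()) - Q[s_][a_])
        # requeue predecessors whose estimates would change nontrivially
        for sp, ap in predecessors.get(s_, []):
            rp, _ = model[(sp, ap)]
            pp = abs(rp + gamma * max(Q[s_].values()) - Q[sp][ap])
            if pp > theta:
                heapq.heappush(pq, (-pp, sp, ap))
    return Q
```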
Given the following machine learning model name: Deformable Kernel, provide a description of the model
A **Deformable Kernel** is a type of convolutional operator for deformation modeling. DKs learn free-form offsets on kernel coordinates to deform the original kernel space towards specific data modality, rather than recomposing data. This can directly adapt the effective receptive field (ERF) while leaving the recepti...
Given the following machine learning model name: Fast Bi-level Adversarial Training, provide a description of the model
Fast-BAT is a new method for accelerated adversarial training.
Given the following machine learning model name: Neural network for graphs, provide a description of the model
NN4G is based on a constructive feedforward architecture with state variables that uses neurons with no feedback connections. The neurons are applied to the input graphs by a general traversal process that relaxes the constraints of previous approaches derived by the causality assumption over hierarchical input data. ...
Given the following machine learning model name: Lipschitz Constant Constraint, provide a description of the model
Given the following machine learning model name: Deep Equilibrium Models, provide a description of the model
Deep Equilibrium Models are a new kind of implicit model, where the output of the network is defined as the solution to an "infinite-level" fixed point equation. Thanks to this, the gradient of the output can be computed without storing activations, and therefore with a significantly reduced memory footprint.
Given the following machine learning model name: Proximal Policy Optimization, provide a description of the model
**Proximal Policy Optimization**, or **PPO**, is a policy gradient method for reinforcement learning. The motivation was to have an algorithm with the data efficiency and reliable performance of [TRPO](https://paperswithcode.com/method/trpo), while using only first-order optimization. Let $r\_{t}\left(\theta\right)...
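The clipped surrogate objective built from the probability ratio $r_t(\theta)$ can be sketched in numpy (function and variable names are illustrative; a full PPO implementation also needs the value and entropy terms):

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """PPO clipped surrogate: elementwise minimum of the unclipped and
    clipped ratio terms, averaged over the batch (to be maximized)."""
    ratio = np.exp(logp_new - logp_old)                  # r_t(theta)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))
```

Taking the minimum makes the objective a pessimistic bound: large ratio changes only ever reduce the objective, which is what removes the incentive for destructively large policy updates.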
Given the following machine learning model name: GreedyNAS-A, provide a description of the model
**GreedyNAS-A** is a convolutional neural network discovered using the [GreedyNAS](https://paperswithcode.com/method/greedynas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method. The basic building blocks used are inverted residual blocks (from [MobileNetV2](https://paper...
Given the following machine learning model name: Convolutional Vision Transformer, provide a description of the model
The **Convolutional vision Transformer (CvT)** is an architecture which incorporates convolutions into the [Transformer](https://paperswithcode.com/method/transformer). The CvT design introduces convolutions to two core sections of the ViT architecture. First, the Transformers are partitioned into multiple stages th...
Given the following machine learning model name: BiSeNet V2, provide a description of the model
**BiSeNet V2** is a two-pathway architecture for real-time semantic segmentation. One pathway is designed to capture the spatial details with wide channels and shallow layers, called Detail Branch. In contrast, the other pathway is introduced to extract the categorical semantics with narrow channels and deep layers, ca...
Given the following machine learning model name: XCiT, provide a description of the model
**Cross-Covariance Image Transformers**, or **XCiT**, is a type of [vision transformer](https://paperswithcode.com/methods/category/vision-transformer) that aims to combine the accuracy of [conventional transformers](https://paperswithcode.com/methods/category/transformers) with the scalability of [convolutional archit...
Given the following machine learning model name: Value Imputation and Mask Estimation, provide a description of the model
**VIME**, or **Value Imputation and Mask Estimation**, is a self- and semi-supervised learning framework for tabular data. It consists of a pretext task of estimating mask vectors from corrupted tabular data in addition to the reconstruction pretext task for self-supervised learning.
Given the following machine learning model name: Residual Attention Network, provide a description of the model
Inspired by the success of ResNet, Wang et al. proposed the very deep convolutional residual attention network (RAN) by combining an attention mechanism with residual connections. Each attention module stacked in a residual attention network can be divided into a mask branch and a trunk branch. The trunk br...
Given the following machine learning model name: Seesaw Loss, provide a description of the model
**Seesaw Loss** is a loss function for long-tailed instance segmentation. It dynamically re-balances the gradients of positive and negative samples on a tail class with two complementary factors: mitigation factor and compensation factor. The mitigation factor reduces punishments to tail categories w.r.t the ratio of c...
Given the following machine learning model name: NVAE Generative Residual Cell, provide a description of the model
The **NVAE Generative Residual Cell** is a skip connection block used as part of the [NVAE](https://paperswithcode.com/method/nvae) architecture for the generator. The residual cell expands the number of channels $E$ times before applying the [depthwise separable convolution](https://paperswithcode.com/method/depthwise...
Given the following machine learning model name: Local Prior Matching, provide a description of the model
**Local Prior Matching** is a semi-supervised objective for speech recognition that distills knowledge from a strong prior (e.g. a language model) to provide learning signal to a discriminative model trained on unlabeled speech. The LPM objective minimizes the cross entropy between the local prior and the model distrib...
Given the following machine learning model name: k-Nearest Neighbors, provide a description of the model
**$k$-Nearest Neighbors** is a non-parametric algorithm for classification and regression. It is a type of instance-based learning, as it does not attempt to construct a general internal model but simply stores instances of the training data. Prediction is computed from a simple majority vote of the nearest neighbo...
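The store-then-vote procedure can be sketched in a few lines (Euclidean distance and unweighted majority voting are illustrative choices):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Toy training set: two clusters with labels 0 and 1.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
```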
Given the following machine learning model name: Random Erasing, provide a description of the model
Random Erasing is a data augmentation method for training the convolutional neural network (CNN), which randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and ma...
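A simplified sketch of the augmentation for a single-channel image in `[0, 1]`; the area-fraction range and the square-ish region shape are illustrative simplifications (the original method also samples an aspect ratio):

```python
import numpy as np

def random_erase(img, min_frac=0.02, max_frac=0.2, rng=None):
    """Overwrite one random rectangle of the image with uniform noise."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    area = h * w * rng.uniform(min_frac, max_frac)   # target erased area
    eh = max(1, min(int(np.sqrt(area)), h))
    ew = max(1, min(int(area // eh), w))
    top = rng.integers(0, h - eh + 1)
    left = rng.integers(0, w - ew + 1)
    out = img.copy()
    out[top:top + eh, left:left + ew] = rng.uniform(0, 1, (eh, ew) + img.shape[2:])
    return out
```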
Given the following machine learning model name: Augmented SBERT, provide a description of the model
**Augmented SBERT** is a data augmentation strategy for pairwise sentence scoring that uses a [BERT](https://paperswithcode.com/method/bert) cross-encoder to improve the performance for the [SBERT](https://paperswithcode.com/method/sbert) bi-encoders. Given a pre-trained, well-performing cross-encoder, we sample sentenc...
Given the following machine learning model name: Nonuniform Quantization for Stochastic Gradient Descent, provide a description of the model
As the size and complexity of models and datasets grow, so does the need for communication-efficient variants of stochastic gradient descent that can be deployed to perform parallel model training. One popular communication-compression method for data-parallel [SGD](https://paperswithcode.com/method/sgd) is QSGD (Alist...
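For context, a sketch of QSGD's stochastic uniform quantization, which NUQSGD modifies by replacing the uniform levels with nonuniform ones (an illustrative NumPy version, not the authors' implementation):

```python
import numpy as np

def qsgd_quantize(v, s=4, rng=None):
    """Stochastically quantize gradient vector v to s uniform levels per sign.

    Each coordinate is rounded to an adjacent level with a probability chosen
    so the result is an unbiased estimate of v; only the norm, signs and
    small integer levels then need to be communicated.
    """
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    scaled = np.abs(v) / norm * s          # position on the level grid
    lower = np.floor(scaled)
    prob_up = scaled - lower               # probability of rounding up
    levels = lower + (rng.random(v.shape) < prob_up)
    return np.sign(v) * levels * norm / s
```

Each worker quantizes its gradient before communication; averaging the unbiased estimates keeps SGD convergent while shrinking the message size.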
Given the following machine learning model name: High-Order Proximity preserved Embedding, provide a description of the model
Given the following machine learning model name: WaveGAN, provide a description of the model
**WaveGAN** is a generative adversarial network for unsupervised synthesis of raw-waveform audio (as opposed to image-like spectrograms). The WaveGAN architecture is based on [DCGAN](https://paperswithcode.com/method/dcgan). The DCGAN generator uses the [transposed convolution](https://paperswithcode.com/method/tr...
Given the following machine learning model name: CornerNet, provide a description of the model
**CornerNet** is an object detection model that detects an object bounding box as a pair of keypoints, the top-left corner and the bottom-right corner, using a single [convolutional](https://paperswithcode.com/method/convolution) neural network. By detecting objects as paired keypoints, we eliminate the need for designin...
Given the following machine learning model name: M2Det, provide a description of the model
**M2Det** is a one-stage object detection model that utilises a Multi-Level Feature Pyramid Network ([MLFPN](https://paperswithcode.com/method/mlfpn)) to extract features from the input image, and then similar to [SSD](https://paperswithcode.com/method/ssd), produces dense bounding boxes and category scores based on th...
Given the following machine learning model name: Gradient Clipping, provide a description of the model
One difficulty that arises with optimization of deep neural networks is that large parameter gradients can lead an [SGD](https://paperswithcode.com/method/sgd) optimizer to update the parameters strongly into a region where the loss function is much greater, effectively undoing much of the work that was needed to get t...
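A common variant, clipping by the global norm of all parameter gradients, can be sketched as follows (NumPy; the function name is ours, though deep learning frameworks ship equivalents such as PyTorch's `clip_grad_norm_`):

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their joint L2 norm is <= max_norm."""
    total = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    # Only shrink large gradients; never amplify small ones
    scale = min(1.0, max_norm / (total + 1e-12))
    return [g * scale for g in grads], total

grads = [np.array([3.0, 4.0]), np.array([12.0])]
clipped, norm_before = clip_by_global_norm(grads, max_norm=1.0)
```

Clipping by the joint norm preserves the gradient's direction while bounding the step size, which is why it is preferred over clipping each element independently.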
Given the following machine learning model name: DetNet, provide a description of the model
**DetNet** is a backbone convolutional neural network for object detection. Different from traditional pre-trained models for ImageNet classification, DetNet maintains the spatial resolution of the features even though extra stages are included. DetNet attempts to stay efficient by employing a low complexity dilated bo...
Given the following machine learning model name: Probability Guided Maxout, provide a description of the model
A regularization criterion that, unlike [dropout](https://paperswithcode.com/method/dropout) and its variants, is deterministic rather than random. It is grounded in the empirical evidence that feature descriptors with a larger L2-norm and highly active nodes are strongly correlated with confident class predictions. ...
Given the following machine learning model name: Schrödinger Network, provide a description of the model
**SchNet** is an end-to-end deep neural network architecture based on continuous-filter convolutions. It follows the deep tensor neural network framework, i.e. atom-wise representations are constructed by starting from embedding vectors that characterize the atom type before introducing the configuration of the system ...
Given the following machine learning model name: RandWire, provide a description of the model
**RandWire** is a type of convolutional neural network that arises from randomly wired neural networks sampled from stochastic network generators, in which a human-designed random process defines the generation.
Given the following machine learning model name: Channel Shuffle, provide a description of the model
**Channel Shuffle** is an operation to help information flow across feature channels in convolutional neural networks. It was used as part of the [ShuffleNet](https://paperswithcode.com/method/shufflenet) architecture. If we allow a group [convolution](https://paperswithcode.com/method/convolution) to obtain input ...
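The operation itself is just a reshape–transpose–reshape over the channel axis; a sketch for an (N, C, H, W) NumPy array:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups: C -> (groups, C//groups), transpose, flatten."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by the group count"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)  # swap the group and per-group channel axes
    return x.reshape(n, c, h, w)

# Six channels, two groups: channel order 0..5 becomes 0,3,1,4,2,5
x = np.arange(6).reshape(1, 6, 1, 1)
print(channel_shuffle(x, groups=2).flatten())  # → [0 3 1 4 2 5]
```

After the shuffle, each subsequent group convolution sees channels originating from every group, which is what lets information mix despite the grouped structure.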
Given the following machine learning model name: Recurrent Dropout, provide a description of the model
**Recurrent Dropout** is a regularization method for [recurrent neural networks](https://paperswithcode.com/methods/category/recurrent-neural-networks). [Dropout](https://paperswithcode.com/method/dropout) is applied to the updates to [LSTM](https://paperswithcode.com/method/lstm) memory cells (or [GRU](https://papersw...
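A sketch of the key idea for one LSTM cell-state update, assuming the usual gate notation (f and i are the forget and input gates, g the candidate update; the function name is ours): dropout masks the candidate update rather than the cell state itself, so long-term memory is not erased.

```python
import numpy as np

def cell_update_with_recurrent_dropout(c_prev, f, i, g, p=0.25, train=True, rng=None):
    """c_t = f * c_{t-1} + i * dropout(g): dropout hits only the update term."""
    rng = rng or np.random.default_rng(0)
    if train and p > 0.0:
        mask = (rng.random(g.shape) >= p) / (1.0 - p)  # inverted dropout scaling
        g = g * mask
    return f * c_prev + i * g
```

At evaluation time no mask is applied; the inverted-dropout scaling during training keeps the expected update unchanged.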
Given the following machine learning model name: Iterative Inpainting, provide a description of the model
Given the following machine learning model name: DenseNet-Elastic, provide a description of the model
**DenseNet-Elastic** is a convolutional neural network that is a modification of a [DenseNet](https://paperswithcode.com/method/densenet) with elastic blocks (extra upsampling and downsampling).
Given the following machine learning model name: FSAF, provide a description of the model
**FSAF**, or Feature Selective Anchor-Free, is a building block for single-shot object detectors. It can be plugged into single-shot detectors with a feature pyramid structure. The FSAF module addresses two limitations brought about by conventional anchor-based detection: 1) heuristic-guided feature selection; 2) overla...