Given the following machine learning model name: Inception-C, provide a description of the model
**Inception-C** is an image model block used in the [Inception-v4](https://paperswithcode.com/method/inception-v4) architecture.
Given the following machine learning model name: Location-based Attention, provide a description of the model
**Location-based Attention** is an attention mechanism in which the alignment scores are computed solely from the target hidden state $\mathbf{h}\_{t}$ as follows: $$ \mathbf{a}\_{t} = \text{softmax}(\mathbf{W}\_{a}\mathbf{h}\_{t}) $$
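The formula above can be sketched in a few lines of NumPy (toy sizes; `W_a` and `h_t` are random stand-ins, not learned parameters):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
W_a = rng.normal(size=(5, 4))   # maps the hidden state to one score per source position
h_t = rng.normal(size=4)        # target hidden state at step t

a_t = softmax(W_a @ h_t)        # alignment distribution over the 5 source positions
```

Note that the scores depend only on $\mathbf{h}\_{t}$, not on the source annotations, which is what distinguishes the location-based variant from content-based attention.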
Given the following machine learning model name: Normalizing Flows, provide a description of the model
**Normalizing Flows** are a method for constructing complex distributions by transforming a probability density through a series of invertible mappings. By repeatedly applying the rule for change of variables, the initial density ‘flows’ through the sequence of invertible mappings. At the end of this sequence we obtai...
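A minimal sketch of the change-of-variables rule for a single invertible elementwise affine map (a toy one-step "flow"; real normalizing flows stack many learned invertible maps):

```python
import numpy as np

def standard_normal_logpdf(z):
    return -0.5 * (z**2 + np.log(2 * np.pi))

# One invertible map x = a*z + b with a != 0; the base density "flows"
# through it, picking up a log-det-Jacobian correction.
a, b = 2.0, 1.0

def flow_logpdf(x):
    z = (x - b) / a                  # inverse mapping back to the base space
    log_det = -np.log(np.abs(a))     # log |dz/dx| from the change of variables
    return standard_normal_logpdf(z) + log_det
```

Here the result is analytically a Normal with mean `b` and scale `a`, which makes the change-of-variables bookkeeping easy to check by hand.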
Given the following machine learning model name: Cross-Covariance Attention, provide a description of the model
**Cross-Covariance Attention**, or **XCA**, is an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) which operates along the feature dimension instead of the token dimension as in [conventional transformers](https://paperswithcode.com/methods/category/transformers). Using the ...
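A hedged NumPy sketch of attending along the feature dimension (toy shapes; the normalization and softmax axis follow one reading of XCA and may differ in detail from the reference implementation):

```python
import numpy as np

def l2norm(x, axis):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def xca(Q, K, V, tau=1.0):
    # Normalize each feature column so the attention map is a cosine
    # cross-covariance between channels, not between tokens.
    Qh, Kh = l2norm(Q, axis=0), l2norm(K, axis=0)
    A = softmax((Kh.T @ Qh) / tau, axis=-1)   # d x d channel-attention map
    return V @ A                              # mixes channels, not tokens

rng = np.random.default_rng(1)
N, d = 6, 4                                   # 6 tokens, 4 channels (toy sizes)
out = xca(rng.normal(size=(N, d)), rng.normal(size=(N, d)), rng.normal(size=(N, d)))
```

The key point is that the attention map is $d \times d$ rather than $N \times N$, so its cost scales with the feature dimension instead of the token count.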
Given the following machine learning model name: IFNet, provide a description of the model
**IFNet** is an architecture for video frame interpolation that adopts a coarse-to-fine strategy with progressively increased resolutions: it iteratively updates intermediate flows and soft fusion mask via successive [IFBlocks](https://paperswithcode.com/method/ifblock). Conceptually, according to the iteratively updat...
Given the following machine learning model name: PatchAugment: Local Neighborhood Augmentation in Point Cloud Classification, provide a description of the model
Recent deep neural network models trained on smaller and less diverse datasets use data augmentation to alleviate limitations such as overfitting, reduced robustness, and lower generalization. Methods using 3D datasets are among the most common to use data augmentation techniques such as random point drop, scaling, tra...
Given the following machine learning model name: PAR Transformer, provide a description of the model
**PAR Transformer** is a [Transformer](https://paperswithcode.com/methods/category/transformers) model that uses 63% fewer [self-attention blocks](https://paperswithcode.com/method/scaled), replacing them with [feed-forward blocks](https://paperswithcode.com/method/position-wise-feed-forward-layer), while retaining tes...
Given the following machine learning model name: Lbl2TransformerVec, provide a description of the model
Given the following machine learning model name: TSRUp, provide a description of the model
**TSRUp**, or **Transformation-based Spatial Recurrent Unit p**, is a modification of a [ConvGRU](https://paperswithcode.com/method/cgru) used in the [TriVD-GAN](https://paperswithcode.com/method/trivd-gan) architecture for video generation. It largely follows [TSRUc](https://paperswithcode.com/method/tsruc), but co...
Given the following machine learning model name: Class Activation Guided Attention Mechanism (CAGAM), provide a description of the model
CAGAM is a form of spatial attention mechanism that propagates attention from a known context feature to an unknown one, thereby enhancing the unknown context for relevant pattern discovery. Usually the known context feature is a class activation map ([CAM](https://paperswithcode.com/method/cam)).
Given the following machine learning model name: Attention Gate, provide a description of the model
Attention gate focuses on targeted regions while suppressing feature activations in irrelevant regions. Given the input feature map $X$ and the gating signal $G\in \mathbb{R}^{C'\times H\times W}$ which is collected at a coarse scale and contains contextual information, the attention gate uses additive attention to ob...
Given the following machine learning model name: CenterNet, provide a description of the model
**CenterNet** is a one-stage object detector that detects each object as a triplet, rather than a pair, of keypoints. It utilizes two customized modules named [cascade corner pooling](https://paperswithcode.com/method/cascade-corner-pooling) and [center pooling](https://paperswithcode.com/method/center-pooling), which ...
Given the following machine learning model name: Neural Additive Model, provide a description of the model
**Neural Additive Models (NAMs)** make restrictions on the structure of neural networks, which yields a family of models that are inherently interpretable while suffering little loss in prediction accuracy when applied to tabular data. Methodologically, NAMs belong to a larger model family called Generalized Additive M...
Given the following machine learning model name: Agglomerative Contextual Decomposition, provide a description of the model
**Agglomerative Contextual Decomposition (ACD)** is an interpretability method that produces hierarchical interpretations for a single prediction made by a neural network, by scoring interactions and building them into a tree. Given a prediction from a trained neural network, ACD produces a hierarchical clustering of t...
Given the following machine learning model name: Gather-Excite Networks, provide a description of the model
GENet combines gather and excite operations. In the first step, it aggregates input features over large neighborhoods and models the relationships between different spatial locations. In the second step, it first generates an attention map of the same size as the input feature map, using interpolation. Then ...
Given the following machine learning model name: Ontology, provide a description of the model
Given the following machine learning model name: Griffin-Lim Algorithm, provide a description of the model
The **Griffin-Lim Algorithm (GLA)** is a phase reconstruction method based on the redundancy of the short-time Fourier transform. It promotes the consistency of a spectrogram by iterating two projections, where a spectrogram is said to be consistent when its inter-bin dependency owing to the redundancy of STFT is retai...
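A minimal NumPy/SciPy sketch of the two-projection iteration (toy signal; `nperseg` and the iteration count are arbitrary illustrative choices, not the settings from any particular paper):

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, n_iter=32, nperseg=256):
    """Recover a waveform whose STFT magnitude approximates `mag` by
    alternating two projections: fix the magnitude, then re-project onto
    the set of consistent spectrograms via istft followed by stft."""
    phase = np.zeros_like(mag)
    for _ in range(n_iter):
        _, x = istft(mag * np.exp(1j * phase), nperseg=nperseg)
        _, _, Z = stft(x, nperseg=nperseg)
        phase = np.angle(Z)          # keep the consistent phase estimate
    _, x = istft(mag * np.exp(1j * phase), nperseg=nperseg)
    return x

# Toy target: the STFT magnitude of a 220 Hz sine (phase discarded)
t = np.linspace(0, 1, 4096, endpoint=False)
_, _, Z = stft(np.sin(2 * np.pi * 220 * t), nperseg=256)
x_rec = griffin_lim(np.abs(Z))
```

Each loop iteration performs exactly the two projections described above: onto the set of spectrograms with the target magnitude, and onto the set of consistent spectrograms (those that are the STFT of some waveform).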
Given the following machine learning model name: Pyramidal Residual Unit, provide a description of the model
A **Pyramidal Residual Unit** is a type of residual unit where the number of channels gradually increases as a function of the depth at which the layer occurs, which is similar to a pyramid structure of which the shape gradually widens from the top downwards. It was introduced as part of the [PyramidNet](https://papers...
Given the following machine learning model name: LayerDrop, provide a description of the model
**LayerDrop** is a form of structured [dropout](https://paperswithcode.com/method/dropout) for [Transformer](https://paperswithcode.com/method/transformer) models which has a regularization effect during training and allows for efficient pruning at inference time. It randomly drops layers from the Transformer according...
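A rough sketch of structured layer dropping during training (the residual "layers" here are illustrative toy functions, not Transformer blocks):

```python
import numpy as np

def forward_with_layerdrop(x, layers, p_drop=0.2, training=True, rng=None):
    """Apply a stack of residual layers, randomly skipping whole layers
    during training (structured dropout). At inference time the same
    mechanism allows pruning layers outright without retraining."""
    rng = rng or np.random.default_rng()
    for layer in layers:
        if training and rng.random() < p_drop:
            continue                  # drop the entire layer this step
        x = x + layer(x)              # residual connection, as in Transformers
    return x

# Toy residual sub-layers with different weights
layers = [lambda x, w=w: np.tanh(w * x) for w in (0.1, 0.2, 0.3, 0.4)]
y = forward_with_layerdrop(np.ones(3), layers, rng=np.random.default_rng(0))
```

Because whole layers are dropped (rather than individual units), any sub-network of the trained stack remains a valid, coherent model, which is what makes inference-time pruning cheap.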
Given the following machine learning model name: STAC, provide a description of the model
**STAC** is a semi-supervised framework for visual object detection along with a data augmentation strategy. STAC deploys highly confident pseudo labels of localized objects from an unlabeled image and updates the model by enforcing consistency via strong augmentations. We generate pseudo labels (i.e., bounding boxes a...
Given the following machine learning model name: FastSGT, provide a description of the model
**Fast Schema Guided Tracker**, or **FastSGT**, is a fast and robust [BERT](https://paperswithcode.com/method/bert)-based model for state tracking in goal-oriented dialogue systems. The model employs carry-over mechanisms for transferring the values between slots, enabling switching between services and accepting the v...
Given the following machine learning model name: Spectral Normalization, provide a description of the model
**Spectral Normalization** is a normalization technique used for generative adversarial networks, used to stabilize training of the discriminator. Spectral normalization has the convenient property that the Lipschitz constant is the only hyper-parameter to be tuned. It controls the Lipschitz constant of the discrimi...
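The power-iteration estimate at the core of spectral normalization can be sketched as follows (many iterations here for a clean check; in practice a single step per training update suffices because the weights change slowly):

```python
import numpy as np

def spectral_norm(W, n_iter=30):
    """Estimate the largest singular value of W by power iteration and
    divide W by it, so the resulting linear map is 1-Lipschitz."""
    rng = np.random.default_rng(0)
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u; v /= np.linalg.norm(v)
        u = W @ v;  u /= np.linalg.norm(u)
    sigma = u @ W @ v                 # converged largest singular value
    return W / sigma

W = np.array([[3.0, 0.0], [0.0, 1.0]])   # spectral norm 3 by construction
W_sn = spectral_norm(W)                   # spectral norm now 1
```

Dividing every weight matrix of the discriminator this way bounds its Lipschitz constant, which is the stabilizing effect described above.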
Given the following machine learning model name: DiffAugment, provide a description of the model
**Differentiable Augmentation (DiffAugment)** is a set of differentiable image transformations used to augment data during [GAN](https://paperswithcode.com/method/gan) training. The transformations are applied to the real and generated images. It enables the gradients to be propagated through the augmentation back to t...
Given the following machine learning model name: Composite Fields, provide a description of the model
**Composite Fields** represent and associate entities with a composite of primitive fields.
Given the following machine learning model name: Dense Connections, provide a description of the model
**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\_{\text{inputs}}*n\_{\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable n...
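The parameter count is easy to verify with a toy layer (the sizes are arbitrary illustrative choices):

```python
import numpy as np

# A dense (fully connected) layer is a single matrix multiply; the weight
# matrix alone holds n_inputs * n_outputs parameters, plus one bias per output.
n_inputs, n_outputs = 512, 256
rng = np.random.default_rng(0)
W = rng.normal(size=(n_outputs, n_inputs))
b = np.zeros(n_outputs)

x = rng.normal(size=n_inputs)
y = W @ x + b                  # every input contributes to every output

n_params = W.size + b.size     # 512 * 256 weights + 256 biases
```

Even at these modest sizes the layer already has over 130k parameters, which illustrates why dense layers dominate the parameter budget of many networks.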
Given the following machine learning model name: DistDGL, provide a description of the model
**DistDGL** is a system for training GNNs in a mini-batch fashion on a cluster of machines. It is based on the Deep Graph Library (DGL), a popular GNN development framework. DistDGL distributes the graph and its associated data (initial features and embeddings) across the machines and uses this distribution to deriv...
Given the following machine learning model name: Kalman Optimization for Value Approximation, provide a description of the model
**Kalman Optimization for Value Approximation**, or **KOVA** is a general framework for addressing uncertainties while approximating value-based functions in deep RL domains. KOVA minimizes a regularized objective function that concerns both parameter and noisy return uncertainties. It is feasible when using non-linear...
Given the following machine learning model name: SENet, provide a description of the model
A **SENet** is a convolutional neural network architecture that employs squeeze-and-excitation blocks to enable the network to perform dynamic channel-wise feature recalibration.
Given the following machine learning model name: Test-time Local Converter, provide a description of the model
TLC converts the global operation to a local one so that it extracts representations based on local spatial regions of features, as in the training phase.
Given the following machine learning model name: RotatE, provide a description of the model
**RotatE** is a method for generating graph embeddings which is able to model and infer various relation patterns including: symmetry/antisymmetry, inversion, and composition. Specifically, the RotatE model defines each relation as a rotation from the source entity to the target entity in the complex vector space. The ...
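A toy NumPy sketch of the RotatE distance, treating each relation as an elementwise unit-modulus rotation in complex embedding space (the embeddings below are made up for illustration):

```python
import numpy as np

def rotate_score(h, r_phase, t):
    """RotatE distance: the relation rotates the head embedding elementwise
    by e^{i*theta}; a lower distance means a more plausible triple."""
    r = np.exp(1j * r_phase)              # unit-modulus complex rotation
    return np.linalg.norm(h * r - t)

k = 4
h = np.exp(1j * np.zeros(k))              # toy head entity on the unit circle
theta = np.full(k, np.pi / 2)             # relation = quarter-turn rotation
t_good = h * np.exp(1j * theta)           # exactly the rotated head
t_bad = h * np.exp(1j * (theta + 1.0))    # off by an extra rotation
```

Because rotations compose, invert, and can be symmetric (theta = 0 or pi), this single parameterization covers the relation patterns listed above.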
Given the following machine learning model name: BoundaryNet, provide a description of the model
**BoundaryNet** is a resizing-free approach for layout annotation. The variable-sized user selected region of interest is first processed by an attention-guided skip network. The network optimization is guided via Fast Marching distance maps to obtain a good quality initial boundary estimate and an associated feature r...
Given the following machine learning model name: TransE, provide a description of the model
**TransE** is an energy-based model that produces knowledge base embeddings. It models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Relationships are represented as translations in the embedding space: if $\left(h, \mathcal{l}, t\right)$ holds, the embe...
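The translation idea can be sketched as an energy function over toy embeddings (all vectors below are illustrative, not trained):

```python
import numpy as np

def transe_energy(h, l, t):
    """TransE energy: relations are translations, so for a true triple
    (h, l, t) we want h + l to be close to t; lower energy = more plausible."""
    return np.linalg.norm(h + l - t)

h = np.array([0.1, 0.2, 0.3])             # toy head entity embedding
l = np.array([0.5, -0.1, 0.0])            # toy relation embedding
t_true = h + l                            # a triple that holds exactly
t_false = t_true + np.array([1.0, 0.0, 0.0])
```

Training pushes the energy of observed triples below that of corrupted ones, typically with a margin-based ranking loss.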
Given the following machine learning model name: Spatial Gating Unit, provide a description of the model
**Spatial Gating Unit**, or **SGU**, is a gating unit used in the [gMLP](https://paperswithcode.com/method/gmlp) architecture to capture spatial interactions. To enable cross-token interactions, it is necessary for the layer $s(\cdot)$ to contain a contraction operation over the spatial dimension. The layer $s(\cdot)$...
Given the following machine learning model name: nlogistic-sigmoid function, provide a description of the model
The **nlogistic-sigmoid function (NLSIG)** is a logistic-sigmoid function definition for modelling growth (or decay) processes. It features two logistic metrics (YIR and XIR) for monitoring growth from a two-dimensional (x-y axis) perspective.
Given the following machine learning model name: DenseNAS-B, provide a description of the model
**DenseNAS-B** is a mobile convolutional neural network discovered through the [DenseNAS](https://paperswithcode.com/method/densenas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method. The basic building block is MBConvs, or inverted bottleneck residuals, from the [Mobile...
Given the following machine learning model name: Sentence-BERT, provide a description of the model
Given the following machine learning model name: SAINT, provide a description of the model
**SAINT** is a hybrid deep learning approach to solving tabular data problems. SAINT performs attention over both rows and columns, and it includes an enhanced embedding method. The architecture, pre-training and training pipeline are as follows: - $L$ layers with 2 attention blocks each, one self-attention block, ...
Given the following machine learning model name: 1-Dimensional Convolutional Neural Networks, provide a description of the model
1D Convolutional Neural Networks are similar to the well-known and more established 2D Convolutional Neural Networks, and are used mainly on text and 1D signals.
Given the following machine learning model name: Early exiting using confidence measures, provide a description of the model
Exit whenever the model is confident enough, allowing early exiting from hidden layers.
Given the following machine learning model name: Self-Cure Network, provide a description of the model
**Self-Cure Network**, or **SCN**, is a method for suppressing uncertainties for large-scale facial expression recognition, preventing deep networks from overfitting uncertain facial images. Specifically, SCN suppresses the uncertainty from two different aspects: 1) a self-attention mechanism over mini-batch to weight e...
Given the following machine learning model name: Inception-A, provide a description of the model
**Inception-A** is an image model block used in the [Inception-v4](https://paperswithcode.com/method/inception-v4) architecture.
Given the following machine learning model name: ASLFeat, provide a description of the model
**ASLFeat** is a convolutional neural network for learning local features that uses deformable convolutional networks to densely estimate and apply local transformation. It also takes advantage of the inherent feature hierarchy to restore spatial resolution and low-level details for accurate keypoint localization. Fina...
Given the following machine learning model name: PixLoc, provide a description of the model
**PixLoc** is a scene-agnostic neural network that estimates an accurate 6-DoF pose from an image and a 3D model. It is based on the direct alignment of multiscale deep features, casting camera localization as metric learning. PixLoc learns strong data priors by end-to-end training from pixels to pose and exhibits exce...
Given the following machine learning model name: Tree-structured Parzen Estimator Approach (TPE), provide a description of the model
Given the following machine learning model name: Scattering Transform, provide a description of the model
A wavelet **scattering transform** computes a translation invariant representation, which is stable to deformation, using a deep [convolution](https://paperswithcode.com/method/convolution) network architecture. It computes non-linear invariants with modulus and averaging pooling functions. It helps to eliminate the im...
Given the following machine learning model name: Spectral Tensor Train Parameterization, provide a description of the model
Given the following machine learning model name: Dilated Sliding Window Attention, provide a description of the model
**Dilated Sliding Window Attention** is an attention pattern for attention-based models. It was proposed as part of the [Longformer](https://paperswithcode.com/method/longformer) architecture. It is motivated by the fact that non-sparse attention in the original [Transformer](https://paperswithcode.com/method/transform...
Given the following machine learning model name: GPT-3, provide a description of the model
**GPT-3** is an autoregressive [transformer](https://paperswithcode.com/methods/category/transformers) model with 175 billion parameters. It uses the same architecture/model as [GPT-2](https://paperswithcode.com/method/gpt-2), including the modified initialization, pre-normalization, and reversible tokenization, with...
Given the following machine learning model name: MPNet, provide a description of the model
**MPNet** is a pre-training method for language models that combines masked language modeling (MLM) and permuted language modeling (PLM) in one view. It takes the dependency among the predicted tokens into consideration through permuted language modeling and thus avoids the issue of [BERT](https://paperswithcode.com/me...
Given the following machine learning model name: Base Boosting, provide a description of the model
In the setting of multi-target regression, base boosting permits us to incorporate prior knowledge into the learning mechanism of gradient boosting (or Newton boosting, etc.). Namely, from the vantage of statistics, base boosting is a way of building the following additive expansion in a set of elementary basis functio...
Given the following machine learning model name: Playstyle Distance, provide a description of the model
This method proposes first discretizing observations and calculating the action distribution distance under comparable cases (intersection states).
Given the following machine learning model name: Fawkes, provide a description of the model
**Fawkes** is an image cloaking system that helps individuals inoculate their images against unauthorized facial recognition models. Fawkes achieves this by helping users add imperceptible pixel-level changes ("cloaks") to their own photos before releasing them. When used to train facial recognition models, these "cloa...
Given the following machine learning model name: Grid R-CNN, provide a description of the model
**Grid R-CNN** is an object detection framework, where the traditional regression formulation is replaced by a grid point guided localization mechanism. Grid R-CNN divides the object bounding box region into grids and employs a fully convolutional network ([FCN](https://paperswithcode.com/method/fcn)) to predict th...
Given the following machine learning model name: Universal Transformer, provide a description of the model
The **Universal Transformer** is a generalization of the [Transformer](https://paperswithcode.com/method/transformer) architecture. Universal Transformers combine the parallelizability and global receptive field of feed-forward sequence models like the Transformer with the recurrent inductive bias of [RNNs](https://pap...
Given the following machine learning model name: QHAdam, provide a description of the model
The **Quasi-Hyperbolic Momentum Algorithm (QHM)** is a simple alteration of [momentum SGD](https://paperswithcode.com/method/sgd-with-momentum), averaging a plain [SGD](https://paperswithcode.com/method/sgd) step with a momentum step. **QHAdam** is a QH augmented version of [Adam](https://paperswithcode.com/method/adam...
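The underlying QHM update (the momentum half of QHAdam) can be sketched on a toy quadratic; the hyperparameters below are illustrative, not recommended settings:

```python
import numpy as np

def qhm_step(theta, grad, buf, lr=0.1, beta=0.9, nu=0.7):
    """One QHM update: average a plain SGD step (weight 1-nu) with a
    momentum step (weight nu). nu=0 recovers SGD; nu=1 recovers momentum."""
    buf = beta * buf + (1 - beta) * grad              # EMA of gradients
    theta = theta - lr * ((1 - nu) * grad + nu * buf)
    return theta, buf

# Minimize f(x) = x^2 from x = 5; the gradient is 2x
theta, buf = 5.0, 0.0
for _ in range(100):
    theta, buf = qhm_step(theta, 2 * theta, buf)
```

QHAdam applies the same quasi-hyperbolic averaging idea to both the first- and second-moment estimates inside Adam.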
Given the following machine learning model name: Vokenization, provide a description of the model
**Vokenization** is an approach for extrapolating multimodal alignments to language-only data by contextually mapping language tokens to their related images ("vokens") by retrieval. Instead of directly supervising the language model with visually grounded language datasets (e.g., MS COCO) these relative small datasets...
Given the following machine learning model name: ACER, provide a description of the model
**ACER**, or **Actor Critic with Experience Replay**, is an actor-critic deep reinforcement learning agent with [experience replay](https://paperswithcode.com/method/experience-replay). It can be seen as an off-policy extension of [A3C](https://paperswithcode.com/method/a3c), where the off-policy estimator is made feas...
Given the following machine learning model name: Visual Geometry Group 19 Layer CNN, provide a description of the model
Given the following machine learning model name: Side-Aware Boundary Localization, provide a description of the model
**Side-Aware Boundary Localization (SABL)** is a methodology for precise localization in object detection where each side of the bounding box is respectively localized with a dedicated network branch. Empirically, the authors observe that when they manually annotate a bounding box for an object, it is often much easier...
Given the following machine learning model name: Adaptively Sparse Transformer, provide a description of the model
The **Adaptively Sparse Transformer** is a type of [Transformer](https://paperswithcode.com/method/transformer).
Given the following machine learning model name: EfficientDet, provide a description of the model
**EfficientDet** is a type of object detection model, which utilizes several optimization and backbone tweaks, such as the use of a [BiFPN](https://paperswithcode.com/method/bifpn), and a compound scaling method that uniformly scales the resolution, depth, and width for all backbones, feature networks and box/class predi...
Given the following machine learning model name: Attention Feature Filters, provide a description of the model
An attention mechanism for content-based filtering of multi-level features. For example, recurrent features obtained by forward and backward passes of a bidirectional RNN block can be combined using attention feature filters, with unprocessed input features/embeddings as queries and recurrent features as keys/values.
Given the following machine learning model name: StyleALAE, provide a description of the model
**StyleALAE** is a type of [adversarial latent autoencoder](https://paperswithcode.com/method/alae) that uses a [StyleGAN](https://paperswithcode.com/method/stylegan) based generator. For this the latent space $\mathcal{W}$ plays the same role as the intermediate latent space in [StyleGAN](https://paperswithcode.com/me...
Given the following machine learning model name: Momentumized, adaptive, dual averaged gradient, provide a description of the model
The MADGRAD method contains a series of modifications to the [AdaGrad](https://paperswithcode.com/method/adagrad)-DA method to improve its performance on deep learning optimization problems. It gives state-of-the-art generalization performance across a diverse set of problems, including those that [Adam](https://papers...
Given the following machine learning model name: Deep Graph Convolutional Neural Network, provide a description of the model
DGCNN involves neural networks that read graphs directly and learn a classification function. There are two main challenges: 1) how to extract useful features characterizing the rich information encoded in a graph for classification purposes, and 2) how to sequentially read a graph in a meaningful and consistent ord...
Given the following machine learning model name: ViP-DeepLab, provide a description of the model
**ViP-DeepLab** is a model for depth-aware video panoptic segmentation. It extends Panoptic-[DeepLab](https://paperswithcode.com/method/deeplab) by adding a depth prediction head to perform monocular depth estimation and a next-frame instance branch which regresses to the object centers in frame $t$ for frame $t + 1$. ...
Given the following machine learning model name: FCOS, provide a description of the model
**FCOS** is an anchor-box free, proposal free, single-stage object detection model. By eliminating the predefined set of anchor boxes, FCOS avoids computation related to anchor boxes such as calculating overlapping during training. It also avoids all hyper-parameters related to anchor boxes, which are often very sensit...
Given the following machine learning model name: Boom Layer, provide a description of the model
A **Boom Layer** is a type of feedforward layer that is closely related to the feedforward layers used in Transformers. The layer takes a vector of the form $v \in \mathbb{R}^{H}$ and uses a matrix multiplication with a GeLU activation to produce a vector $u \in \mathbb{R}^{N\times{H}}$. We then break $u$ into $N$ vec...
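A hedged sketch of the project-then-sum operation (toy sizes; the tanh-based approximation stands in for the exact GeLU, and the weight matrix is a random placeholder):

```python
import numpy as np

def gelu(x):
    # Common tanh approximation of the GeLU activation
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def boom(v, W, N):
    """Boom layer sketch: project H -> N*H with a GeLU, then break the
    result into N vectors and sum them back down to H, avoiding the
    second large projection matrix of a standard Transformer FFN."""
    u = gelu(W @ v)                        # shape (N*H,)
    return u.reshape(N, -1).sum(axis=0)    # sum the N chunks back to (H,)

H, N = 8, 4
rng = np.random.default_rng(0)
out = boom(rng.normal(size=H), rng.normal(size=(N * H, H)), N)
```

Summing the chunks instead of applying a second $NH \times H$ matrix is what saves parameters relative to the usual two-matrix feedforward block.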
Given the following machine learning model name: The Ikshana Hypothesis of Human Scene Understanding Mechanism, provide a description of the model
Given the following machine learning model name: Object Dropout, provide a description of the model
Object Dropout is a technique that perturbs object features in an image for [noisy student](https://paperswithcode.com/method/noisy-student) training. It performs at par with standard data augmentation techniques while being significantly faster than the latter to implement.
Given the following machine learning model name: Diffusion-Convolutional Neural Networks, provide a description of the model
Diffusion-convolutional neural networks (DCNNs) are a model for graph-structured data. Through the introduction of a diffusion-convolution operation, diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. Description and image from: [Diffusion-...
Given the following machine learning model name: Stacked Denoising Autoencoder, provide a description of the model
The Stacked Denoising Autoencoder (SdA) is an extension of the stacked autoencoder [Bengio07] and it was introduced in [Vincent08]. Denoising autoencoders can be stacked to form a deep network by feeding the latent representation (output code) of the [denoising autoencoder](https://paperswithcode.com/method/denoisin...
Given the following machine learning model name: Meta Face Recognition, provide a description of the model
**Meta Face Recognition** (MFR) is a meta-learning face recognition method. MFR synthesizes the source/target domain shift with a meta-optimization objective, which requires the model to learn effective representations not only on synthesized source domains but also on synthesized target domains. Specifically, domain-s...
Given the following machine learning model name: Sinkhorn Transformer, provide a description of the model
The **Sinkhorn Transformer** is a type of [transformer](https://paperswithcode.com/method/transformer) that uses [Sparse Sinkhorn Attention](https://paperswithcode.com/method/sparse-sinkhorn-attention) as a building block. This component is a plug-in replacement for dense fully-connected attention (as well as local att...
Given the following machine learning model name: Window-based Discriminator, provide a description of the model
A **Window-based Discriminator** is a type of discriminator for generative adversarial networks. It is analogous to a [PatchGAN](https://paperswithcode.com/method/patchgan) but designed for audio. While a standard [GAN](https://paperswithcode.com/method/gan) discriminator learns to classify between distributions of ent...
Given the following machine learning model name: Powerpropagation, provide a description of the model
**Powerpropagation** is a weight-parameterisation for neural networks that leads to inherently sparse models. Exploiting the behaviour of gradient descent, it gives rise to weight updates exhibiting a “rich get richer” dynamic, leaving low-magnitude parameters largely unaffected by learning. In other words, parameters w...
Given the following machine learning model name: Scaled Dot-Product Attention, provide a description of the model
**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\sqrt{d_k}$. Formally we have a query $Q$, a key $K$ and a value $V$ and calculate the attention as: $$ {\text{Attention}}(Q, K, V) = \text{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V $$ If we assume that $q$...
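The attention formula maps directly onto NumPy (toy shapes; `Q`, `K`, `V` are random stand-ins for learned projections):

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # scaling keeps the softmax out of saturation
    return softmax(scores) @ V        # weighted sum of the values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
```

Without the $\sqrt{d_k}$ scaling, the dot products grow with the key dimension and push the softmax into regions with vanishing gradients, which is the motivation given in the description above.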
Given the following machine learning model name: DVD-GAN DBlock, provide a description of the model
**DVD-GAN DBlock** is a residual block for the discriminator used in the [DVD-GAN](https://paperswithcode.com/method/dvd-gan) architecture for video generation. Unlike regular [residual blocks](https://paperswithcode.com/method/residual-block), [3D convolutions](https://paperswithcode.com/method/3d-convolution) are emp...
Given the following machine learning model name: DeltaConv, provide a description of the model
Anisotropic convolution is a central building block of CNNs but challenging to transfer to surfaces. DeltaConv learns combinations and compositions of operators from vector calculus, which are a natural fit for curved surfaces. The result is a simple and robust anisotropic convolution operator for point clouds with sta...
Given the following machine learning model name: Temporally Consistent Spatial Augmentation, provide a description of the model
**Temporally Consistent Spatial Augmentation** is a video data augmentation technique used for contrastive learning in the [Contrastive Video Representation Learning](https://paperswithcode.com/method/cvrl) framework. It fixes the randomness of spatial augmentation across frames; this prevents spatial augmentation hurt...
Given the following machine learning model name: Recurrent Replay Distributed DQN, provide a description of the model
Building on the recent successes of distributed training of RL agents, R2D2 is an RL approach that trains RNN-based RL agents from distributed prioritized experience replay. Using a single network architecture and a fixed set of hyperparameters, Recurrent Replay Distributed DQN quadrupled the previous state of the ar...
Given the following machine learning model name: DropAttack, provide a description of the model
**DropAttack** is an adversarial training method that adds intentionally worst-case adversarial perturbations to both the input and hidden layers in different dimensions and minimizes the adversarial risks generated by each layer.
Given the following machine learning model name: CSPDenseNet-Elastic, provide a description of the model
**CSPDenseNet-Elastic** is a convolutional neural network and object detection backbone where we apply the Cross Stage Partial Network (CSPNet) approach to [DenseNet-Elastic](https://paperswithcode.com/method/densenet-elastic). The CSPNet partitions the feature map of the base layer into two parts and then merges them ...
Given the following machine learning model name: Deep Stereo Geometry Network, provide a description of the model
**Deep Stereo Geometry Network** is a 3D object detection pipeline that relies on space transformation from 2D features to an effective 3D structure, called 3D geometric volume (3DGV). The whole neural network consists of four components. (a) A 2D image feature extractor for capture of both pixel- and high-level featu...
Given the following machine learning model name: Cross-resolution features, provide a description of the model
Given the following machine learning model name: Dimension-wise Fusion, provide a description of the model
**Dimension-wise Fusion** is an image model block that attempts to capture global information by combining features globally. It is an alternative to point-wise [convolution](https://paperswithcode.com/method/convolution). A point-wise convolutional layer applies $D$ point-wise kernels $\mathbf{k}\_p \in \mathbb{R}^{3D...
Given the following machine learning model name: DELU, provide a description of the model
**DELU** is an activation function with trainable parameters; it combines linear and exponential functions in the positive dimension and uses the **[SiLU](https://paperswithcode.com/method/silu)** in the negative dimension. $$DELU(x) = SiLU(x), x \leqslant 0$$ $$DELU(x) = (n + 0.5)x + |e^{-x} - ...
Given the following machine learning model name: Semi-Pseudo-Label, provide a description of the model
Given the following machine learning model name: LAPGAN, provide a description of the model
A **LAPGAN**, or **Laplacian Generative Adversarial Network**, is a type of generative adversarial network that has a [Laplacian pyramid](https://paperswithcode.com/method/laplacian-pyramid) representation. In the sampling procedure following training, we have a set of generative convnet models {$G\_{0}, \dots , G\_{K}...
Given the following machine learning model name: NVAE Encoder Residual Cell, provide a description of the model
The **NVAE Encoder Residual Cell** is a [residual connection](https://paperswithcode.com/method/residual-connection) block used in the [NVAE](https://paperswithcode.com/method/nvae) architecture for the encoder. It applies two series of BN-[Swish](https://paperswithcode.com/method/swish)-Conv layers without changing th...
Given the following machine learning model name: AlphaFold, provide a description of the model
AlphaFold is a deep-learning-based algorithm for accurate protein structure prediction. AlphaFold incorporates physical and biological knowledge about protein structure, as well as multi-sequence alignments, into the design of the deep learning algorithm. Description from: [Highly accurate protein structure predicti...
Given the following machine learning model name: CutBlur, provide a description of the model
**CutBlur** is a data augmentation method designed specifically for low-level vision tasks. It cuts a low-resolution patch and pastes it into the corresponding high-resolution image region, and vice versa. The key intuition of CutBlur is to enable a model to learn not only "how" but also "where" to super-resol...
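A minimal sketch of the patch swap (toy arrays and a fixed patch location for clarity; the method samples the patch location and size randomly, and assumes the low-resolution image has already been upsampled to the high-resolution shape):

```python
import numpy as np

def cutblur(hr, lr_up, top, left, ph, pw, swap=False):
    """Swap one rectangular region between an HR image and its
    upsampled LR counterpart, mixing resolutions inside one image."""
    if swap:  # paste an HR patch into the LR image
        out = lr_up.copy()
        out[top:top + ph, left:left + pw] = hr[top:top + ph, left:left + pw]
    else:     # paste an LR patch into the HR image
        out = hr.copy()
        out[top:top + ph, left:left + pw] = lr_up[top:top + ph, left:left + pw]
    return out

hr = np.ones((8, 8), dtype=np.float32)     # stands in for a sharp HR image
lr = np.zeros((8, 8), dtype=np.float32)    # stands in for an upsampled LR image
mixed = cutblur(hr, lr, top=2, left=2, ph=4, pw=4)
assert mixed[0, 0] == 1.0 and mixed[3, 3] == 0.0
```

Because content is identical on both sides and only the resolution changes inside the patch, the model must locate *where* degradation occurs, not just learn a global mapping.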
Given the following machine learning model name: RandAugment, provide a description of the model
**RandAugment** is an automated data augmentation method. The search space for data augmentation has two interpretable hyperparameters, $N$ and $M$: $N$ is the number of augmentation transformations to apply sequentially, and $M$ is the magnitude for all the transformations. To reduce the parameter space but still maintai...
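A rough sketch of how $N$ and $M$ drive the policy (the transformation pool here is made of toy stand-in operations, not the ~14 real image operations such as rotate or shear):

```python
import numpy as np

# Toy stand-ins for real image operations; each takes an image and
# a shared magnitude M (illustrative scaling, not the paper's ranges).
def brighten(img, m):
    return img + 0.1 * m

def contrast(img, m):
    return img * (1.0 + 0.1 * m)

def identity(img, m):
    return img

OPS = [brighten, contrast, identity]

def rand_augment(img, n, m, rng):
    """Apply N transformations drawn uniformly from OPS, all at magnitude M."""
    for i in rng.integers(0, len(OPS), size=n):
        img = OPS[i](img, m)
    return img

img = np.zeros((4, 4), dtype=np.float32)
out = rand_augment(img, n=2, m=9, rng=np.random.default_rng(0))
assert out.shape == (4, 4)
```

The whole search space collapses to a grid over $(N, M)$, which is small enough to tune by simple grid search.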
Given the following machine learning model name: Gated Convolution Network, provide a description of the model
A **Gated Convolutional Network** is a type of language model that combines convolutional networks with a gating mechanism. Zero padding is used to ensure that future context cannot be seen. Gated convolutional layers can be stacked on top of one another hierarchically. Model predictions are then obtained with an [adaptive softm...
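A minimal sketch of one gated convolutional layer with causal zero-padding, written for toy 1-D sequences with hand-picked weights (real layers operate on embedding matrices with many channels):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_conv1d(x, w, v, width):
    """One gated layer: h = (x * w) elementwise-times sigmoid(x * v).

    `x` is a (T,) sequence. Zero-padding on the left makes the
    convolution causal, so position t never sees positions > t.
    """
    padded = np.concatenate([np.zeros(width - 1), x])
    a = np.array([padded[t:t + width] @ w for t in range(len(x))])
    b = np.array([padded[t:t + width] @ v for t in range(len(x))])
    return a * sigmoid(b)   # the gate controls what flows upward

x = np.array([1.0, 2.0, 3.0, 4.0])
h = gated_conv1d(x, w=np.array([0.5, 0.5]), v=np.array([1.0, -1.0]), width=2)
assert h.shape == x.shape
```

Changing a future token leaves earlier outputs untouched, which is exactly the property the zero-padding is there to guarantee.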
Given the following machine learning model name: Darknet-19, provide a description of the model
**Darknet-19** is a convolutional neural network that is used as the backbone of [YOLOv2](https://paperswithcode.com/method/yolov2). Similar to the [VGG](https://paperswithcode.com/method/vgg) models it mostly uses $3 \times 3$ filters and doubles the number of channels after every pooling step. Following the work on ...
Given the following machine learning model name: DistanceNet, provide a description of the model
**DistanceNet** is a learning algorithm for multi-source domain adaptation that uses various distance measures, or a mixture of these distance measures, as an additional loss function to be minimized jointly with the task's loss function, so as to achieve better unsupervised domain adaptation.
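A hedged sketch of the joint objective, using a simple squared distance between feature means as a stand-in for the distance measures the method can mix (names here are illustrative, not from the paper):

```python
import numpy as np

def mean_distance(src, tgt):
    """Squared distance between feature means -- a toy stand-in for
    richer domain-distance measures such as MMD or CORAL."""
    return float(np.sum((src.mean(axis=0) - tgt.mean(axis=0)) ** 2))

def joint_loss(task_loss, src_feats, tgt_feats, lam=0.1):
    """Task loss plus a weighted domain-distance penalty, minimized jointly."""
    return task_loss + lam * mean_distance(src_feats, tgt_feats)

src = np.array([[0.0, 0.0], [2.0, 2.0]])   # source-domain features
tgt = np.array([[1.0, 1.0], [1.0, 1.0]])   # target-domain features
assert mean_distance(src, tgt) == 0.0      # the two means coincide here
```

Minimizing the distance term pulls the source and target feature distributions together while the task loss keeps the features discriminative.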
Given the following machine learning model name: Restricted Boltzmann Machine, provide a description of the model
**Restricted Boltzmann Machines**, or **RBMs**, are two-layer generative neural networks that learn a probability distribution over their inputs. They are a special class of Boltzmann Machine in that their connections are restricted to run only between visible and hidden units; there are no connections within a layer. Every node in the visible layer is connecte...
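A minimal sketch of one block-Gibbs sampling step, which the bipartite structure makes possible (toy sizes and random weights; training with contrastive divergence is omitted):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b_h, b_v, rng):
    """One block-Gibbs step in an RBM.

    Because there are no intra-layer connections, all hidden units are
    conditionally independent given v and can be sampled in parallel,
    and likewise for the visible units given h.
    """
    p_h = sigmoid(v @ W + b_h)                    # P(h = 1 | v)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    p_v = sigmoid(h @ W.T + b_v)                  # P(v = 1 | h)
    v_new = (rng.random(p_v.shape) < p_v).astype(float)
    return v_new, h

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 3))            # 4 visible, 3 hidden units
v0 = np.array([1.0, 0.0, 1.0, 0.0])
v1, h = gibbs_step(v0, W, np.zeros(3), np.zeros(4), rng)
assert v1.shape == (4,) and h.shape == (3,)
```

Without the bipartite restriction, each unit's conditional distribution would depend on its neighbors in the same layer and this parallel sampling would not be valid.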
Given the following machine learning model name: AdvProp, provide a description of the model
**AdvProp** is an adversarial training scheme which treats adversarial examples as additional training examples, to prevent overfitting. Key to the method is the use of a separate auxiliary batch norm for adversarial examples, as they have different underlying distributions from normal examples.
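A toy sketch of the separate-statistics idea (plain batch normalization without the running averages or learned affine parameters that a real implementation would keep per branch):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize a mini-batch with its own mean and variance."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def advprop_forward(clean, adversarial):
    """AdvProp-style forward pass: the clean and adversarial
    mini-batches are normalized with *separate* batch-norm statistics,
    since they come from different underlying distributions."""
    return batch_norm(clean), batch_norm(adversarial)

clean = np.array([[0.0], [2.0]])
adv = np.array([[10.0], [14.0]])   # adversarial activations on a different scale
c, a = advprop_forward(clean, adv)
# Each branch comes out zero-mean despite the very different input scales;
# a shared batch norm would mix the two distributions' statistics.
```

At test time only the main batch norm (the one fed clean examples) is used.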
Given the following machine learning model name: L1 Regularization, provide a description of the model
**$L_{1}$ Regularization** is a regularization technique applied to the weights of a neural network. We minimize a loss function comprising both the primary loss function and a penalty on the $L\_{1}$ Norm of the weights: $$L\_{new}\left(w\right) = L\_{original}\left(w\right) + \lambda{||w||}\_{1}$$ where $\lam...
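The penalty is straightforward to compute; a small sketch with illustrative numbers:

```python
import numpy as np

def l1_regularized_loss(original_loss, w, lam):
    """L_new(w) = L_original(w) + lambda * ||w||_1."""
    return original_loss + lam * np.sum(np.abs(w))

w = np.array([0.5, -1.5, 0.0, 2.0])
loss = l1_regularized_loss(1.0, w, lam=0.1)
assert np.isclose(loss, 1.0 + 0.1 * 4.0)   # ||w||_1 = 4.0
```

Because the penalty grows linearly in each weight's magnitude, its gradient is constant away from zero, which is what pushes many weights exactly to zero and yields sparse solutions.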