Given the following machine learning model name: CheXNet, provide a description of the model
**CheXNet** is a 121-layer [DenseNet](https://paperswithcode.com/method/densenet) trained on ChestX-ray14 for pneumonia detection.
Given the following machine learning model name: Drafting Network, provide a description of the model
**Drafting Network** is a style transfer module designed to transfer global style patterns in low-resolution, since global patterns can be transferred easier in low resolution due to larger receptive field and less local details. To achieve single style transfer, earlier work trained an encoder-decoder module, where on...
Given the following machine learning model name: Demon CM, provide a description of the model
**Demon CM**, or **SGD with Momentum and Demon**, is the [Demon](https://paperswithcode.com/method/demon) momentum rule applied to [SGD with momentum](https://paperswithcode.com/method/sgd-with-momentum). $$ \beta\_{t} = \beta\_{init}\cdot\frac{\left(1-\frac{t}{T}\right)}{\left(1-\beta\_{init}\right) + \beta\_{init...
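The decay rule above is truncated; a minimal sketch, assuming the standard Demon schedule $\beta\_{t} = \beta\_{init}\cdot\frac{1-t/T}{\left(1-\beta\_{init}\right) + \beta\_{init}\left(1-\frac{t}{T}\right)}$ from the Demon paper (function name illustrative):

```python
def demon_beta(t, T, beta_init=0.9):
    """Demon momentum decay: beta decays from beta_init toward 0 as t -> T."""
    frac = 1.0 - t / T
    return beta_init * frac / ((1.0 - beta_init) + beta_init * frac)

# beta starts at beta_init, is still high mid-training, and hits 0 at t = T
betas = [demon_beta(t, 100) for t in (0, 50, 100)]
```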
Given the following machine learning model name: CTAB-GAN, provide a description of the model
**CTAB-GAN** is a model for conditional tabular data generation. The generator and discriminator utilize the [DCGAN](https://paperswithcode.com/method/dcgan) architecture. An [auxiliary classifier](https://paperswithcode.com/method/auxiliary-classifier) is also used with an MLP architecture.
Given the following machine learning model name: Directional Sparse Filtering, provide a description of the model
Given the following machine learning model name: Distributional Generalization, provide a description of the model
**Distributional Generalization** is a type of generalization that roughly states that outputs of a classifier at train and test time are close as distributions, as opposed to close in just their average error. This behavior is not captured by classical generalization, which would only consider the average error and no...
Given the following machine learning model name: ComplEx with N3 Regularizer and Relation Prediction Objective, provide a description of the model
The **ComplEx** model trained with a nuclear norm (N3) regularizer; a relation prediction objective is added on top of the commonly used 1vsAll objective.
Given the following machine learning model name: Mode Normalization, provide a description of the model
**Mode Normalization** extends normalization to more than a single mean and variance, allowing for detection of modes of data on-the-fly, jointly normalizing samples that share common features. It first assigns samples in a mini-batch to different modes via a gating network, and then normalizes each sample with estimat...
Given the following machine learning model name: CayleyNet, provide a description of the model
The core ingredient of **CayleyNet** is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest. The model generates rich spectral filters that are localized in space, scales linearly with the siz...
Given the following machine learning model name: Self-Supervised Motion Disentanglement, provide a description of the model
A self-supervised learning method to disentangle irregular (anomalous) motion from regular motion in unlabeled videos.
Given the following machine learning model name: AutoSmart, provide a description of the model
**AutoSmart** is an AutoML framework for temporal relational data. The framework includes automatic data processing, table merging, feature engineering, and model tuning, integrated with a time and memory control unit.
Given the following machine learning model name: Asynchronous Proximal Policy Optimization, provide a description of the model
Given the following machine learning model name: Inception-v3 Module, provide a description of the model
**Inception-v3 Module** is an image block used in the [Inception-v3](https://paperswithcode.com/method/inception-v3) architecture. This architecture is used on the coarsest (8 × 8) grids to promote high dimensional representations.
Given the following machine learning model name: Deformable RoI Pooling, provide a description of the model
**Deformable RoI Pooling** adds an offset to each bin position in the regular bin partition of the RoI Pooling. Similarly, the offsets are learned from the preceding feature maps and the RoIs, enabling adaptive part localization for objects with different shapes.
Given the following machine learning model name: Recurrent models of visual attention, provide a description of the model
**RAM** (Recurrent Attention Model) adopts RNNs and reinforcement learning (RL) to make the network learn where to pay attention.
Given the following machine learning model name: Depth-wise Plane Sweeping, provide a description of the model
Given the following machine learning model name: ResNet-D, provide a description of the model
**ResNet-D** is a modification on the [ResNet](https://paperswithcode.com/method/resnet) architecture that utilises an [average pooling](https://paperswithcode.com/method/average-pooling) tweak for downsampling. The motivation is that in the unmodified ResNet, the 1 × 1 [convolution](https://paperswithcode.com/method/c...
Given the following machine learning model name: RetinaNet-RS, provide a description of the model
**RetinaNet-RS** is an object detection model produced through a model scaling method based on changing the input resolution and [ResNet](https://paperswithcode.com/method/resnet) backbone depth. For [RetinaNet](https://paperswithcode.com/method/retinanet), we scale up input resolution from 512 to 768 and the ResNe...
Given the following machine learning model name: Twins-PCPVT, provide a description of the model
**Twins-PCPVT** is a type of [vision transformer](https://paperswithcode.com/methods/category/vision-transformer) that combines global attention, specifically the global sub-sampled attention as proposed in [Pyramid Vision Transformer](https://paperswithcode.com/method/pvt), with [conditional position encodings](https:...
Given the following machine learning model name: ScaleNet, provide a description of the model
**ScaleNet**, or a **Scale Aggregation Network**, is a type of convolutional neural network which learns a neuron allocation for aggregating multi-scale information in different building blocks of a deep network. The most informative output neurons in each block are preserved while others are discarded, and thus neuron...
Given the following machine learning model name: Bi3D, provide a description of the model
**Bi3D** is a stereo depth estimation framework that estimates depth via a series of binary classifications. Rather than testing if objects are at a particular depth *D*, as existing stereo methods do, it classifies them as being closer or farther than *D*. It takes the stereo pair and a disparity $d\_{i}$ and produces...
Given the following machine learning model name: mBERT, provide a description of the model
mBERT
Given the following machine learning model name: Accordion, provide a description of the model
**Accordion** is a gradient communication scheduling algorithm that is generic across models while imposing low computational overheads. Accordion inspects the change in the gradient norms to detect critical regimes and adjusts the communication schedule dynamically. Accordion works for both adjusting the gradient comp...
Given the following machine learning model name: Zoneout, provide a description of the model
**Zoneout** is a method for regularizing [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks). At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like [dropout](https://paperswithcode.com/method/dropout), zoneout uses random noise to train a ps...
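The per-timestep rule can be sketched in a few lines; this is an illustrative numpy version of the training-time behavior, with a hypothetical zoneout rate:

```python
import numpy as np

def zoneout(h_prev, h_new, rate=0.15, rng=np.random.default_rng(0)):
    """Zoneout at one timestep: each hidden unit keeps its previous
    value with probability `rate`, otherwise takes the new value."""
    keep = rng.random(h_prev.shape) < rate  # True -> preserve previous value
    return np.where(keep, h_prev, h_new)

h_prev = np.zeros(8)
h_new = np.ones(8)
h = zoneout(h_prev, h_new, rate=0.5)  # mix of preserved zeros and new ones
```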
Given the following machine learning model name: Nouveau VAE, provide a description of the model
**NVAE**, or **Nouveau VAE**, is a deep, hierarchical variational autoencoder. It can be trained with the original [VAE](https://paperswithcode.com/method/vae) objective, unlike alternatives such as [VQ-VAE-2](https://paperswithcode.com/method/vq-vae-2). NVAE’s design focuses on tackling two main challenges: (i) designin...
Given the following machine learning model name: DiffPool, provide a description of the model
DiffPool is a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DiffPool learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set ...
Given the following machine learning model name: SPP-Net, provide a description of the model
**SPP-Net** is a convolutional neural architecture that employs [spatial pyramid pooling](https://paperswithcode.com/method/spatial-pyramid-pooling) to remove the fixed-size constraint of the network. Specifically, we add an SPP layer on top of the last convolutional layer. The SPP layer pools the features and generate...
Given the following machine learning model name: MLP-Mixer Layer, provide a description of the model
A Mixer layer is a layer used in the MLP-Mixer architecture proposed by Tolstikhin et al. (2021) for computer vision. Mixer layers consist purely of MLPs, without convolutions or attention. It takes an input of embedded image patches (tokens), with its output having the same shape as its input, similar to that of a Vis...
Given the following machine learning model name: Explanation vs Attention: A Two-Player Game to Obtain Attention for VQA, provide a description of the model
In this paper, we aim to obtain improved attention for a visual question answering (VQA) task. It is challenging to provide supervision for attention. An observation we make is that visual explanations as obtained through class activation mappings (specifically Grad-[CAM](https://paperswithcode.com/method/cam)) that ar...
Given the following machine learning model name: Conditional Position Encoding Vision Transformer, provide a description of the model
**CPVT**, or **Conditional Position Encoding Vision Transformer**, is a type of [vision transformer](https://paperswithcode.com/methods/category/vision-transformer) which utilizes [conditional positional encoding](https://paperswithcode.com/method/conditional-positional-encoding). Other than the new encodings, it follo...
Given the following machine learning model name: DeepDrug, provide a description of the model
**DeepDrug** is a deep learning framework that uses graph convolutional networks to learn graphical representations of drugs and proteins, such as molecular fingerprints and residual structures, in order to boost prediction accuracy.
Given the following machine learning model name: Hit-Detector, provide a description of the model
**Hit-Detector** is a neural architectures search algorithm that simultaneously searches all components of an object detector in an end-to-end manner. It is a hierarchical approach to mine the proper subsearch space from the large volume of operation candidates. It consists of two main procedures. First, given a large ...
Given the following machine learning model name: PRNet+, provide a description of the model
**PRNet+** is a multi-task neural network for outdoor position recovery from measurement record (MR) data. PRNet+ develops a feature extraction module to learn common local-, short- and long-term spatio-temporal locality from heterogeneous MR samples, with a convolutional neural network (CNN), long short-term memory ce...
Given the following machine learning model name: FLAVA, provide a description of the model
FLAVA aims at building a single holistic universal model that targets all modalities at once. FLAVA is a language-vision alignment model that learns strong representations from multimodal data (image-text pairs) and unimodal data (unpaired images and text). The model consists of an image encoder transformer to capture u...
Given the following machine learning model name: BLOOM, provide a description of the model
**BLOOM** is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total).
Given the following machine learning model name: 3D Convolution, provide a description of the model
A **3D Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) where the kernel slides in 3 dimensions as opposed to 2 dimensions with 2D convolutions. One example use case is medical imaging where a model is constructed using 3D image slices. Additionally video based data has an additio...
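A minimal numpy sketch of a valid single-channel 3D convolution (written as cross-correlation, as most deep-learning frameworks do), purely to show the kernel sliding along all three dimensions:

```python
import numpy as np

def conv3d_single(volume, kernel):
    """Valid 3D cross-correlation of a single-channel volume
    with a 3D kernel, sliding along depth, height, and width."""
    D, H, W = volume.shape
    kd, kh, kw = kernel.shape
    out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(volume[z:z+kd, y:y+kh, x:x+kw] * kernel)
    return out

vol = np.ones((5, 5, 5))
out = conv3d_single(vol, np.ones((3, 3, 3)))  # each output sums a 3x3x3 block
```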
Given the following machine learning model name: Fast-YOLOv4-SmallObj, provide a description of the model
The Fast-YOLOv4-SmallObj model is a modified version of Fast-[YOLOv4](https://paperswithcode.com/method/yolov4) to improve the detection of small objects. Seven layers were added so that it predicts bounding boxes at 3 different scales instead of 2.
Given the following machine learning model name: Segmentation Transformer, provide a description of the model
**Segmentation Transformer**, or **SETR**, is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based segmentation model. The transformer-alone encoder treats an input image as a sequence of image patches represented by learned patch embedding, and transforms the sequence with global self-attent...
Given the following machine learning model name: Residual SRM, provide a description of the model
A **Residual SRM** is a module for convolutional neural networks that uses a [Style-based Recalibration Module](https://paperswithcode.com/method/style-based-recalibration-module) within a [residual block](https://paperswithcode.com/method/residual-block) like structure. The Style-based Recalibration Module (SRM) adapt...
Given the following machine learning model name: RMSProp, provide a description of the model
**RMSProp** is an unpublished adaptive learning rate optimizer [proposed by Geoff Hinton](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf). The motivation is that the magnitude of gradients can differ for different weights, and can change during learning, making it hard to choose a single global...
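The update divides the learning rate by a running root-mean-square of recent gradients; a minimal numpy sketch (hyperparameters are the commonly cited defaults, not mandated by the source):

```python
import numpy as np

def rmsprop_step(theta, grad, sq_avg, lr=0.01, decay=0.9, eps=1e-8):
    """One RMSProp update: keep an exponential moving average of squared
    gradients and scale the step by its square root."""
    sq_avg = decay * sq_avg + (1 - decay) * grad**2
    theta = theta - lr * grad / (np.sqrt(sq_avg) + eps)
    return theta, sq_avg

# minimize f(x) = x^2 starting from x = 5
x, sq = np.array([5.0]), np.zeros(1)
for _ in range(200):
    x, sq = rmsprop_step(x, 2 * x, sq, lr=0.05)
```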
Given the following machine learning model name: Factorized Dense Synthesized Attention, provide a description of the model
**Factorized Dense Synthesized Attention** is a synthesized attention mechanism, similar to [dense synthesized attention](https://paperswithcode.com/method/dense-synthesized-attention), but we factorize the outputs to reduce parameters and prevent overfitting. It was proposed as part of the [Synthesizer](https://papers...
Given the following machine learning model name: Pipelined Backpropagation, provide a description of the model
**Pipelined Backpropagation** is an asynchronous pipeline parallel training algorithm. It was first introduced by Petrowski et al (1993). It avoids fill and drain overhead by updating the weights without draining the pipeline first. This results in weight inconsistency, the use of different weights on the forward and b...
Given the following machine learning model name: Nyströmformer, provide a description of the model
Nyströmformer replaces the self-attention in [BERT](https://paperswithcode.com/method/bert)-small and BERT-base using the proposed Nyström approximation. This reduces self-attention complexity to $O(n)$ and allows the [Transformer](https://paperswithcode.com/method/transformer) to support longer sequences.
Given the following machine learning model name: UNet Transformer, provide a description of the model
**UNETR**, or **UNet Transformer**, is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based architecture for [medical image segmentation](https://paperswithcode.com/task/medical-image-segmentation) that utilizes a pure [transformer](https://paperswithcode.com/method/transformer) as the encode...
Given the following machine learning model name: Multiplicative RNN, provide a description of the model
A **Multiplicative RNN (mRNN)** is a type of recurrent neural network with multiplicative connections. In a standard RNN, the current input $x\_{t}$ is first transformed via the visible-to-hidden weight matrix $W\_{hx}$ and then contributes additively to the input for the current hidden state. An mRNN allows the curren...
Given the following machine learning model name: Galactica, provide a description of the model
Galactica is a language model which uses a Transformer architecture in a decoder-only setup with the following modifications:

- It uses GeLU activations on all model sizes
- It uses a 2048 length context window for all model sizes
- It does not use biases in any of the dense kernels or layer norms
- It uses learn...
Given the following machine learning model name: Deep LSTM Reader, provide a description of the model
The **Deep LSTM Reader** is a neural network for reading comprehension. We feed documents one word at a time into a Deep [LSTM](https://paperswithcode.com/method/lstm) encoder, after a delimiter we then also feed the query into the encoder. The model therefore processes each document query pair as a single long sequenc...
Given the following machine learning model name: Generalized ELBO with Constrained Optimization, provide a description of the model
Given the following machine learning model name: Invertible 1x1 Convolution, provide a description of the model
The **Invertible 1x1 Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) used in flow-based generative models that reverses the ordering of channels. The weight matrix is initialized as a random rotation matrix. The log-determinant of an invertible 1 × 1 convolution of a $h \times w ...
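The two facts above (random-rotation initialization, and a log-determinant that scales with the spatial size) can be checked numerically; an illustrative numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, c = 4, 4, 3

# initialize the 1x1 conv weight as a random rotation (orthogonal) matrix
W, _ = np.linalg.qr(rng.normal(size=(c, c)))

# log-determinant of the 1x1 convolution over an h x w feature map:
# h * w * log|det W|
sign, logabsdet = np.linalg.slogdet(W)
conv_logdet = h * w * logabsdet

# a rotation matrix has |det W| = 1, so the log-determinant is ~0 at init
```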
Given the following machine learning model name: PointASNL, provide a description of the model
**PointASNL** is a non-local neural network for point clouds processing It consists of two general modules: adaptive sampling (AS) module and local-Nonlocal (L-NL) module. The AS module first re-weights the neighbors around the initial sampled points from farthest point sampling (FPS), and then adaptively adjusts the s...
Given the following machine learning model name: Kaiming Initialization, provide a description of the model
**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations. A proper initialization method should avoid reducing or magnifying the magnitude...
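For a ReLU layer, the resulting rule draws weights with standard deviation $\sqrt{2 / \text{fan\_in}}$; a minimal numpy sketch:

```python
import numpy as np

def kaiming_normal(fan_in, fan_out, rng=np.random.default_rng(0)):
    """He initialization for ReLU layers: std = sqrt(2 / fan_in),
    which keeps activation variance roughly constant through ReLUs."""
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

W = kaiming_normal(512, 256)  # empirical std should be close to sqrt(2/512)
```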
Given the following machine learning model name: Crystal Graph Neural Network, provide a description of the model
The full architecture of CGNN is presented at [CGNN's official site](https://tony-y.github.io/cgnn/architectures/).
Given the following machine learning model name: Semantic Reasoning Network, provide a description of the model
**Semantic reasoning network**, or **SRN**, is an end-to-end trainable framework for scene text recognition that consists of four parts: backbone network, parallel [visual attention](https://paperswithcode.com/method/visual-attention) module (PVAM), global semantic reasoning module (GSRM), and visual-semantic fusion de...
Given the following machine learning model name: ACKTR, provide a description of the model
**ACKTR**, or **Actor Critic with Kronecker-factored Trust Region**, is an actor-critic method for reinforcement learning that applies [trust region optimization](https://paperswithcode.com/method/trpo) using a recently proposed Kronecker-factored approximation to the curvature. The method extends the framework of natu...
Given the following machine learning model name: RegNetY, provide a description of the model
**RegNetY** is a convolutional network design space with simple, regular models with parameters: depth $d$, initial width $w\_{0} > 0$, and slope $w\_{a} > 0$, and generates a different block width $u\_{j}$ for each block $j < d$. The key restriction for the RegNet types of model is that there is a linear parameterisat...
Given the following machine learning model name: R1 Regularization, provide a description of the model
**$R\_{1}$ Regularization** is a regularization technique and gradient penalty for training [generative adversarial networks](https://paperswithcode.com/methods/category/generative-adversarial-networks). It keeps the discriminator from deviating from the Nash equilibrium by penalizing the gradient on real d...
Given the following machine learning model name: Replacing Eligibility Trace, provide a description of the model
In a **Replacing Eligibility Trace**, each time the state is revisited, the trace is reset to $1$ regardless of the presence of a prior trace. For the memory vector $\textbf{e}\_{t} \in \mathbb{R}^{b} \geq \textbf{0}$: $$\mathbf{e\_{0}} = \textbf{0}$$ $$\textbf{e}\_{t} = \gamma\lambda{e}\_{t-1}\left(s\right) \te...
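A minimal tabular sketch of the replacing rule (decay every trace by $\gamma\lambda$, then reset the visited state's trace to $1$ rather than incrementing it); values are illustrative:

```python
def update_traces(traces, visited_state, gamma=0.99, lam=0.9):
    """Replacing eligibility traces: decay all traces by gamma * lambda,
    then set the visited state's trace to 1 (not trace + 1)."""
    for s in traces:
        traces[s] *= gamma * lam
    traces[visited_state] = 1.0
    return traces

traces = {"s0": 0.0, "s1": 0.0}
traces = update_traces(traces, "s0")
traces = update_traces(traces, "s1")
traces = update_traces(traces, "s0")  # revisit: reset to 1, not accumulated
```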
Given the following machine learning model name: Grammatical evolution and Q-learning, provide a description of the model
This method works as a two-level optimization algorithm. The outermost layer uses grammatical evolution to evolve a grammar to build the agent. Then, [Q-learning](https://paperswithcode.com/method/q-learning) is used in the fitness evaluation phase to allow the agent to perform online learning.
Given the following machine learning model name: 3-Augment, provide a description of the model
Given the following machine learning model name: Disentangled Attention Mechanism, provide a description of the model
**Disentangled Attention Mechanism** is an attention mechanism used in the [DeBERTa](https://paperswithcode.com/method/deberta) architecture. Unlike [BERT](https://paperswithcode.com/method/bert) where each word in the input layer is represented using a vector which is the sum of its word (content) embedding and positi...
Given the following machine learning model name: Rational Activation function, provide a description of the model
Given the following machine learning model name: Collaborative Preference Embedding, provide a description of the model
**CPE** is a collaborative metric learning method that addresses the problem of sparse and insufficient preference supervision from the margin-distribution point of view.
Given the following machine learning model name: Chinchilla, provide a description of the model
Chinchilla is a 70B parameters model trained as a compute-optimal model with 1.4 trillion tokens. Findings suggest that these types of models are trained optimally by equally scaling both model size and training tokens. It uses the same compute budget as Gopher but with 4x more training data. Chinchilla and Gopher are ...
Given the following machine learning model name: Displaced Aggregation Units, provide a description of the model
**Displaced Aggregation Unit** replaces the classic [convolution](https://paperswithcode.com/method/convolution) layer in ConvNets with learnable positions of units. This introduces an explicit structure of hierarchical compositions and results in several benefits:

* fully adjustable and **learnable receptive fields** thr...
Given the following machine learning model name: Convolutional LSTM based Residual Network, provide a description of the model
Given the following machine learning model name: VirTex, provide a description of the model
**VirTex**, or **Visual representations from Textual annotations**, is a pretraining approach using semantically dense captions to learn visual representations. First, a ConvNet and [Transformer](https://paperswithcode.com/method/transformer) are jointly trained from scratch to generate natural language captions for ima...
Given the following machine learning model name: Deterministic Policy Gradient, provide a description of the model
**Deterministic Policy Gradient**, or **DPG**, is a policy gradient method for reinforcement learning. Instead of the policy function $\pi\left(.\mid{s}\right)$ being modeled as a probability distribution, DPG considers and calculates gradients for a deterministic policy $a = \mu\_{\theta}\left(s\right)$.
Given the following machine learning model name: Regularized Autoencoders, provide a description of the model
This method introduces several regularization schemes that can be applied to an autoencoder. To make the model generative, *ex-post* density estimation is proposed, which consists of fitting a mixture of Gaussians to the training-data embeddings after the model is trained.
Given the following machine learning model name: MotionNet, provide a description of the model
**MotionNet** is a system for joint perception and motion prediction based on a bird's eye view (BEV) map, which encodes the object category and motion information from 3D point clouds in each grid cell. MotionNet takes a sequence of LiDAR sweeps as input and outputs the bird's eye view (BEV) map. The backbone of Motio...
Given the following machine learning model name: Exact Fusion Model, provide a description of the model
**Exact Fusion Model (EFM)** is a method for aggregating a feature pyramid. The EFM is based on [YOLOv3](https://paperswithcode.com/method/yolov3), which assigns exactly one bounding-box prior to each ground truth object. Each ground truth bounding box corresponds to one anchor box that surpasses the threshold IoU. If ...
Given the following machine learning model name: SCARF, provide a description of the model
SCARF is a simple, widely-applicable technique for contrastive learning, where views are formed by corrupting a random subset of features. When applied to pre-train deep neural networks on the 69 real-world, tabular classification datasets from the OpenML-CC18 benchmark, SCARF not only improves classification accuracy ...
Given the following machine learning model name: Adaptive Dropout, provide a description of the model
**Adaptive Dropout** is a regularization technique that extends dropout by allowing the dropout probability to be different for different units. The intuition is that there may be hidden units that can individually make confident predictions for the presence or absence of an important feature or combination of features...
Given the following machine learning model name: Gradient-Based Decision Tree Ensembles, provide a description of the model
Given the following machine learning model name: TGAN, provide a description of the model
**TGAN** is a type of generative adversarial network that is capable of learning representation from an unlabeled video dataset and producing a new video. The generator consists of two sub networks called a temporal generator and an image generator. Specifically, the temporal generator first yields a set of latent var...
Given the following machine learning model name: Metropolis Hastings, provide a description of the model
**Metropolis-Hastings** is a Markov Chain Monte Carlo (MCMC) algorithm for approximate inference. It allows for sampling from a probability distribution where direct sampling is difficult - usually owing to the presence of an intractable integral. M-H consists of a proposal distribution $q\left(\theta^{'}\mid\theta\...
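A minimal random-walk M-H sampler targeting a standard normal; with a symmetric Gaussian proposal the acceptance ratio reduces to $p(\theta')/p(\theta)$, so only an unnormalized log-density is needed (function name and step size are illustrative):

```python
import numpy as np

def metropolis_hastings(log_p, n_samples=20000, step=1.0,
                        rng=np.random.default_rng(0)):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step^2),
    accept with probability min(1, p(x') / p(x))."""
    x, samples = 0.0, []
    for _ in range(n_samples):
        x_new = x + step * rng.normal()
        # symmetric proposal: acceptance test uses only the density ratio
        if np.log(rng.random()) < log_p(x_new) - log_p(x):
            x = x_new
        samples.append(x)
    return np.array(samples)

# target: standard normal, known only up to a constant
samples = metropolis_hastings(lambda x: -0.5 * x**2)
```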
Given the following machine learning model name: Confidence Calibration with an Auxiliary Class, provide a description of the model
**Confidence Calibration with an Auxiliary Class**, or **CCAC**, is a post-hoc confidence calibration method for DNN classifiers on OOD datasets. The key feature of CCAC is an auxiliary class in the calibration model which separates mis-classified samples from correctly classified ones, thus effectively mitigating the ...
Given the following machine learning model name: QHM, provide a description of the model
**Quasi-Hyperbolic Momentum (QHM)** is a stochastic optimization technique that alters [momentum SGD](https://paperswithcode.com/method/sgd-with-momentum) by averaging a plain [SGD](https://paperswithcode.com/method/sgd) step with a momentum step: $$ g\_{t+1} = \beta{g\_{t}} + \left(1-\beta\right)\cdot...
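A minimal numpy sketch of one QHM step, assuming the update $\theta\_{t+1} = \theta\_{t} - \alpha\left[\left(1-\nu\right)\nabla{L} + \nu{g}\_{t+1}\right]$ with the paper's commonly cited defaults $\nu = 0.7$, $\beta = 0.999$:

```python
import numpy as np

def qhm_step(theta, grad, buf, lr=0.1, beta=0.999, nu=0.7):
    """One QHM update: a weighted average of the plain SGD step
    (weight 1 - nu) and the momentum step (weight nu)."""
    buf = beta * buf + (1 - beta) * grad                 # momentum buffer
    theta = theta - lr * ((1 - nu) * grad + nu * buf)    # averaged step
    return theta, buf

theta, buf = qhm_step(np.array([1.0]), np.array([1.0]), np.zeros(1))
```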
Given the following machine learning model name: Margin Rectified Linear Unit, provide a description of the model
**Margin Rectified Linear Unit**, or **Margin ReLU**, is a type of activation function based on the [ReLU](https://paperswithcode.com/method/relu), but it has a negative threshold for negative values instead of a zero threshold.
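Reading the description as clipping at a negative margin $m < 0$ rather than at zero, a one-line numpy sketch (the margin value is illustrative):

```python
import numpy as np

def margin_relu(x, margin=-0.5):
    """Margin ReLU: like ReLU, but the lower clipping threshold is a
    negative margin m < 0 instead of zero."""
    return np.maximum(x, margin)

out = margin_relu(np.array([-2.0, -0.2, 0.0, 1.5]))
# values below the margin are clipped to it; the rest pass through
```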
Given the following machine learning model name: Learning to Match, provide a description of the model
**L2M** is a learning algorithm that can work for most cross-domain distribution matching tasks. It automatically learns the cross-domain distribution matching without relying on hand-crafted priors on the matching loss. Instead, L2M reduces the inductive bias by using a meta-network to learn the distribution matching ...
Given the following machine learning model name: Segment Sorting, provide a description of the model
Given the following machine learning model name: Concatenation Affinity, provide a description of the model
**Concatenation Affinity** is a type of affinity or self-similarity function between two points $\mathbb{x\_{i}}$ and $\mathbb{x\_{j}}$ that uses a concatenation function: $$ f\left(\mathbb{x\_{i}}, \mathbb{x\_{j}}\right) = \text{ReLU}\left(\mathbb{w}^{T}\_{f}\left[\theta\left(\mathbb{x}\_{i}\right), \phi\left(\math...
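A small numpy sketch of the formula, taking $\theta$ and $\phi$ as linear embeddings and $w\_{f}$ as the learned projection (all randomly initialized here purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# illustrative stand-ins for the learned theta, phi, and w_f parameters
theta_W, phi_W = rng.normal(size=(d, d)), rng.normal(size=(d, d))
w_f = rng.normal(size=2 * d)

def concat_affinity(x_i, x_j):
    """f(x_i, x_j) = ReLU(w_f^T [theta(x_i), phi(x_j)])"""
    z = np.concatenate([theta_W @ x_i, phi_W @ x_j])
    return max(0.0, float(w_f @ z))

a = concat_affinity(rng.normal(size=d), rng.normal(size=d))
```

The ReLU keeps the affinity non-negative, which is why the concatenation form needs the extra projection $w\_{f}$ compared to dot-product affinities.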
Given the following machine learning model name: Graph Transformer, provide a description of the model
**Graph Transformer** is proposed as a generalization of [Transformer](https://paperswithcode.com/method/transformer) neural network architectures to arbitrary graphs. Compared to the original Transformer, the highlights of the presented architecture are:

- The attention mechanism is a function of ...
Given the following machine learning model name: All-Attention Layer, provide a description of the model
An **All-Attention Layer** is an attention module and layer for transformers that merges the self-attention and feedforward sublayers into a single unified attention layer. As opposed to the two-step mechanism of the [Transformer](https://paperswithcode.com/method/transformer) layer, it directly builds its representati...
Given the following machine learning model name: Variational Inference, provide a description of the model
Given the following machine learning model name: Minimum Description Length, provide a description of the model
**Minimum Description Length** provides a criterion for the selection of models, regardless of their complexity, without the restrictive assumption that the data form a sample from a 'true' distribution. Extracted from [scholarpedia](http://scholarpedia.org/article/Minimum_description_length) **Source**: Paper...
Given the following machine learning model name: VQ-VAE, provide a description of the model
**VQ-VAE** is a type of variational autoencoder that uses vector quantisation to obtain a discrete latent representation. It differs from [VAEs](https://paperswithcode.com/method/vae) in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In ord...
Given the following machine learning model name: DFDNet, provide a description of the model
**DFDNet** is a deep face dictionary network for face restoration that guides the restoration process of degraded observations. Given an LQ (low-quality) image $I\_{d}$, the DFDNet selects the dictionary features that have the most similar structure to the input. Specifically, we re-norm the whole dictionaries via compone...
Given the following machine learning model name: FractalNet, provide a description of the model
**FractalNet** is a type of convolutional neural network that eschews [residual connections](https://paperswithcode.com/method/residual-connection) in favour of a "fractal" design. They involve repeated application of a simple expansion rule to generate deep networks whose structural layouts are precisely truncated fra...
Given the following machine learning model name: CodeBERT, provide a description of the model
**CodeBERT** is a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. CodeBERT is developed with a [Transformer](https://pap...
Given the following machine learning model name: NoisyNet-A3C, provide a description of the model
**NoisyNet-A3C** is a modification of [A3C](https://paperswithcode.com/method/a3c) that utilises noisy linear layers for exploration instead of the entropy-based exploration used in the original A3C formulation.
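A noisy linear layer with factorised Gaussian noise can be sketched as follows (a NumPy sketch under assumed initialisation constants; in the real method $\mu$ and $\sigma$ are trained by gradient descent, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # noise-shaping function for factorised Gaussian noise
    return np.sign(x) * np.sqrt(np.abs(x))

class NoisyLinear:
    """Noisy linear layer: weights are mu + sigma * eps, with eps resampled
    between rollouts so exploration comes from the weights themselves."""
    def __init__(self, n_in, n_out, sigma0=0.5):
        self.mu_w = rng.uniform(-1, 1, (n_out, n_in)) / np.sqrt(n_in)
        self.mu_b = rng.uniform(-1, 1, n_out) / np.sqrt(n_in)
        self.sigma_w = np.full((n_out, n_in), sigma0 / np.sqrt(n_in))
        self.sigma_b = np.full(n_out, sigma0 / np.sqrt(n_in))
        self.sample_noise()

    def sample_noise(self):
        # factorised noise: one vector per input, one per output
        eps_in = f(rng.normal(size=self.mu_w.shape[1]))
        eps_out = f(rng.normal(size=self.mu_w.shape[0]))
        self.eps_w = np.outer(eps_out, eps_in)
        self.eps_b = eps_out

    def __call__(self, x):
        w = self.mu_w + self.sigma_w * self.eps_w
        b = self.mu_b + self.sigma_b * self.eps_b
        return x @ w.T + b
```

Resampling the noise changes the layer's output for the same input, which is what drives exploration.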
Given the following machine learning model name: Back to the Feature, provide a description of the model
Given the following machine learning model name: ShuffleNet V2 Downsampling Block, provide a description of the model
**ShuffleNet V2 Downsampling Block** is a block for spatial downsampling used in the [ShuffleNet V2](https://paperswithcode.com/method/shufflenet-v2) architecture. Unlike the regular [ShuffleNet](https://paperswithcode.com/method/shufflenet) V2 block, the channel split operator is removed so the number of output channe...
Given the following machine learning model name: Randomized Deletion, provide a description of the model
Given the following machine learning model name: RIFE, provide a description of the model
**RIFE**, or **Real-time Intermediate Flow Estimation** is an intermediate flow estimation algorithm for Video Frame Interpolation (VFI). Many recent flow-based VFI methods first estimate the bi-directional optical flows, then scale and reverse them to approximate intermediate flows, leading to artifacts on motion boun...
Given the following machine learning model name: ControlVAE, provide a description of the model
**ControlVAE** is a [variational autoencoder](https://paperswithcode.com/method/vae) (VAE) framework that combines automatic control theory with the basic VAE to stabilize the KL-divergence of VAE models at a specified value. It leverages a non-linear PI controller, a variant of the proportional-integral-derivative...
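The feedback idea can be sketched with a plain linear PI controller that tunes the KL weight $\beta$ so the observed KL tracks a set point (a hedged sketch: ControlVAE itself uses a non-linear PI variant, and the gains and clamping bounds below are illustrative assumptions):

```python
class PIController:
    """Toy PI controller for the KL weight beta: when the observed KL exceeds
    the set point, beta grows to penalise KL harder, and vice versa."""
    def __init__(self, kp=0.01, ki=0.001, beta_min=0.0, beta_max=1.0):
        self.kp, self.ki = kp, ki
        self.beta_min, self.beta_max = beta_min, beta_max
        self.integral = 0.0

    def step(self, kl_observed, kl_target):
        error = kl_observed - kl_target   # positive when KL is above the set point
        self.integral += error            # integral term removes steady-state error
        beta = self.kp * error + self.ki * self.integral
        return min(max(beta, self.beta_min), self.beta_max)
```

At each training step the controller observes the current KL and returns the $\beta$ used to weight the KL term in the VAE loss.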
Given the following machine learning model name: Parametric UMAP, provide a description of the model
**Parametric UMAP** is a parametric extension of the non-parametric graph-based dimensionality reduction algorithm [UMAP](https://www.paperswithcode.com/method/umap): it replaces the second step of UMAP with a parametric optimization over neural network weights, learning a parametric relationship between data and embedding.
Given the following machine learning model name: ZFNet, provide a description of the model
**ZFNet** is a classic convolutional neural network. The design was motivated by visualizing intermediate feature layers and the operation of the classifier. Compared to [AlexNet](https://paperswithcode.com/method/alexnet), both the filter sizes and the stride of the convolutions are reduced.
Given the following machine learning model name: AdaMod, provide a description of the model
**AdaMod** is a stochastic optimizer that restricts adaptive learning rates with adaptive and momental upper bounds. The dynamic learning rate bounds are based on the exponential moving averages of the adaptive learning rates themselves, which smooth out unexpected large learning rates and stabilize the training of dee...
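A single AdaMod step can be sketched as Adam plus an exponential-moving-average bound on the per-parameter step size (a NumPy sketch; the state layout and demo objective $f(\theta)=\theta^2$ are illustrative assumptions):

```python
import numpy as np

def adamod_step(theta, grad, state, lr=1e-3, b1=0.9, b2=0.999, b3=0.999, eps=1e-8):
    """One AdaMod update: Adam, plus an EMA-based upper bound ('momental bound')
    on the adaptive per-parameter learning rate."""
    state['t'] += 1
    t = state['t']
    state['m'] = b1 * state['m'] + (1 - b1) * grad            # first moment
    state['v'] = b2 * state['v'] + (1 - b2) * grad ** 2       # second moment
    m_hat = state['m'] / (1 - b1 ** t)
    v_hat = state['v'] / (1 - b2 ** t)
    eta = lr / (np.sqrt(v_hat) + eps)        # Adam's adaptive learning rate
    state['s'] = b3 * state['s'] + (1 - b3) * eta
    eta = np.minimum(eta, state['s'])        # clip by its own EMA: smooths spikes
    return theta - eta * m_hat

# Minimise f(theta) = theta^2 for a few steps.
state = {'t': 0, 'm': 0.0, 'v': 0.0, 's': 0.0}
theta = np.array([1.0])
for _ in range(100):
    theta = adamod_step(theta, 2.0 * theta, state)
```

Because the bound `s` starts at zero and ramps up, early steps are kept small, which gives a warmup-like effect without a hand-tuned schedule.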
Given the following machine learning model name: VGG, provide a description of the model
**VGG** is a classical convolutional neural network architecture. It was based on an analysis of how to increase the depth of such networks. The network utilises small 3 x 3 filters. Otherwise the network is characterized by its simplicity: the only other components being pooling layers and fully connected layers.
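The simplicity shows up in the configuration: a VGG-16 network is just a list of 3 x 3 convolutions and 2 x 2 max-pools. The sketch below traces feature-map shapes through that configuration (the list encoding with `'M'` markers is a common convention, e.g. in torchvision, not the paper's notation):

```python
# VGG-16 configuration: numbers are output channels of 3x3 convs, 'M' is 2x2 max-pool.
VGG16 = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
         512, 512, 512, 'M', 512, 512, 512, 'M']

def trace_shapes(cfg, size=224, channels=3):
    """Trace (channels, height, width) through the config
    (3x3 convs with padding 1 and stride 1 preserve spatial size)."""
    shapes = []
    for v in cfg:
        if v == 'M':
            size //= 2        # 2x2 max-pool with stride 2 halves the spatial size
        else:
            channels = v      # padded 3x3 conv changes only the channel count
        shapes.append((channels, size, size))
    return shapes

shapes = trace_shapes(VGG16)
```

The final `(512, 7, 7)` feature map is what the fully connected layers then flatten and classify.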
Given the following machine learning model name: Shape Adaptor, provide a description of the model
**Shape Adaptor** is a novel resizing module for neural networks. It is a drop-in enhancement built on top of traditional resizing layers, such as pooling, bilinear sampling, and strided [convolution](https://paperswithcode.com/method/convolution). This module allows for a learnable shaping factor which differs from th...
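The learnable-shaping idea can be sketched in 1-D: two resizing branches (here, identity at scale 1.0 and stride-2 downsampling at scale 0.5) are both resampled to an intermediate scale controlled by a learnable $\alpha$ and blended. This is a simplified sketch under assumed scales and nearest-neighbour resampling, not the module's exact formulation:

```python
import numpy as np

def resize_1d(x, size):
    """Nearest-neighbour resize of a 1-D signal (stand-in for bilinear sampling)."""
    idx = (np.arange(size) * len(x) / size).astype(int)
    return x[idx]

def shape_adaptor(x, alpha):
    """Simplified 1-D shape adaptor blending an identity branch (scale 1.0)
    and a stride-2 downsampling branch (scale 0.5)."""
    s = (1 - alpha) * 1.0 + alpha * 0.5      # learnable output scale in [0.5, 1.0]
    out_size = max(1, int(round(len(x) * s)))
    branch_a = resize_1d(x, out_size)        # identity branch, resized to scale s
    branch_b = resize_1d(x[::2], out_size)   # downsampled branch, resized to scale s
    return (1 - alpha) * branch_a + alpha * branch_b
```

At $\alpha = 0$ the module reduces to the identity resizing, at $\alpha = 1$ to plain stride-2 downsampling, and intermediate values give fractional output shapes, which is what lets the network learn its own resizing factor.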