Given the following machine learning model name: building to building transfer learning, provide a description of the model
Uses transfer learning to transfer knowledge from one building in order to predict the energy consumption of another building with scarce data.
Given the following machine learning model name: TABBIE, provide a description of the model
**TABBIE** is a pretraining objective (*corrupt cell detection*) that learns exclusively from tabular data. Unlike other approaches, TABBIE provides embeddings of all table substructures (cells, rows, and columns). TABBIE can be seen as a table embedding model trained to detect corrupted cells, inspired by the [ELECTRA...
Given the following machine learning model name: Affine Coupling, provide a description of the model
**Affine Coupling** is a method for implementing a normalizing flow (where we stack a sequence of invertible bijective transformation functions). Affine coupling is one of these bijective transformation functions. Specifically, it is an example of a reversible transformation where the forward function, the reverse func...
Given the following machine learning model name: Shrink and Fine-Tune, provide a description of the model
**Shrink and Fine-Tune**, or **SFT**, is a type of distillation that avoids explicit distillation by copying parameters to a student model and then fine-tuning. Specifically, it extracts a student model from the maximally spaced layers of a fine-tuned teacher. Each layer $l \in L'$ is copied fully from $L$. For ...
Given the following machine learning model name: AdaMax, provide a description of the model
**AdaMax** is a generalisation of [Adam](https://paperswithcode.com/method/adam) from the $l\_{2}$ norm to the $l\_{\infty}$ norm. Define: $$ u\_{t} = \beta^{\infty}\_{2}v\_{t-1} + \left(1-\beta^{\infty}\_{2}\right)|g\_{t}|^{\infty}$$ $$ = \max\left(\beta\_{2}\cdot{v}\_{t-1}, |g\_{t}|\right)$$ We can plug into...
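As a concrete sketch, the update above can be written in a few lines of NumPy. The function name and the toy problem are illustrative, not from the original paper; the default hyperparameters are the values commonly reported for Adam/AdaMax ($\beta_1 = 0.9$, $\beta_2 = 0.999$):

```python
import numpy as np

def adamax_step(theta, g, m, u, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AdaMax update: m is the first-moment estimate, u the infinity-norm term."""
    m = beta1 * m + (1 - beta1) * g           # exponential moving average of gradients
    u = np.maximum(beta2 * u, np.abs(g))      # u_t = max(beta2 * u_{t-1}, |g_t|)
    theta = theta - (lr / (1 - beta1 ** t)) * m / (u + eps)
    return theta, m, u

# Toy usage: minimize f(x) = x^2 starting from x = 5
theta, m, u = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    g = 2 * theta                             # gradient of x^2
    theta, m, u = adamax_step(theta, g, m, u, t)
```

Note how the $u_t$ line implements the $\max$ form of the infinity norm directly, avoiding the bias correction Adam needs for its second moment.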
Given the following machine learning model name: Random Gaussian Blur, provide a description of the model
**Random Gaussian Blur** is an image data augmentation technique where we randomly blur the image using a Gaussian distribution. Image Source: [Wikipedia](https://en.wikipedia.org/wiki/Gaussian_blur)
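A minimal NumPy sketch of the augmentation, assuming a 2-D grayscale image and a sigma range of my own choosing (function names and the `(0.1, 2.0)` range are illustrative):

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, truncated at 3 sigma and normalized to sum to 1."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def random_gaussian_blur(img, sigma_range=(0.1, 2.0), rng=None):
    """Blur a 2-D image with a randomly sampled sigma, using the fact that
    a 2-D Gaussian blur separates into a row pass and a column pass."""
    rng = np.random.default_rng(rng)
    sigma = rng.uniform(*sigma_range)
    k = gaussian_kernel(sigma)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred
```

In an augmentation pipeline the blur would typically also be applied only with some probability per sample.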
Given the following machine learning model name: DALL·E 2, provide a description of the model
**DALL·E 2** is a generative text-to-image model made up of two main components: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding.
Given the following machine learning model name: Sigmoid Linear Unit, provide a description of the model
**Sigmoid Linear Units**, or **SiLUs**, are activation functions for neural networks. The activation of the SiLU is computed by the sigmoid function multiplied by its input, or $$ x\sigma(x).$$ See [Gaussian Error Linear Units](https://arxiv.org/abs/1606.08415) ([GELUs](https://paperswithcode.com/method/gelu)) whe...
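The definition $x\sigma(x)$ translates directly to code; a scalar sketch:

```python
import math

def silu(x):
    """SiLU (a.k.a. Swish-1): x * sigmoid(x)."""
    return x * (1.0 / (1.0 + math.exp(-x)))
```

For large positive inputs SiLU approaches the identity, for large negative inputs it approaches zero, and unlike ReLU it is smooth and non-monotonic around the origin.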
Given the following machine learning model name: CARLA: An Open Urban Driving Simulator, provide a description of the model
CARLA is an open-source simulator for autonomous driving research. CARLA has been developed from the ground up to support development, training, and validation of autonomous urban driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that...
Given the following machine learning model name: Fully Convolutional Network, provide a description of the model
**Fully Convolutional Networks**, or **FCNs**, are an architecture used mainly for semantic segmentation. They employ solely locally connected layers, such as [convolution](https://paperswithcode.com/method/convolution), pooling and upsampling. Avoiding the use of dense layers means less parameters (making the networks...
Given the following machine learning model name: Variational Dropout, provide a description of the model
**Variational Dropout** is a regularization technique based on [dropout](https://paperswithcode.com/method/dropout), but uses a variational inference grounded approach. In Variational Dropout, we repeat the same dropout mask at each time step for inputs, outputs, and recurrent layers (drop the same network units a...
Given the following machine learning model name: Hi-LANDER, provide a description of the model
**Hi-LANDER** is a hierarchical [graph neural network](https://paperswithcode.com/methods/category/graph-models) (GNN) model that learns how to cluster a set of images into an unknown number of identities using an image annotated with labels belonging to a disjoint set of identities. The hierarchical GNN uses an approa...
Given the following machine learning model name: Fast-YOLOv2, provide a description of the model
Given the following machine learning model name: TSRUs, provide a description of the model
**TSRUs**, or **Transformation-based Spatial Recurrent Units**, are a modification of a [ConvGRU](https://paperswithcode.com/method/cgru) used in the [TriVD-GAN](https://paperswithcode.com/method/trivd-gan) architecture for video generation. They largely follow [TSRUc](https://paperswithcode.com/method/tsruc), but co...
Given the following machine learning model name: Conditional Random Field, provide a description of the model
**Conditional Random Fields** or **CRFs** are a type of probabilistic graph model that take neighboring sample context into account for tasks like classification. Prediction is modeled as a graphical model, which implements dependencies between the predictions. Graph choice depends on the application, for example linea...
Given the following machine learning model name: Generative Adversarial Transformer, provide a description of the model
GANformer is a novel and efficient type of [transformer](https://paperswithcode.com/method/transformer) which can be used for visual generative modeling. The network employs a bipartite structure that enables long-range interactions across an image while maintaining linear computational efficiency, and can readily...
Given the following machine learning model name: YOLOX, provide a description of the model
**YOLOX** is a single-stage object detector that makes several modifications to [YOLOv3](https://paperswithcode.com/method/yolov3) with a [DarkNet53](https://www.paperswithcode.com/method/darknet53) backbone. Specifically, YOLO’s head is replaced with a decoupled one. For each level of [FPN](https://paperswithcode.com...
Given the following machine learning model name: Double DQN, provide a description of the model
A **Double Deep Q-Network**, or **Double DQN** utilises [Double Q-learning](https://paperswithcode.com/method/double-q-learning) to reduce overestimation by decomposing the max operation in the target into action selection and action evaluation. We evaluate the greedy policy according to the online network, but we use ...
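The decomposition described above (select with the online network, evaluate with the target network) can be sketched for a batch of transitions; the function name and `gamma` default are illustrative:

```python
import numpy as np

def double_dqn_target(q_online_next, q_target_next, reward, done, gamma=0.99):
    """Double DQN target for a batch: argmax action from the online net,
    Q-value of that action from the target net."""
    a_star = np.argmax(q_online_next, axis=1)               # action selection (online net)
    q_eval = q_target_next[np.arange(len(a_star)), a_star]  # action evaluation (target net)
    return reward + gamma * (1.0 - done) * q_eval
```

Vanilla DQN would instead take `q_target_next.max(axis=1)`, letting the same network both select and evaluate, which is the source of the overestimation bias.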
Given the following machine learning model name: Graph Echo State Network, provide a description of the model
The **Graph Echo State Network** (**GraphESN**) is a generalization of the Echo State Network (ESN) approach to graph domains. GraphESNs provide an efficient Recursive Neural Network (RecNN) style approach extended to deal with cyclic/acyclic, directed/undirected, labeled graphs. The recurrent reservoir of ...
Given the following machine learning model name: AutoSync, provide a description of the model
**AutoSync** is a pipeline for automatically optimizing synchronization strategies, given model structures and resource specifications, in data-parallel distributed machine learning. By factorizing the synchronization strategy with respect to each trainable building block of a DL model, we can construct a valid and lar...
Given the following machine learning model name: ALBEF, provide a description of the model
ALBEF introduces a contrastive loss to align the image and text representations before fusing them through cross-modal attention. This enables more grounded vision and language representation learning. ALBEF also doesn't require bounding box annotations. The model consists of an image encoder, a text encoder, and a mult...
Given the following machine learning model name: PipeMare, provide a description of the model
**PipeMare** is an asynchronous (bubble-free) pipeline parallel method for training large neural networks. It involves two main techniques: learning rate rescheduling and discrepancy correction.
Given the following machine learning model name: Multi-partition Embedding Interaction, provide a description of the model
**MEI** introduces the *multi-partition embedding interaction* technique with block term tensor format to systematically address the efficiency--expressiveness trade-off in knowledge graph embedding. It divides the embedding vector into multiple partitions and learns the local interaction patterns from data instead of ...
Given the following machine learning model name: Online Multi-granularity Distillation, provide a description of the model
**OMGD**, or **Online Multi-Granularity Distillation** is a framework for learning efficient [GANs](https://paperswithcode.com/methods/category/generative-adversarial-networks). The student generator is optimized in a discriminator-free and ground-truth-free setting. The scheme trains the teacher and student alternativ...
Given the following machine learning model name: Multiscale Dilated Convolution Block, provide a description of the model
A **Multiscale Dilated Convolution Block** is an Inception-style convolutional block motivated by the ideas that image features naturally occur at multiple scales, that a network’s expressivity is proportional to the range of functions it can represent divided by its total number of parameters, and by the desire to eff...
Given the following machine learning model name: Set Transformer, provide a description of the model
Many machine learning tasks such as multiple instance learning, 3D shape recognition, and few-shot image classification are defined on sets of instances. Since solutions to such problems do not depend on the order of elements of the set, models used to address them should be permutation invariant. We present an attenti...
Given the following machine learning model name: Contrastive Language-Image Pre-training, provide a description of the model
**Contrastive Language-Image Pre-training** (**CLIP**), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (...
Given the following machine learning model name: Pixel Recurrent Neural Network, provide a description of the model
**PixelRNNs** are generative neural networks that sequentially predict the pixels in an image along the two spatial dimensions. They model the discrete probability of the raw pixel values and encode the complete set of dependencies in the image. Variants include the Row [LSTM](https://paperswithcode.com/method/lstm) a...
Given the following machine learning model name: Xception, provide a description of the model
**Xception** is a convolutional neural network architecture that relies solely on [depthwise separable convolution](https://paperswithcode.com/method/depthwise-separable-convolution) layers.
Given the following machine learning model name: PrIme Sample Attention, provide a description of the model
**PrIme Sample Attention (PISA)** directs the training of object detection frameworks towards prime samples. These are samples that play a key role in driving the detection performance. The authors define Hierarchical Local Rank (HLR) as a metric of importance. Specifically, they use IoU-HLR to rank positive samples an...
Given the following machine learning model name: Global Average Pooling, provide a description of the model
**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the a...
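The operation itself is just a spatial mean per channel; a minimal sketch assuming channel-first `(C, H, W)` feature maps:

```python
import numpy as np

def global_average_pooling(feature_maps):
    """Collapse each (H, W) feature map to one scalar.
    feature_maps: array of shape (C, H, W) -> returns shape (C,)."""
    return feature_maps.mean(axis=(1, 2))
```

The resulting `(C,)` vector can be fed straight into softmax when the last conv layer produces one map per class, which is the parameter-free replacement for fully connected layers described above.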
Given the following machine learning model name: Prediction-aware One-To-One, provide a description of the model
**Prediction-aware One-To-One**, or **POTO**, is an assignment rule for object detection which dynamically assigns the foreground samples according to the quality of classification and regression simultaneously.
Given the following machine learning model name: ALDA, provide a description of the model
**Adversarial-Learned Loss for Domain Adaptation** is a method for domain adaptation that combines adversarial learning with self-training. Specifically, the domain discriminator has to produce different corrected labels for different domains, while the feature generator aims to confuse the domain discriminator. The ad...
Given the following machine learning model name: Firefly algorithm, provide a description of the model
The **Firefly algorithm** is a nature-inspired metaheuristic optimization algorithm modeled on the flashing behaviour of fireflies, in which candidate solutions (fireflies) are attracted toward brighter, i.e. better, solutions.
Given the following machine learning model name: Graph Convolutional Networks for Fake News Detection, provide a description of the model
Social media are nowadays one of the main news sources for millions of people around the globe due to their low cost, easy access and rapid dissemination. This however comes at the cost of dubious trustworthiness and significant risk of exposure to 'fake news', intentionally written to mislead the readers. Automaticall...
Given the following machine learning model name: GAN Hinge Loss, provide a description of the model
The **GAN Hinge Loss** is a hinge loss based loss function for [generative adversarial networks](https://paperswithcode.com/methods/category/generative-adversarial-networks): $$ L\_{D} = -\mathbb{E}\_{\left(x, y\right)\sim{p}\_{data}}\left[\min\left(0, -1 + D\left(x, y\right)\right)\right] -\mathbb{E}\_{z\sim{p\_{z}...
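Using $-\mathbb{E}[\min(0, -1 + D)] = \mathbb{E}[\max(0, 1 - D)]$, the discriminator and generator losses can be sketched over batches of raw discriminator scores (function names are illustrative):

```python
import numpy as np

def d_hinge_loss(d_real, d_fake):
    """Discriminator hinge loss: push real scores above +1 and fake scores below -1."""
    return np.mean(np.maximum(0.0, 1.0 - d_real)) + np.mean(np.maximum(0.0, 1.0 + d_fake))

def g_hinge_loss(d_fake):
    """Generator hinge loss: raise the discriminator's score on generated samples."""
    return -np.mean(d_fake)
```

Once the margins are satisfied (real scores above +1, fake scores below -1), the discriminator loss saturates at zero, which is what distinguishes the hinge formulation from the standard cross-entropy GAN loss.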
Given the following machine learning model name: GoogLeNet, provide a description of the model
**GoogLeNet** is a type of convolutional neural network based on the [Inception](https://paperswithcode.com/method/inception-module) architecture. It utilises Inception modules, which allow the network to choose between multiple convolutional filter sizes in each block. An Inception network stacks these modules on top ...
Given the following machine learning model name: Differentiable Neural Architecture Search, provide a description of the model
Given the following machine learning model name: Variational Graph Auto Encoder, provide a description of the model
Given the following machine learning model name: Diffusion, provide a description of the model
Diffusion models generate samples by gradually removing noise from a signal, and their training objective can be expressed as a reweighted variational lower bound ([Ho et al., 2020](https://arxiv.org/abs/2006.11239)).
Given the following machine learning model name: COCO-FUNIT, provide a description of the model
**COCO-FUNIT** is a few-shot image translation model which computes the style embedding of the example images conditioned on the input image and a new module called the constant style bias. It builds on top of [FUNIT](https://arxiv.org/abs/1905.01723) by identifying the content loss problem and then addressing it with a ...
Given the following machine learning model name: Lookahead, provide a description of the model
**Lookahead** is a type of stochastic optimizer that iteratively updates two sets of weights: "fast" and "slow". Intuitively, the algorithm chooses a search direction by looking ahead at the sequence of *fast weights* generated by another optimizer. **Algorithm 1** Lookahead Optimizer **Require** Initial para...
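The slow/fast interplay can be sketched in a few lines: run $k$ steps of any inner optimizer, then interpolate the slow weights toward the result. The function names, the inner SGD optimizer, and `alpha = 0.5` are illustrative choices:

```python
import numpy as np

def lookahead_update(slow, fast_steps, alpha=0.5):
    """One Lookahead outer step: run the inner ('fast') optimizer from the slow
    weights, then move the slow weights a fraction alpha toward the result.
    `fast_steps` is any function mapping weights -> updated weights."""
    fast = fast_steps(slow.copy())
    return slow + alpha * (fast - slow)

# Toy inner optimizer: 5 SGD steps on f(w) = w^2
def five_sgd_steps(w, lr=0.1):
    for _ in range(5):
        w = w - lr * 2 * w   # gradient of w^2 is 2w
    return w

slow = lookahead_update(np.array([4.0]), five_sgd_steps)
```

The slow weights act as an exponential damping of the fast-weight trajectory, which is where Lookahead's reduced variance comes from.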
Given the following machine learning model name: REINFORCE, provide a description of the model
**REINFORCE** is a Monte Carlo variant of a policy gradient algorithm in reinforcement learning. The agent collects the samples of an episode using its current policy and uses them to update the policy parameter $\theta$. Since one full trajectory must be completed to construct a sample, the policy can only be updated once per episode...
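A minimal sketch of the Monte Carlo return computation at the heart of REINFORCE; the function name is illustrative:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Monte Carlo returns G_t = r_t + gamma * G_{t+1}, computed backward
    over one complete episode."""
    G, out = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return np.array(out[::-1])

# The policy update then ascends G_t * grad log pi(a_t | s_t) for each step t.
```

Because $G_t$ is a full-episode sample rather than a bootstrapped estimate, the gradient is unbiased but high-variance, which is why baselines are commonly subtracted in practice.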
Given the following machine learning model name: DNN2LR, provide a description of the model
**DNN2LR** is an automatic feature crossing method to find feature interactions in a deep neural network, and use them as cross features in logistic regression. In general, DNN2LR consists of two steps: (1) generating a compact and accurate candidate set of cross feature fields; (2) searching in the candidate set for t...
Given the following machine learning model name: Synthesizer, provide a description of the model
The **Synthesizer** is a model that learns synthetic attention weights without token-token interactions. Unlike [Transformers](https://paperswithcode.com/method/transformer), the model eschews dot-product self-attention and indeed content-based self-attention altogether. Synthesizer learns to synthesize the self-alignme...
Given the following machine learning model name: Auxiliary Batch Normalization, provide a description of the model
**Auxiliary Batch Normalization** is a type of regularization used in adversarial training schemes. The idea is that adversarial examples should have a separate [batch normalization](https://paperswithcode.com/method/batch-normalization) components to the clean examples, as they have different underlying statistics.
Given the following machine learning model name: Universal Language Model Fine-tuning, provide a description of the model
**Universal Language Model Fine-tuning**, or **ULMFiT**, is an architecture and transfer learning method that can be applied to NLP tasks. It involves a 3-layer [AWD-LSTM](https://paperswithcode.com/method/awd-lstm) architecture for its representations. The training consists of three steps: 1) general language model pr...
Given the following machine learning model name: Trust Region Policy Optimization, provide a description of the model
**Trust Region Policy Optimization**, or **TRPO**, is a policy gradient method in reinforcement learning that avoids parameter updates that change the policy too much with a KL divergence constraint on the size of the policy update at each iteration. Take the case of off-policy reinforcement learning, where the poli...
Given the following machine learning model name: Gated Convolution, provide a description of the model
A **Gated Convolution** is a type of temporal [convolution](https://paperswithcode.com/method/convolution) with a gating mechanism. Zero-padding is used to ensure that future context can not be seen.
Given the following machine learning model name: Medical Entity Disambiguation using Graph Neural Networks, provide a description of the model
Given the following machine learning model name: Faster R-CNN, provide a description of the model
**Faster R-CNN** is an object detection model that improves on [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) by utilising a region proposal network ([RPN](https://paperswithcode.com/method/rpn)) with the CNN model. The RPN shares full-image convolutional features with the detection network, enabling nearly...
Given the following machine learning model name: Collaborative Distillation, provide a description of the model
**Collaborative Distillation** is a knowledge distillation method for encoder-decoder based neural style transfer that reduces the number of convolutional filters. The main idea is underpinned by a finding that the encoder-decoder pairs construct an exclusive collaborative relationsh...
Given the following machine learning model name: Activation Normalization, provide a description of the model
**Activation Normalization** is a type of normalization used for flow-based generative models; specifically it was introduced in the [GLOW](https://paperswithcode.com/method/glow) architecture. An ActNorm layer performs an affine transformation of the activations using a scale and bias parameter per channel, similar to...
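A minimal sketch of the layer, assuming 2-D `(batch, channels)` activations; in Glow the scale and bias become trainable parameters after this one-time data-dependent initialization, which is not modeled here:

```python
import numpy as np

class ActNorm:
    """Per-channel affine y = scale * (x + bias), with scale and bias initialized
    from the first batch so that outputs have zero mean and unit variance."""
    def __init__(self):
        self.initialized = False

    def __call__(self, x):  # x: (batch, channels)
        if not self.initialized:
            self.bias = -x.mean(axis=0)                  # center each channel
            self.scale = 1.0 / (x.std(axis=0) + 1e-6)    # normalize each channel
            self.initialized = True
        return self.scale * (x + self.bias)
```

After the first batch the parameters are fixed (or trained), so unlike batch normalization the transform is independent of batch statistics at run time and remains exactly invertible.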
Given the following machine learning model name: Gaussian Mixture Variational Autoencoder, provide a description of the model
**GMVAE**, or **Gaussian Mixture Variational Autoencoder**, is a stochastic regularization layer for [transformers](https://paperswithcode.com/methods/category/transformers). A GMVAE layer is trained using a 700-dimensional internal representation of the first MLP layer. For every output from the first MLP layer, the G...
Given the following machine learning model name: Vision Transformer, provide a description of the model
The **Vision Transformer**, or **ViT**, is a model for image classification that employs a [Transformer](https://paperswithcode.com/method/transformer)-like architecture over patches of the image. An image is split into fixed-size patches, each of them are then linearly embedded, position embeddings are added, and the...
Given the following machine learning model name: Differentiable Architecture Search Max-W, provide a description of the model
Like [DARTS](https://paperswithcode.com/method/darts), except it subtracts the max weight. Max-W weighting: $$ output\_{i} = \left(1 - \max(w) + w\_{i}\right) \cdot op\_{i}\left(input\_{i}\right) $$
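The weighting rule can be sketched directly; the function name is illustrative, and scalar op outputs stand in for feature maps:

```python
import numpy as np

def max_w_output(weights, op_outputs):
    """Max-W mixing: scale each candidate op's output by (1 - max(w) + w_i),
    so the currently strongest op gets coefficient exactly 1."""
    w = np.asarray(weights)
    coeff = 1.0 - w.max() + w
    return sum(c * o for c, o in zip(coeff, op_outputs))
```

Note that the strongest candidate always receives coefficient 1, while weaker ops are attenuated in proportion to their gap from the maximum.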
Given the following machine learning model name: LeVIT, provide a description of the model
**LeViT** is a hybrid neural network for fast-inference image classification. LeViT is a stack of [transformer blocks](https://paperswithcode.com/method/transformer), with [pooling steps](https://paperswithcode.com/methods/category/pooling-operation) to reduce the resolution of the activation maps as in classical [conv...
Given the following machine learning model name: Randomized Leaky Rectified Linear Units, provide a description of the model
**Randomized Leaky Rectified Linear Units**, or **RReLU**, are an activation function that randomly samples the negative slope for activation values. It was first proposed and used in the Kaggle NDSB Competition. During training, $a\_{ji}$ is a random number sampled from a uniform distribution $U\left(l, u\right)$. For...
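A scalar sketch of the activation. The slope bounds $U(1/8, 1/3)$ are the commonly used defaults rather than something fixed by the definition, and at test time the slope is replaced by the distribution mean, as in typical implementations:

```python
import random

def rrelu(x, lower=1/8, upper=1/3, training=True, rng=None):
    """Randomized leaky ReLU: negative slope drawn from U(lower, upper) during
    training, fixed to the mean (lower + upper) / 2 at test time."""
    if x >= 0:
        return x
    if training:
        a = (rng or random).uniform(lower, upper)   # fresh random slope per activation
    else:
        a = (lower + upper) / 2                     # deterministic slope for inference
    return a * x
```

The random slope acts as a mild regularizer, in the same spirit as dropout, while still letting gradient flow through negative inputs.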
Given the following machine learning model name: Pathways Language Model, provide a description of the model
**PaLM** (**Pathways Language Model**) uses a standard Transformer model architecture (Vaswani et al., 2017) in a decoder-only setup (i.e., each timestep can only attend to itself and past timesteps), with several modifications. PaLM is trained as a 540 billion parameter, densely activated, autoregressive Transformer o...
Given the following machine learning model name: TraDeS, provide a description of the model
**TraDeS** (TRAck to DEtect and Segment) is an online joint detection and tracking model that exploits tracking clues to assist detection end-to-end. TraDeS infers object tracking offset by a cost volume, which is used to propagate previous object features for improving current object detection and segmentation.
Given the following machine learning model name: Policy Similarity Metric, provide a description of the model
**Policy Similarity Metric**, or **PSM**, is a similarity metric for measuring behavioral similarity between states in reinforcement learning. It assigns high similarity to states for which the optimal policies in those states as well as in future states are similar. PSM is reward-agnostic, making it more robust for ge...
Given the following machine learning model name: E-swish, provide a description of the model
Given the following machine learning model name: Geometric Manifold Component Estimator, provide a description of the model
**Geomancer** is a nonparametric algorithm for symmetry-based disentangling of data manifolds. It learns a set of subspaces to assign to each point in the dataset, where each subspace is the tangent space of one disentangled submanifold. This means that Geomancer can be used to disentangle manifolds for which there may...
Given the following machine learning model name: SKNet, provide a description of the model
**SKNet** is a type of convolutional neural network that employs [selective kernel](https://paperswithcode.com/method/selective-kernel) units, with selective kernel convolutions, in its architecture. This allows for a type of attention where the network can learn to attend to different receptive fields.
Given the following machine learning model name: Vision-aided GAN, provide a description of the model
Vision-aided GAN training involves using pretrained computer vision models in an ensemble of discriminators to improve GAN performance. Linear separability between real and fake samples in pretrained model embeddings is used as a measure to choose the most accurate pretrained models for a dataset.
Given the following machine learning model name: SNIP, provide a description of the model
**SNIP**, or **Scale Normalization for Image Pyramids**, is a multi-scale training scheme that selectively back-propagates the gradients of object instances of different sizes as a function of the image scale. SNIP is a modified version of MST where only the object instances that have a resolution close to the pre-trai...
Given the following machine learning model name: LocalViT, provide a description of the model
**LocalViT** introduces depthwise convolutions to enhance the local feature modeling capability of ViTs. The network, as shown in Figure (c), brings a locality mechanism into transformers through depthwise convolution (denoted by "DW"). To cope with the convolution operation, the conversion between sequence an...
Given the following machine learning model name: TILDEv2, provide a description of the model
**TILDEv2** is a [BERT](https://paperswithcode.com/method/bert)-based re-ranking method that stems from [TILDE](https://dl.acm.org/doi/abs/10.1145/3404835.3462922) but addresses its limitations. It relies on contextualized exact term matching with expanded passages. This requires storing in the index only the sco...
Given the following machine learning model name: Population Based Training, provide a description of the model
**Population Based Training**, or **PBT**, is an optimization method for finding parameters and hyperparameters, and extends upon parallel search methods and sequential optimisation methods. It leverages information sharing across a population of concurrently running optimisation processes, and allows for online propa...
Given the following machine learning model name: Scatter Connection, provide a description of the model
A **Scatter Connection** is a type of connection that allows a vector to be "scattered" onto a layer representing a map, so that a vector at a specific location corresponds to objects of interest at that location (e.g. units in Starcraft II). This allows for the integration of spatial and non-spatial features.
Given the following machine learning model name: Encoder-Decoder model with local and pairwise loss along with shared encoder and discriminator network (EDLPS), provide a description of the model
In this paper, we propose a novel method for obtaining sentence-level embeddings. While the problem of obtaining word-level embeddings is very well studied, sentence-level embeddings have received comparatively little attention. Ours are obtained by a simple method in the context of solving the paraphrase generation task. If we us...
Given the following machine learning model name: ReZero, provide a description of the model
**ReZero** is a [normalization](https://paperswithcode.com/methods/category/normalization) approach that dynamically facilitates well-behaved gradients and arbitrarily deep signal propagation. The idea is simple: ReZero initializes each layer to perform the identity operation. For each layer, a [residual connection](h...
Given the following machine learning model name: BERT, provide a description of the model
**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tok...
Given the following machine learning model name: Disp R-CNN, provide a description of the model
**Disp R-CNN** is a 3D object detection system for stereo images. It utilizes an instance disparity estimation network (iDispNet) that predicts disparity only for pixels on objects of interest and learns a category-specific shape prior for more accurate disparity estimation. To address the challenge from scarcity of di...
Given the following machine learning model name: Dutch Eligibility Trace, provide a description of the model
A **Dutch Eligibility Trace** is a type of [eligibility trace](https://paperswithcode.com/method/eligibility-trace) where the trace increments grow less quickly than the accumulative eligibility trace (helping avoid large variance updates). For the memory vector $\textbf{e}\_{t} \in \mathbb{R}^{b} \geq \textbf{0}$: ...
Given the following machine learning model name: ALBERT, provide a description of the model
**ALBERT** is a [Transformer](https://paperswithcode.com/method/transformer) architecture based on [BERT](https://paperswithcode.com/method/bert) but with much fewer parameters. It achieves this through two parameter reduction techniques. The first is a factorized embeddings parameterization. By decomposing the large v...
Given the following machine learning model name: GhostNet, provide a description of the model
A **GhostNet** is a type of convolutional neural network that is built using Ghost modules, which aim to generate more features by using fewer parameters (allowing for greater efficiency). GhostNet mainly consists of a stack of Ghost bottlenecks with the Ghost modules as the building block. The first layer is a sta...
Given the following machine learning model name: Target Policy Smoothing, provide a description of the model
**Target Policy Smoothing** is a regularization strategy for the value function in reinforcement learning. Deterministic policies can overfit to narrow peaks in the value estimate, making them highly susceptible to functional approximation error, increasing the variance of the target. To reduce this variance, target po...
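A sketch of the smoothing step as used in TD3-style algorithms: clipped Gaussian noise is added to the target policy's action before the target Q-value is computed. The noise scale 0.2, clip 0.5, and action bounds are commonly used values assumed here, not fixed by the method:

```python
import numpy as np

def smoothed_target_action(target_action, noise_std=0.2, noise_clip=0.5,
                           action_low=-1.0, action_high=1.0, rng=None):
    """Add clipped Gaussian noise to the target policy's action, then clip the
    result back into the valid action range."""
    rng = np.random.default_rng(rng)
    noise = np.clip(rng.normal(0.0, noise_std, size=np.shape(target_action)),
                    -noise_clip, noise_clip)
    return np.clip(target_action + noise, action_low, action_high)
```

Averaging the target Q-value over this small neighborhood of actions smooths the value estimate, so the policy cannot exploit a single narrow peak produced by approximation error.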
Given the following machine learning model name: Mixed Attention Block, provide a description of the model
**Mixed Attention Block** is an attention module used in the [ConvBERT](https://paperswithcode.com/method/convbert) architecture. It is a mixture of [self-attention](https://paperswithcode.com/method/scaled) and [span-based dynamic convolution](https://paperswithcode.com/method/span-based-dynamic-convolution) (highligh...
Given the following machine learning model name: IMPALA, provide a description of the model
**IMPALA**, or the **Importance Weighted Actor Learner Architecture**, is an off-policy actor-critic framework that decouples acting from learning and learns from experience trajectories using [V-trace](https://paperswithcode.com/method/v-trace). Unlike the popular [A3C](https://paperswithcode.com/method/a3c)-based age...
Given the following machine learning model name: Conditional / Rectified flow matching, provide a description of the model
Conditional Flow Matching (CFM) is a fast way to train continuous normalizing flow (CNF) models. CFM is a simulation-free training objective for continuous normalizing flows that allows conditional generative modelling and speeds up training and inference.
Given the following machine learning model name: Adaptive Locally Connected Neuron, provide a description of the model
The **Adaptive Locally Connected Neuron (ALCN)** is a topology-aware, locally adaptive focusing neuron: $$a = f\Bigg( \sum_{i=1}^{m} w_{i}\,\phi\left( \tau\left(i\right),\Theta\right) x_{i} + b \Bigg)$$
Given the following machine learning model name: Visual Commonsense Region-based Convolutional Neural Network, provide a description of the model
**VC R-CNN** is an unsupervised feature representation learning method, which uses Region-based Convolutional Neural Network ([R-CNN](https://paperswithcode.com/method/r-cnn)) as the visual backbone, and the causal intervention as the training objective. Given a set of detected object regions in an image (e.g., using [...
Given the following machine learning model name: Strip Pooling, provide a description of the model
**Strip Pooling** is a pooling strategy for scene parsing which considers a long but narrow kernel, i.e., $1\times{N}$ or $N\times{1}$. As an alternative to global pooling, strip pooling offers two advantages. First, it deploys a long kernel shape along one spatial dimension and hence enables capturing long-range relat...
Given the following machine learning model name: Linear Combination of Activations, provide a description of the model
The **Linear Combination of Activations**, or **LinComb**, is a type of activation function that has trainable parameters and uses the linear combination of other activation functions. $$LinComb(x) = \sum\limits_{i=0}^{n} w_i \mathcal{F}_i(x)$$
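A small NumPy sketch of the formula above (the particular base activations and weight values here are illustrative assumptions): LinComb simply evaluates each base activation on the input and sums them with trainable weights.

```python
import numpy as np

def lincomb(x, weights, activations):
    """LinComb(x) = sum_i w_i * F_i(x): a trainable linear combination
    of base activation functions."""
    return sum(w * f(x) for w, f in zip(weights, activations))

relu = lambda x: np.maximum(x, 0.0)
tanh = np.tanh
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 2.0])
y = lincomb(x, weights=[0.5, 0.3, 0.2], activations=[relu, tanh, sigmoid])
```

In a network the weights $w_i$ would be learned alongside the other parameters rather than fixed as here.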
Given the following machine learning model name: NoisyNet-Dueling, provide a description of the model
**NoisyNet-Dueling** is a modification of a [Dueling Network](https://paperswithcode.com/method/dueling-network) that utilises noisy linear layers for exploration instead of $\epsilon$-greedy exploration as in the original Dueling formulation.
Given the following machine learning model name: Temporal Graph Network, provide a description of the model
**Temporal Graph Network**, or **TGN**, is a framework for deep learning on dynamic graphs represented as sequences of timed events. The memory (state) of the model at time $t$ consists of a vector $\mathbf{s}_i(t)$ for each node $i$ the model has seen so far. The memory of a node is updated after an event (e.g. intera...
Given the following machine learning model name: RAG, provide a description of the model
**Retrieval-Augmented Generation**, or **RAG**, is a type of language generation model that combines pre-trained parametric and non-parametric memory for language generation. Specifically, the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed w...
Given the following machine learning model name: Exponential Decay, provide a description of the model
**Exponential Decay** is a learning rate schedule where we decay the learning rate with more iterations using an exponential function: $$ \text{lr} = \text{lr}\_{0}\exp\left(-kt\right) $$ Image Credit: [Suki Lau](https://towardsdatascience.com/learning-rate-schedules-and-adaptive-learning-rate-methods-for-deep-le...
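A minimal sketch of the schedule above (the constants $\text{lr}_0$ and $k$ are illustrative): the learning rate decays by a constant factor per unit of time, halving roughly every $\ln(2)/k$ iterations.

```python
import math

def exponential_decay(lr0, k, t):
    """Exponentially decayed learning rate: lr = lr0 * exp(-k * t)."""
    return lr0 * math.exp(-k * t)

# lr0 = 0.1, k = 0.1: the rate halves about every ln(2)/0.1 ~ 6.9 steps.
schedule = [exponential_decay(0.1, 0.1, t) for t in range(0, 30, 10)]
```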
Given the following machine learning model name: CTAL, provide a description of the model
**CTAL** is a pre-training framework for strong audio-and-language representations with a [Transformer](https://paperswithcode.com/method/transformer), which aims to learn the intra-modality and inter-modality connections between audio and language through two proxy tasks on a large amount of audio-and-language pair...
Given the following machine learning model name: Neural Image Assessment, provide a description of the model
**Neural Image Assessment (NIMA)** is a convolutional neural network trained to predict the distribution of human opinion scores for the aesthetic and technical quality of an image. In the context of image enhancement, maximizing the NIMA score as a prior can increase the likelihood of enhancing the perceptual quality of an image.
Given the following machine learning model name: Randomized Smoothing, provide a description of the model
**Randomized Smoothing** is a method for constructing a provably robust classifier from an arbitrary base classifier: the smoothed classifier returns the class the base classifier is most likely to predict when the input is perturbed with isotropic Gaussian noise, which yields a certified robustness radius around the input under the $\ell_{2}$ norm.
Given the following machine learning model name: DenseNAS-C, provide a description of the model
**DenseNAS-C** is a mobile convolutional neural network discovered through the [DenseNAS](https://paperswithcode.com/method/densenas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method. The basic building block is MBConvs, or inverted bottleneck residuals, from the [Mobile...
Given the following machine learning model name: Sliding Window Attention, provide a description of the model
**Sliding Window Attention** is an attention pattern for attention-based models. It was proposed as part of the [Longformer](https://paperswithcode.com/method/longformer) architecture. It is motivated by the fact that non-sparse attention in the original [Transformer](https://paperswithcode.com/method/transformer) form...
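A minimal NumPy sketch of the resulting banded attention pattern (the helper name and window size are illustrative): each query position may attend only to keys within a fixed-width window around it, so the number of attended pairs grows linearly with sequence length instead of quadratically.

```python
import numpy as np

def sliding_window_mask(seq_len, window):
    """Boolean attention mask where position i may only attend to
    positions j with |i - j| <= window // 2 (a band of width `window`)."""
    idx = np.arange(seq_len)
    return np.abs(idx[:, None] - idx[None, :]) <= window // 2

# Each query attends to itself and one neighbour on each side.
mask = sliding_window_mask(seq_len=6, window=2)
```

In Longformer this local pattern is combined with a few global-attention positions; the mask above shows only the sliding-window part.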
Given the following machine learning model name: Denoised Smoothing, provide a description of the model
**Denoised Smoothing** is a method for obtaining a provably robust classifier from a fixed pretrained one, without any additional training or fine-tuning of the latter. The basic idea is to prepend a custom-trained denoiser before the pretrained classifier, and then apply randomized smoothing. Randomized smoothing is a...
Given the following machine learning model name: Dilated Bottleneck with Projection Block, provide a description of the model
**Dilated Bottleneck with Projection Block** is an image model block used in the [DetNet](https://paperswithcode.com/method/detnet) convolutional neural network architecture. It employs a bottleneck structure with dilated convolutions to efficiently enlarge the receptive field. It uses a [1x1 convolution](https://paper...
Given the following machine learning model name: VoVNet, provide a description of the model
**VoVNet** is a convolutional neural network that seeks to make [DenseNet](https://paperswithcode.com/method/densenet) more efficient by concatenating all features only once, in the last feature map, which keeps the input size constant and enables enlarging the output channels. In the Figure to the right, $F$ represents a [c...
Given the following machine learning model name: 1x1 Convolution, provide a description of the model
A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel...
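Since it maps each pixel's channel vector through the same linear map, a 1x1 convolution can be sketched in NumPy as a per-position matrix multiply (the shapes and channel counts below are illustrative):

```python
import numpy as np

def conv1x1(x, w):
    """A 1x1 convolution: every spatial position's channel vector is
    transformed by the same weight matrix, i.e. a per-pixel linear layer.
    x: (H, W, C_in), w: (C_in, C_out) -> (H, W, C_out)."""
    return x @ w

x = np.random.default_rng(0).normal(size=(4, 4, 8))  # 8 input channels
w = np.random.default_rng(1).normal(size=(8, 2))     # project down to 2 channels
y = conv1x1(x, w)  # shape (4, 4, 2): channel dimensionality reduction
```

Choosing `C_out < C_in`, as here, is the dimensionality-reduction use mentioned above; a non-linearity applied to `y` gives the "non-linearity after convolutions" use.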
Given the following machine learning model name: Guided Anchoring, provide a description of the model
**Guided Anchoring** is an anchoring scheme for object detection which leverages semantic features to guide the anchoring. The method is motivated by the observation that objects are not distributed evenly over the image. The scale of an object is also closely related to the imagery content, its location and geometry o...
Given the following machine learning model name: Self-Attention Guidance, provide a description of the model