prompts: stringlengths 87–212
description: stringlengths 0–6.76k
Given the following machine learning model name: Graph Contrastive Coding, provide a description of the model
**Graph Contrastive Coding** is a self-supervised graph neural network pre-training framework to capture the universal network topological properties across multiple networks. GCC's pre-training task is designed as subgraph instance discrimination in and across networks and leverages contrastive learning to empower gra...
Given the following machine learning model name: SKEP, provide a description of the model
**SKEP** is a self-supervised pre-training method for sentiment analysis. With the help of automatically-mined knowledge, SKEP conducts sentiment masking and constructs three sentiment knowledge prediction objectives, so as to embed sentiment information at the word, polarity and aspect level into pre-trained sentiment...
Given the following machine learning model name: SimCLRv2, provide a description of the model
**SimCLRv2** is a semi-supervised learning method for learning from few labeled examples while making best use of a large amount of unlabeled data. It is a modification of a recently proposed contrastive learning framework, [SimCLR](https://www.paperswithcode.com/method/simclr). It improves upon it in three major ways:...
Given the following machine learning model name: Root Mean Square Layer Normalization, provide a description of the model
RMSNorm regularizes the summed inputs to a neuron in one layer according to root mean square (RMS), giving the model re-scaling invariance property and implicit learning rate adaptation ability. RMSNorm is computationally simpler and thus more efficient than LayerNorm.
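As a minimal sketch of the mechanism described above (the `eps` value and function name are illustrative assumptions, not from the paper): divide the inputs by their root mean square, then apply a learned gain, without the mean subtraction or bias of LayerNorm.

```python
import numpy as np

def rms_norm(x, gain, eps=1e-8):
    """Normalize x by its root mean square, then scale by a learned gain.
    No mean is subtracted and no bias is added, unlike LayerNorm, which
    is what gives RMSNorm its lower cost and re-scaling invariance."""
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return (x / rms) * gain

x = np.array([3.0, -4.0])            # RMS = sqrt((9 + 16) / 2) = sqrt(12.5)
out = rms_norm(x, gain=np.ones(2))
```

Note the re-scaling invariance: multiplying `x` by any positive constant leaves the output (essentially) unchanged, since the constant cancels in `x / rms`.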
Given the following machine learning model name: Fourier Contour Embedding, provide a description of the model
**Fourier Contour Embedding** is a text instance representation that allows networks to learn diverse text geometry variances. Most of existing methods model text instances in image spatial domain via masks or contour point sequences in the Cartesian or the polar coordinate system. However, the mask representation migh...
Given the following machine learning model name: FBNet, provide a description of the model
**FBNet** is a type of convolutional neural architecture discovered through [DNAS](https://paperswithcode.com/method/dnas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search). It utilises a basic type of image model block inspired by [MobileNetv2](https://paperswithcode.com/metho...
Given the following machine learning model name: DeLighT Block, provide a description of the model
A **DeLighT Block** is a block used in the [DeLighT](https://paperswithcode.com/method/delight) [transformer](https://paperswithcode.com/method/transformer) architecture. It uses a [DExTra](https://paperswithcode.com/method/dextra) transformation to reduce the dimensionality of the vectors entered into the attention la...
Given the following machine learning model name: CTRL, provide a description of the model
**CTRL** is a conditional [transformer](https://paperswithcode.com/method/transformer) language model, trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised...
Given the following machine learning model name: Random elastic image morphing, provide a description of the model
M. Bulacu, A. Brink, T. v. d. Zant and L. Schomaker, "Recognition of Handwritten Numerical Fields in a Large Single-Writer Historical Collection," 2009 10th International Conference on Document Analysis and Recognition, Barcelona, Spain, 2009, pp. 808-812, doi: 10.1109/ICDAR.2009.8. Code: https://github.com/GrHound/...
Given the following machine learning model name: Blind Image Decomposition Network, provide a description of the model
**BIDeN**, or **Blind Image Decomposition Network**, is a model for blind image decomposition, which requires separating a superimposed image into constituent underlying images in a blind setting, that is, both the source components involved in mixing as well as the mixing mechanism are unknown. For example, rain may ...
Given the following machine learning model name: Center-pivot convolution, provide a description of the model
Given the following machine learning model name: FASFA: A Novel Next-Generation Backpropagation Optimizer, provide a description of the model
This paper introduces the fast adaptive stochastic function accelerator (FASFA) for gradient-based optimization of stochastic objective functions. It works based on Nesterov-enhanced first and second momentum estimates. The method is simple and effective during implementation because it has intuitive/familiar hyperpara...
Given the following machine learning model name: Laplacian Pyramid Network, provide a description of the model
**LapStyle**, or **Laplacian Pyramid Network**, is a feed-forward style transfer method. It uses a [Drafting Network](https://paperswithcode.com/method/drafting-network) to transfer global style patterns in low-resolution, and adopts higher resolution [Revision Networks](https://paperswithcode.com/method/revision-netwo...
Given the following machine learning model name: rnnDrop, provide a description of the model
**rnnDrop** is a [dropout](https://paperswithcode.com/method/dropout) based regularization technique for [recurrent neural networks](https://paperswithcode.com/methods/category/recurrent-neural-networks). It amounts to using the same dropout mask at every timestep. It drops both the non-recurrent and recurrent connecti...
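The key idea (one dropout mask per sequence, reused at every timestep) can be sketched as follows; the toy recurrence and function names are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def rnn_drop_mask(hidden_size, p, rng):
    """Sample ONE dropout mask per sequence, as in rnnDrop: the same
    mask is then reused at every timestep (inverted-dropout scaling)."""
    keep = (rng.random(hidden_size) >= p).astype(float)
    return keep / (1.0 - p)

def run_sequence(xs, mask):
    """Toy recurrence h_t = tanh(x_t + h_{t-1} * mask): the shared mask
    drops the SAME recurrent units at every step of the sequence."""
    h = np.zeros_like(mask)
    for x in xs:
        h = np.tanh(x + h * mask)
    return h

rng = np.random.default_rng(0)
mask = rnn_drop_mask(hidden_size=8, p=0.5, rng=rng)
h = run_sequence(np.ones((5, 8)), mask)
```

Sampling the mask once per sequence (rather than per step) is what distinguishes rnnDrop from naively applying standard dropout inside the recurrence.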
Given the following machine learning model name: VideoBERT, provide a description of the model
VideoBERT adapts the powerful [BERT](https://paperswithcode.com/method/bert) model to learn a joint visual-linguistic representation for video. It is used in numerous tasks, including action classification and video captioning.
Given the following machine learning model name: Gated Attention Networks, provide a description of the model
Gated Attention Networks (GaAN) is a new architecture for learning on graphs. Unlike the traditional multi-head attention mechanism, which equally consumes all attention heads, GaAN uses a convolutional sub-network to control each attention head’s importance. Image credit: [GaAN: Gated Attention Networks for Learnin...
Given the following machine learning model name: High-level backbone, provide a description of the model
Given the following machine learning model name: DSelect-k, provide a description of the model
**DSelect-k** is a continuously differentiable and sparse gate for Mixture-of-experts (MoE), based on a novel binary encoding formulation. Given a user-specified parameter $k$, the gate selects at most $k$ out of the $n$ experts. The gate can be trained using first-order methods, such as stochastic gradient descent, an...
Given the following machine learning model name: RFB Net, provide a description of the model
**RFB Net** is a one-stage object detector that utilises a receptive field block module. It utilises a VGG16 backbone, and is otherwise quite similar to the [SSD](https://paperswithcode.com/method/ssd) architecture.
Given the following machine learning model name: VOS, provide a description of the model
**VOS** is a type of video object segmentation model consisting of two network components. The target appearance model consists of a light-weight module, which is learned during the inference stage using fast optimization techniques to predict a coarse but robust target segmentation. The segmentation model is exclusive...
Given the following machine learning model name: Extremely Efficient Spatial Pyramid of Depth-wise Dilated Separable Convolutions, provide a description of the model
An **EESP Unit**, or Extremely Efficient Spatial Pyramid of Depth-wise Dilated Separable Convolutions, is an image model block designed for edge devices. It was proposed as part of the [ESPNetv2](https://paperswithcode.com/method/espnetv2) CNN architecture. This building block is based on a reduce-split-transform-...
Given the following machine learning model name: DifferNet, provide a description of the model
Given the following machine learning model name: Focal Transformers, provide a description of the model
The **focal self-attention** mechanism is built to make Transformer layers scalable to high-resolution inputs. Instead of attending to all tokens at a fine granularity, the approach attends to fine-grain tokens only locally and to summarized tokens globally. As such, it can cover as many regions as standard self-attention but with much l...
Given the following machine learning model name: Mixture Normalization, provide a description of the model
**Mixture Normalization** is a normalization technique that relies on an approximation of the probability density function of the internal representations. Any continuous distribution can be approximated with arbitrary precision using a Gaussian Mixture Model (GMM). Hence, instead of computing one set of statistical meas...
Given the following machine learning model name: Synthetic Minority Over-sampling Technique., provide a description of the model
Perhaps the most widely used approach to synthesizing new examples is the Synthetic Minority Oversampling Technique, or SMOTE for short, described by Nitesh Chawla et al. in their 2002 paper "SMOTE: Synthetic Minority Over-sampling Technique." SMOTE works by ...
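The core SMOTE step can be sketched in a few lines: pick a minority-class point, pick one of its k nearest minority neighbours, and interpolate a new point on the segment between them. This is a minimal illustrative sketch (real implementations such as imbalanced-learn handle categorical features, batching, and edge cases):

```python
import numpy as np

def smote_sample(X_min, k=3, rng=None):
    """Generate ONE synthetic minority example: pick a random minority
    point, pick one of its k nearest minority neighbours, and interpolate
    a new point somewhere on the line segment between them."""
    rng = rng or np.random.default_rng()
    i = rng.integers(len(X_min))
    x = X_min[i]
    d = np.linalg.norm(X_min - x, axis=1)
    neighbours = np.argsort(d)[1:k + 1]   # nearest neighbours, excluding x itself
    j = rng.choice(neighbours)
    lam = rng.random()                    # interpolation factor in [0, 1)
    return x + lam * (X_min[j] - x)

X_minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new_point = smote_sample(X_minority, k=2, rng=np.random.default_rng(42))
```

Because the synthetic point lies between two existing minority points, it stays inside the region the minority class already occupies.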
Given the following machine learning model name: Trans-Encoder, provide a description of the model
Unsupervised knowledge distillation from a pretrained language model to *itself*, by alternating between its bi- and cross-encoder forms.
Given the following machine learning model name: Class-activation map, provide a description of the model
Class activation maps could be used to interpret the prediction decision made by the convolutional neural network (CNN). Image source: [Learning Deep Features for Discriminative Localization](https://paperswithcode.com/paper/learning-deep-features-for-discriminative)
Given the following machine learning model name: DenseNet, provide a description of the model
A **DenseNet** is a type of convolutional neural network that utilises [dense connections](https://paperswithcode.com/method/dense-connections) between layers, through [Dense Blocks](http://www.paperswithcode.com/method/dense-block), where we connect *all layers* (with matching feature-map sizes) directly with each oth...
Given the following machine learning model name: AMSGrad, provide a description of the model
**AMSGrad** is a stochastic optimization method that seeks to fix a convergence issue with [Adam](https://paperswithcode.com/method/adam) based optimizers. AMSGrad uses the maximum of past squared gradients $v\_{t}$ rather than the exponential average to update the parameters: $$m\_{t} = \beta\_{1}m\_{t-1} + \left...
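A minimal sketch of one AMSGrad step, under the standard hyperparameter defaults (the function name and the toy quadratic objective are illustrative assumptions): the only change from Adam is keeping the running maximum of the second-moment estimate in the denominator.

```python
import numpy as np

def amsgrad_step(theta, grad, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One AMSGrad update. Unlike Adam, the denominator uses the running
    MAXIMUM of the squared-gradient average (v_hat), so the effective
    step size can never grow between iterations."""
    m, v, v_hat = state
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    v_hat = np.maximum(v_hat, v)          # never let the denominator shrink
    theta = theta - lr * m / (np.sqrt(v_hat) + eps)
    return theta, (m, v, v_hat)

# minimize f(x) = x^2 (gradient 2x) starting from x = 1
theta = np.array([1.0])
state = (np.zeros(1), np.zeros(1), np.zeros(1))
for _ in range(200):
    theta, state = amsgrad_step(theta, 2 * theta, state)
```

The `np.maximum` line is the whole fix: Adam's exponential average can let $v_t$ decay, inflating the step size and breaking convergence on some problems.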
Given the following machine learning model name: Herring, provide a description of the model
**Herring** is a parameter server based distributed training method. It combines AWS's Elastic Fabric [Adapter](https://paperswithcode.com/method/adapter) (EFA) with a novel parameter sharding technique that makes better use of the available network bandwidth. Herring uses EFA and balanced fusion buffer to optimally u...
Given the following machine learning model name: Self-Calibrated Convolutions, provide a description of the model
Liu et al. presented self-calibrated convolution as a means to enlarge the receptive field at each spatial location. Self-calibrated convolution is used together with a standard convolution. It first divides the input feature $X$ into $X_{1}$ and $X_{2}$ in the channel domain. The self-calibrated convolution first ...
Given the following machine learning model name: Spatial-Reduction Attention, provide a description of the model
**Spatial-Reduction Attention**, or **SRA**, is a [multi-head attention](https://paperswithcode.com/method/multi-head-attention) module used in the [Pyramid Vision Transformer](https://paperswithcode.com/method/pvt) architecture which reduces the spatial scale of the key $K$ and value $V$ before the attention operation...
Given the following machine learning model name: Tokens-To-Token Vision Transformer, provide a description of the model
**T2T-ViT** (Tokens-To-Token Vision Transformer) is a type of [Vision Transformer](https://paperswithcode.com/method/vision-transformer) which incorporates 1) a layerwise Tokens-to-Token (T2T) transformation to progressively structurize the image to tokens by recursively aggregating neighboring Tokens into one Token (T...
Given the following machine learning model name: Tacotron2, provide a description of the model
**Tacotron 2** is a neural network architecture for speech synthesis directly from text. It consists of two components: - a recurrent sequence-to-sequence feature prediction network with attention which predicts a sequence of mel spectrogram frames from an input character sequence - a modified version of [WaveNet...
Given the following machine learning model name: Hierarchical Feature Fusion, provide a description of the model
**Hierarchical Feature Fusion (HFF)** is a feature fusion method employed in [ESP](https://paperswithcode.com/method/esp) and [EESP](https://paperswithcode.com/method/eesp) image model blocks for degridding. In the ESP module, concatenating the outputs of dilated convolutions gives the ESP module a large effective rece...
Given the following machine learning model name: Bottleneck Attention Module, provide a description of the model
Park et al. proposed the bottleneck attention module (BAM), aiming to efficiently improve the representational capability of networks. It uses dilated convolution to enlarge the receptive field of the spatial attention sub-module, and builds a bottleneck structure as suggested by ResNet to save computational cost. ...
Given the following machine learning model name: Contrastive Predictive Coding, provide a description of the model
**Contrastive Predictive Coding (CPC)** learns self-supervised representations by predicting the future in latent space by using powerful autoregressive models. The model uses a probabilistic contrastive loss which induces the latent space to capture information that is maximally useful to predict future samples. F...
Given the following machine learning model name: WaveRNN, provide a description of the model
**WaveRNN** is a single-layer recurrent neural network for audio generation that is designed to efficiently predict 16-bit raw audio samples. The overall computation in the WaveRNN is as follows (biases omitted for brevity): $$ \mathbf{x}\_{t} = \left[\mathbf{c}\_{t−1},\mathbf{f}\_{t−1}, \mathbf{c}\_{t}\right] $$ ...
Given the following machine learning model name: Grouped-query attention, provide a description of the model
**Grouped-query attention** is an interpolation of multi-query and multi-head attention that achieves quality close to multi-head attention at speed comparable to multi-query attention.
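The interpolation can be sketched directly: groups of query heads share a single key/value head. This is an illustrative single-sequence sketch (the head layout, shapes, and function names are assumptions, not a reference implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def grouped_query_attention(Q, K, V, n_kv_heads):
    """Q: (n_q_heads, seq, d); K, V: (n_kv_heads, seq, d).
    Each group of n_q_heads / n_kv_heads query heads shares one K/V head.
    n_kv_heads == 1 recovers multi-query attention; n_kv_heads == n_q_heads
    recovers ordinary multi-head attention."""
    n_q, seq, d = Q.shape
    group = n_q // n_kv_heads
    out = np.empty_like(Q)
    for h in range(n_q):
        g = h // group                        # shared K/V head for this query head
        scores = Q[h] @ K[g].T / np.sqrt(d)
        out[h] = softmax(scores) @ V[g]
    return out

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 3, 2))
K = rng.normal(size=(2, 3, 2))
V = rng.normal(size=(2, 3, 2))
Q[1] = Q[0]                                   # heads 0 and 1 share K/V head 0
out = grouped_query_attention(Q, K, V, n_kv_heads=2)
```

The speed benefit in practice comes from caching far fewer K/V heads during autoregressive decoding, while quality stays close to full multi-head attention.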
Given the following machine learning model name: StyleGAN, provide a description of the model
**StyleGAN** is a type of generative adversarial network. It uses an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature; in particular, the use of [adaptive instance normalization](https://paperswithcode.com/method/adaptive-instance-normalization). Otherwise...
Given the following machine learning model name: Multi-band MelGAN, provide a description of the model
**Multi-band MelGAN**, or **MB-MelGAN**, is a waveform generation model focusing on high-quality text-to-speech. It improves the original [MelGAN](https://paperswithcode.com/method/melgan) in several ways. First, it increases the receptive field of the generator, which is proven to be beneficial to speech generation. S...
Given the following machine learning model name: pixel2style2pixel, provide a description of the model
**Pixel2Style2Pixel**, or **pSp**, is an image-to-image translation framework that is based on a novel encoder that directly generates a series of style vectors which are fed into a pretrained [StyleGAN](https://paperswithcode.com/method/stylegan) generator, forming the extended $\mathcal{W+}$ latent space. Feature map...
Given the following machine learning model name: Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets, provide a description of the model
To obtain excellent deep neural architectures, a series of techniques are carefully designed in EfficientNets. The giant formula for simultaneously enlarging the resolution, depth and width provides a Rubik's cube for neural networks, so that we can find networks with high efficiency and excellent performance by twi...
Given the following machine learning model name: ResNeXt, provide a description of the model
A **ResNeXt** repeats a building block that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) $C$, as an essential factor in addition to the dimensions of depth...
Given the following machine learning model name: OODformer, provide a description of the model
OODformer is a [transformer](https://paperswithcode.com/method/transformer)-based OOD detection architecture that leverages the contextualization capabilities of the transformer. Incorporating the transformer as the principal feature extractor allows it to exploit the object concepts and their discriminative attributes alon...
Given the following machine learning model name: Principal Neighbourhood Aggregation, provide a description of the model
**Principal Neighbourhood Aggregation** (PNA) is a general and flexible architecture for graphs combining multiple aggregators with degree-scalers (which generalize the sum aggregator).
Given the following machine learning model name: GradientDICE, provide a description of the model
**GradientDICE** is a density ratio learning method for estimating the density ratio between the state distribution of the target policy and the sampling distribution in off-policy reinforcement learning. It optimizes a different objective from [GenDICE](https://arxiv.org/abs/2002.09072) by using the Perron-Frobenius t...
Given the following machine learning model name: HRNet, provide a description of the model
**HRNet**, or **High-Resolution Net**, is a general purpose convolutional neural network for tasks like semantic segmentation, object detection and image classification. It is able to maintain high resolution representations through the whole process. We start from a high-resolution [convolution](https://paperswithcode...
Given the following machine learning model name: mT5, provide a description of the model
**mT5** is a multilingual variant of [T5](https://paperswithcode.com/method/t5) that was pre-trained on a new Common Crawl-based dataset covering $101$ languages.
Given the following machine learning model name: Data augmentation using Polya-Gamma latent variables., provide a description of the model
This method applies Polya-Gamma latent variables as a way to obtain closed form expressions for full-conditionals of posterior distributions in sampling algorithms like MCMC.
Given the following machine learning model name: WideResNet, provide a description of the model
**Wide Residual Networks** are a variant on [ResNets](https://paperswithcode.com/method/resnet) where we decrease depth and increase the width of residual networks. This is achieved through the use of wide residual blocks.
Given the following machine learning model name: GShard, provide a description of the model
**GShard** is an intra-layer parallel distributed training method. It consists of a set of simple APIs for annotations, and a compiler extension in XLA for automatic parallelization.
Given the following machine learning model name: GrowNet, provide a description of the model
**GrowNet** is a novel approach to combine the power of gradient boosting to incrementally build complex deep neural networks out of shallow components. It introduces a versatile framework that can readily be adapted for a diverse range of machine learning tasks in a wide variety of domains.
Given the following machine learning model name: Gradient Quantization with Adaptive Levels/Multiplier, provide a description of the model
Many communication-efficient variants of [SGD](https://paperswithcode.com/method/sgd) use gradient quantization schemes. These schemes are often heuristic and fixed over the course of training. We empirically observe that the statistics of gradients of deep models change during the training. Motivated by this observati...
Given the following machine learning model name: Multi-head of Mixed Attention, provide a description of the model
Multiple heads of both self- and cross-attention.
Given the following machine learning model name: Lightweight Convolution, provide a description of the model
**LightConv** is a type of [depthwise convolution](https://paperswithcode.com/method/depthwise-convolution) for sequential modelling which shares certain output channels and whose weights are normalized across the temporal dimension using a [softmax](https://paperswithcode.com/method/softmax). Compared to self-attentio...
Given the following machine learning model name: PointRend, provide a description of the model
**PointRend** is a module for image segmentation tasks, such as instance and semantic segmentation, that attempts to treat segmentation as an image rendering problem to efficiently "render" high-quality label maps. It uses a subdivision strategy to adaptively select a non-uniform set of points at which to compute labels. Po...
Given the following machine learning model name: Sample Redistribution, provide a description of the model
**Sample Redistribution** is a [data augmentation](https://paperswithcode.com/methods/category/image-data-augmentation) technique for face detection which augments training samples based on the statistics of benchmark datasets via large-scale cropping. During training data augmentation, square patches are cropped from ...
Given the following machine learning model name: Single Headed Attention RNN, provide a description of the model
**SHA-RNN**, or **Single Headed Attention RNN**, is a recurrent neural network, and language model when combined with an embedding input and [softmax](https://paperswithcode.com/method/softmax) classifier, based on a core [LSTM](https://paperswithcode.com/method/lstm) component and a [single-headed attention](https://p...
Given the following machine learning model name: Flan-T5, provide a description of the model
**Flan-T5** is the instruction fine-tuned version of **T5** or **Text-to-Text Transfer Transformer** Language Model.
Given the following machine learning model name: Logistic Regression, provide a description of the model
**Logistic Regression**, despite its name, is a linear model for classification rather than regression. Logistic regression is also known in the literature as logit regression, maximum-entropy classification (MaxEnt) or the log-linear classifier. In this model, the probabilities describing the possible outcomes of a si...
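A minimal from-scratch sketch of binary logistic regression, fit by gradient descent on the log-loss (the learning rate, step count, and toy data are illustrative assumptions; libraries like scikit-learn add regularization and better solvers):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=500):
    """Fit weights w and bias b by gradient descent on the log-loss.
    The model outputs P(y=1 | x) = sigmoid(x @ w + b)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)   # gradient of mean log-loss w.r.t. w
        b -= lr * np.mean(p - y)             # gradient w.r.t. b
    return w, b

# linearly separable 1-D toy data: class boundary near x = 1.5
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = fit_logistic(X, y)
p = sigmoid(X @ w + b)
```

Despite the "regression" name, the sigmoid output is thresholded (commonly at 0.5) to produce class labels.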
Given the following machine learning model name: DeepLab, provide a description of the model
**DeepLab** is a semantic segmentation architecture. First, the input image goes through the network with the use of dilated convolutions. Then the output from the network is bilinearly interpolated and goes through the fully connected [CRF](https://paperswithcode.com/method/crf) to fine-tune the result, obtaining the f...
Given the following machine learning model name: Growing Cosine Unit, provide a description of the model
An oscillatory activation function defined as $x \cdot \cos(x)$ that reports better performance than Sigmoid, Mish, Swish, and ReLU on several benchmarks.
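The definition translates directly to code (the function name is an illustrative choice):

```python
import math

def gcu(x):
    """Growing Cosine Unit: f(x) = x * cos(x), an oscillatory activation.
    Unlike ReLU-style activations it is non-monotonic and changes sign."""
    return x * math.cos(x)
```

Note the oscillation: `gcu` is zero at the origin, negative around `x = 2` (where `cos` is negative), and its amplitude grows with `|x|`.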
Given the following machine learning model name: Gradient Checkpointing, provide a description of the model
**Gradient Checkpointing** is a method used for reducing the memory footprint when training deep neural networks, at the cost of having a small increase in computation time.
Given the following machine learning model name: SRGAN Residual Block, provide a description of the model
**SRGAN Residual Block** is a residual block used in the [SRGAN](https://paperswithcode.com/method/srgan) generator for image super-resolution. It is similar to standard [residual blocks](https://paperswithcode.com/method/residual-block), although it uses a [PReLU](https://paperswithcode.com/method/prelu) activation fu...
Given the following machine learning model name: NoisyNet-DQN, provide a description of the model
**NoisyNet-DQN** is a modification of a [DQN](https://paperswithcode.com/method/dqn) that utilises noisy linear layers for exploration instead of $\epsilon$-greedy exploration as in the original DQN formulation.
Given the following machine learning model name: Wasserstein GAN, provide a description of the model
**Wasserstein GAN**, or **WGAN**, is a type of generative adversarial network that minimizes an approximation of the Earth-Mover's distance (EM) rather than the Jensen-Shannon divergence as in the original [GAN](https://paperswithcode.com/method/gan) formulation. It leads to more stable training than original GANs with...
Given the following machine learning model name: Mutual Information Machine/Mask Image Modeling, provide a description of the model
Given the following machine learning model name: Hardtanh Activation, provide a description of the model
**Hardtanh** is an activation function used for neural networks: $$ f\left(x\right) = \begin{cases} -1 & \text{if } x < -1 \\\\ x & \text{if } -1 \leq x \leq 1 \\\\ 1 & \text{if } x > 1 \end{cases} $$ It is a cheaper and more computationally efficient version of the [tanh activation](https://paperswit...
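The piecewise definition above amounts to clipping the input to $[-1, 1]$:

```python
def hardtanh(x):
    """Hardtanh: clip x to [-1, 1].
    A piecewise-linear, cheaper stand-in for tanh."""
    return max(-1.0, min(1.0, x))
```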
Given the following machine learning model name: MoBY, provide a description of the model
**MoBY** is a self-supervised learning approach for [Vision Transformers](https://paperswithcode.com/methods/category/vision-transformer). The approach is basically a combination of [MoCo v2](https://paperswithcode.com/method/moco-v2) and [BYOL](https://paperswithcode.com/method/byol). It inherits the momentum design, the key queue, and the cont...
Given the following machine learning model name: GPT-2, provide a description of the model
**GPT-2** is a [Transformer](https://paperswithcode.com/methods/category/transformers) architecture that was notable for its size (1.5 billion parameters) on its release. The model is pretrained on a WebText dataset - text from 45 million website links. It largely follows the previous [GPT](https://paperswithcode.com/m...
Given the following machine learning model name: Cosine Annealing, provide a description of the model
**Cosine Annealing** is a type of learning rate schedule that has the effect of starting with a large learning rate that is relatively rapidly decreased to a minimum value before being increased rapidly again. The resetting of the learning rate acts like a simulated restart of the learning process and the re-use of goo...
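The schedule can be sketched as follows, assuming the common warm-restarts variant where the step counter resets each cycle (the function signature and cycle handling via a modulo are illustrative assumptions):

```python
import math

def cosine_annealing_lr(step, t_max, lr_max, lr_min=0.0):
    """Learning rate at `step` under cosine annealing with warm restarts:
    within each cycle of length t_max the rate falls from lr_max to lr_min
    along a cosine curve, then resets ("restarts") at the next cycle."""
    t = step % t_max                      # position within the current cycle
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / t_max))
```

At the start of a cycle the rate is `lr_max`, halfway through it is the midpoint of `lr_max` and `lr_min`, and at the cycle boundary it jumps back up, giving the "simulated restart" described above.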
Given the following machine learning model name: ParaNet, provide a description of the model
**ParaNet** is a non-autoregressive attention-based architecture for text-to-speech, which is fully convolutional and converts text to mel spectrogram. ParaNet distills the attention from the autoregressive text-to-spectrogram model, and iteratively refines the alignment between text and spectrogram in a layer-by-layer...
Given the following machine learning model name: Expected Sarsa, provide a description of the model
**Expected Sarsa** is like [Q-learning](https://paperswithcode.com/method/q-learning) but instead of taking the maximum over next state-action pairs, we use the expected value, taking into account how likely each action is under the current policy. $$Q\left(S\_{t}, A\_{t}\right) \leftarrow Q\left(S\_{t}, A\_{t}\righ...
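The update rule can be sketched with a tabular Q function; the dict-based representation, hyperparameter values, and toy policy are illustrative assumptions:

```python
def expected_sarsa_update(Q, s, a, r, s_next, policy, alpha=0.1, gamma=0.99):
    """Q: dict (state, action) -> value; policy(state) -> {action: prob}.
    The TD target uses the EXPECTED value of the next state under the
    current policy, rather than the max over actions as in Q-learning."""
    expected = sum(p * Q.get((s_next, b), 0.0)
                   for b, p in policy(s_next).items())
    td_target = r + gamma * expected
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (td_target - Q.get((s, a), 0.0))
    return Q[(s, a)]

Q = {("s1", "left"): 1.0, ("s1", "right"): 3.0}
uniform = lambda state: {"left": 0.5, "right": 0.5}
# expected next value = 0.5 * 1 + 0.5 * 3 = 2
new_value = expected_sarsa_update(Q, "s0", "go", r=1.0, s_next="s1", policy=uniform)
```

Averaging over the policy's action distribution removes the sampling variance that Sarsa incurs from its single sampled next action.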
Given the following machine learning model name: UNIMO, provide a description of the model
**UNIMO** is a multi-modal pre-training architecture that can effectively adapt to both single modal and multimodal understanding and generation tasks. UNIMO learns visual representations and textual representations simultaneously, and unifies them into the same semantic space via [cross-modal contrastive learning](htt...
Given the following machine learning model name: Relational Graph Convolution Network, provide a description of the model
An **RGCN**, or **Relational Graph Convolution Network**, is an application of the [GCN framework](https://paperswithcode.com/method/gcn) to modeling relational data, specifically to link prediction and entity classification tasks. See [here](https://docs.dgl.ai/en/0.4.x/tutorials/models/1_gnn/4_rgcn.html) for an...
Given the following machine learning model name: K-Maximal Word Allocation, provide a description of the model
Given the following machine learning model name: Stochastic Gradient Variational Bayes, provide a description of the model
Given the following machine learning model name: Overfitting Conditional Diffusion Model, provide a description of the model
Given the following machine learning model name: VGG-16, provide a description of the model
Given the following machine learning model name: ReGLU, provide a description of the model
**ReGLU** is an activation function which is a variant of [GLU](https://paperswithcode.com/method/glu). The definition is as follows: $$ \text{ReGLU}\left(x, W, V, b, c\right) = \max\left(0, xW + b\right) \otimes \left(xV + c\right) $$
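The formula maps directly to code; the shapes below are a 1-D illustrative example:

```python
import numpy as np

def reglu(x, W, V, b, c):
    """ReGLU(x, W, V, b, c) = max(0, xW + b) * (xV + c): the elementwise
    product of a ReLU-activated "gate" projection and a second, ungated
    "value" projection of the same input."""
    return np.maximum(0.0, x @ W + b) * (x @ V + c)

x = np.array([1.0])
W, b = np.array([[2.0]]), np.array([1.0])    # gate branch: relu(2*1 + 1) = 3
V, c = np.array([[1.0]]), np.array([0.5])    # value branch: 1*1 + 0.5 = 1.5
out = reglu(x, W, V, b, c)
```

When the gate projection is negative, the ReLU zeroes it and the output is zero regardless of the value branch.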
Given the following machine learning model name: Metric Pairwise Constrained KMeans, provide a description of the model
Original paper: "Integrating Constraints and Metric Learning in Semi-Supervised Clustering", Bilenko et al., 2004.
Given the following machine learning model name: IFBlock, provide a description of the model
**IFBlock** is a video model block used in the [IFNet](https://paperswithcode.com/method/ifnet) architecture for video frame interpolation. IFBlocks do not contain expensive operators like cost volume or forward warping and use 3 × 3 convolution and deconvolution as building blocks. Each IFBlock has a feed-forward stru...
Given the following machine learning model name: PrivacyNet, provide a description of the model
**PrivacyNet** is a [GAN](https://paperswithcode.com/method/gan)-based semi-adversarial network (SAN) that modifies an input face image such that it can be used by a face matcher for matching purposes but cannot be reliably used by an attribute classifier. PrivacyNet allows a person to choose specific attributes that h...
Given the following machine learning model name: SongNet, provide a description of the model
**SongNet** is an auto-regressive [Transformer](https://paperswithcode.com/method/transformer)-based language model for rigid formats controlled text generation. Sets of symbols are tailor-designed to improve the modeling performance especially on format, rhyme, and sentence integrity. The attention mechanism is improved to impel t...
Given the following machine learning model name: Colorization, provide a description of the model
**Colorization** is a self-supervision approach that relies on colorization as the pretext task in order to learn image representations.
Given the following machine learning model name: Epsilon Greedy Exploration, provide a description of the model
**$\epsilon$-Greedy Exploration** is an exploration strategy in reinforcement learning that takes an exploratory action with probability $\epsilon$ and a greedy action with probability $1-\epsilon$. It tackles the exploration-exploitation tradeoff with reinforcement learning algorithms: the desire to explore the state ...
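The strategy is a few lines of code (the function name and use of the stdlib `random` module are illustrative choices):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon pick a uniformly random action (explore);
    otherwise pick the action with the highest estimated value (exploit)."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)
```

Setting `epsilon = 0` recovers a purely greedy policy; in practice `epsilon` is often annealed from a high value toward a small one as estimates improve.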
Given the following machine learning model name: RetinaMask, provide a description of the model
**RetinaMask** is a one-stage object detection method that improves upon [RetinaNet](https://paperswithcode.com/method/retinanet) by adding the task of instance mask prediction during training, as well as an [adaptive loss](https://paperswithcode.com/method/adaptive-loss) that improves robustness to parameter choice du...
Given the following machine learning model name: Spatial & Temporal Attention, provide a description of the model
Spatial & temporal attention combines the advantages of spatial attention and temporal attention as it adaptively selects both important regions and key frames. Some works compute temporal attention and spatial attention separately, while others produce joint spatio & temporal attention maps. Further works focusing on ...
Given the following machine learning model name: SegNet, provide a description of the model
**SegNet** is a semantic segmentation model. This core trainable segmentation architecture consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 networ...
Given the following machine learning model name: PGC-DGCNN, provide a description of the model
PGC-DGCNN provides a new definition of graph convolutional filter. It generalizes the most commonly adopted filter, adding a hyper-parameter controlling the distance of the considered neighborhood. The model extends graph convolutions, following an intuition derived from the well-known convolutional filters over multi...
Given the following machine learning model name: Elastic Weight Consolidation, provide a description of the model
**Elastic Weight Consolidation** is a regularization method for overcoming catastrophic forgetting in neural networks during continual learning. After training on a task, it estimates how important each parameter was for that task (via the diagonal of the Fisher information matrix) and adds a quadratic penalty that slows down learning on those important parameters when training on subsequent tasks, anchoring them near their previously learned values.
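The EWC penalty can be sketched as follows (a minimal scalar-parameter illustration, not the reference implementation; `fisher` stands for per-parameter Fisher information estimates computed on the previous task):

```python
def ewc_penalty(params, old_params, fisher, lam=1.0):
    """Quadratic EWC regulariser added to the new task's loss.

    Penalises moving each weight away from its value after the
    previous task, weighted by how important the Fisher information
    says that weight was for the previous task.
    """
    return 0.5 * lam * sum(
        f * (p - p_old) ** 2
        for p, p_old, f in zip(params, old_params, fisher)
    )
```

In a real setting `params` would be the model's current tensors and the penalty is simply added to the new task's loss before backpropagation.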
Given the following machine learning model name: Multiscale Attention ViT with Late fusion, provide a description of the model
Multiscale Attention ViT with Late fusion (MAVL) is a multi-modal network, trained with aligned image-text pairs, capable of performing targeted detection using human understandable natural language text queries. It utilizes multi-scale image features and uses deformable convolutions with late multi-modal fusion. The a...
Given the following machine learning model name: Soft-NMS, provide a description of the model
Non-maximum suppression is an integral part of the object detection pipeline. First, it sorts all detection boxes on the basis of their scores. The detection box $M$ with the maximum score is selected and all other detection boxes with a significant overlap (using a pre-defined threshold) with $M$ are suppressed. This...
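The Soft-NMS idea, decaying overlapping scores instead of discarding the boxes outright, can be sketched as follows (a minimal illustration using the Gaussian decay variant; box format and thresholds are assumptions, not the paper's reference code):

```python
import math

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: overlap with the current top box decays
    a detection's score rather than suppressing it to zero."""
    boxes, scores = list(boxes), list(scores)
    keep = []
    while boxes:
        m = max(range(len(scores)), key=lambda i: scores[i])
        keep.append((boxes[m], scores[m]))
        M = boxes.pop(m)
        scores.pop(m)
        # decay remaining scores by exp(-iou^2 / sigma)
        for i in range(len(boxes)):
            scores[i] *= math.exp(-(iou(M, boxes[i]) ** 2) / sigma)
        boxes = [b for b, s in zip(boxes, scores) if s > score_thresh]
        scores = [s for s in scores if s > score_thresh]
    return keep
```

Classic hard NMS corresponds to replacing the decay with a step function that zeroes any score whose IoU with $M$ exceeds the threshold.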
Given the following machine learning model name: AltDiffusion, provide a description of the model
In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both language...
Given the following machine learning model name: MobileViT, provide a description of the model
**MobileViT** is a light-weight, general-purpose vision transformer designed for mobile devices. It combines the strengths of convolutions and [Transformers](https://paperswithcode.com/method/transformer): MobileViT blocks use convolutions to encode local spatial information and transformers to model global dependencies, yielding an architecture that is competitive with CNN-based mobile models at similar parameter budgets.
Given the following machine learning model name: Sequential Information Threading, provide a description of the model
**Sequential Information Threading** is an unsupervised machine learning approach for identifying information threads. It leverages answers to 5W1H questions extracted from documents, the temporal relationships between documents, and hierarchical agglomerative clustering (HAC).
Given the following machine learning model name: Smooth ReLU, provide a description of the model
**Smooth ReLU** (SmeLU) is an activation function that replaces the non-differentiable kink of the [ReLU](https://paperswithcode.com/method/relu) at zero with a smooth quadratic transition region: $f\left(x\right) = 0$ for $x \leq -\beta$, $f\left(x\right) = \frac{\left(x+\beta\right)^{2}}{4\beta}$ for $\left|x\right| \leq \beta$, and $f\left(x\right) = x$ for $x \geq \beta$, where the hyperparameter $\beta$ controls the width of the transition region. The smooth transition was proposed to improve the reproducibility of large-scale models relative to ReLU.
Given the following machine learning model name: Selective Search, provide a description of the model
**Selective Search** is a region proposal algorithm for object detection tasks. It starts by over-segmenting the image based on intensity of the pixels using a graph-based segmentation method by Felzenszwalb and Huttenlocher. Selective Search then takes these oversegments as initial input and performs the following ste...
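The greedy hierarchical grouping loop at the core of Selective Search can be sketched as follows (a minimal illustration; `similarity` stands in for the method's combined colour/texture/size/fill similarity, and regions are represented abstractly rather than as pixel segments):

```python
def hierarchical_grouping(regions, similarity):
    """Repeatedly merge the most similar pair of regions.

    `regions` is a list of initial (over-segmented) regions and
    `similarity(a, b)` is an assumed user-supplied scoring function.
    Every region produced along the way is collected, since all of
    them become object-location proposals.
    """
    proposals = list(regions)
    while len(regions) > 1:
        # find the most similar pair of remaining regions
        a, b = max(
            ((a, b) for i, a in enumerate(regions) for b in regions[i + 1:]),
            key=lambda p: similarity(*p),
        )
        merged = (a, b)  # represent the merge as a nested tuple
        regions = [r for r in regions if r not in (a, b)] + [merged]
        proposals.append(merged)
    return proposals
```

The real algorithm restricts merging to neighbouring regions and updates similarities incrementally, but the overall bottom-up grouping follows this pattern.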