Given the following machine learning model name: Span-Based Dynamic Convolution, provide a description of the model
**Span-Based Dynamic Convolution** is a type of convolution used in the [ConvBERT](https://paperswithcode.com/method/convbert) architecture to capture local dependencies between tokens. Kernels are generated by taking in a local span of current token, which better utilizes local dependency and discriminates different ...
Given the following machine learning model name: Entropy Regularization, provide a description of the model
**Entropy Regularization** is a type of regularization used in [reinforcement learning](https://paperswithcode.com/methods/area/reinforcement-learning). For on-policy policy gradient based methods like [A3C](https://paperswithcode.com/method/a3c), the same mutual reinforcement behaviour leads to a highly-peaked $\pi\l...
Given the following machine learning model name: Proxy Anchor Loss for Deep Metric Learning, provide a description of the model
Given the following machine learning model name: Learnable Extended Activation Function, provide a description of the model
Given the following machine learning model name: RegNetX, provide a description of the model
**RegNetX** is a convolutional network design space with simple, regular models with parameters: depth $d$, initial width $w\_{0} > 0$, and slope $w\_{a} > 0$, and generates a different block width $u\_{j}$ for each block $j < d$. The key restriction for the RegNet types of model is that there is a linear parameterisat...
Given the following machine learning model name: Improved Gravitational Search algorithm, provide a description of the model
Metaheuristic algorithm
Given the following machine learning model name: Compressed Memory, provide a description of the model
**Compressed Memory** is a secondary FIFO memory component proposed as part of the [Compressive Transformer](https://paperswithcode.com/method/compressive-transformer) model. The Compressive [Transformer](https://paperswithcode.com/method/transformer) keeps a fine-grained memory of past activations, which are then comp...
Given the following machine learning model name: Large-scale Information Network Embedding, provide a description of the model
LINE is a novel network embedding method which is suitable for arbitrary types of information networks: undirected, directed, and/or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. Source: [Tang et al.](https://arxiv.org/pdf/1503.035...
Given the following machine learning model name: Graph Convolutional Network, provide a description of the model
A **Graph Convolutional Network**, or **GCN**, is an approach for semi-supervised learning on graph-structured data. It is based on an efficient variant of [convolutional neural networks](https://paperswithcode.com/methods/category/convolutional-neural-networks) which operate directly on graphs. The choice of convoluti...
Given the following machine learning model name: MobileNetV3, provide a description of the model
**MobileNetV3** is a convolutional neural network that is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the [NetAdapt](https://paperswithcode.com/method/netadapt) algorithm, and then subsequently improved through novel architecture advances. Advance...
Given the following machine learning model name: QuantTree histograms, provide a description of the model
Given a training set drawn from an unknown $d$-variate probability distribution, QuantTree constructs a histogram by recursively splitting $\mathbb{R}^d$. The splits are defined by a stochastic process so that each bin contains a certain proportion of the training set. These histograms can be used to define test statis...
Given the following machine learning model name: Spatial Transformer, provide a description of the model
A **Spatial Transformer** is an image model block that explicitly allows the spatial manipulation of data within a [convolutional neural network](https://paperswithcode.com/methods/category/convolutional-neural-networks). It gives CNNs the ability to actively spatially transform feature maps, conditional on the feature...
Given the following machine learning model name: Mask R-CNN, provide a description of the model
**Mask R-CNN** extends [Faster R-CNN](http://paperswithcode.com/method/faster-r-cnn) to solve instance segmentation tasks. It achieves this by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. In principle, Mask R-CNN is an intuitive extension of Faster [R-...
Given the following machine learning model name: Weight Decay, provide a description of the model
**Weight Decay**, or **$L\_{2}$ Regularization**, is a regularization technique applied to the weights of a neural network. We minimize a loss function comprising both the primary loss function and a penalty on the $L\_{2}$ Norm of the weights: $$L\_{new}\left(w\right) = L\_{original}\left(w\right) + \lambda{w^{T}w...
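The penalty term can be made concrete with a minimal sketch (not from the source; the learning rate and $\lambda$ values are illustrative): the gradient of $\lambda w^{T}w$ is $2\lambda w$, so each step shrinks, or "decays", the weights toward zero.

```python
import numpy as np

def l2_regularized_step(w, grad_original, lr=0.1, lam=0.01):
    """One gradient step on L_new(w) = L_original(w) + lam * w^T w.

    The gradient of the penalty term lam * w^T w is 2 * lam * w,
    which shrinks ("decays") the weights toward zero each step.
    """
    grad = grad_original + 2.0 * lam * w
    return w - lr * grad

# With a zero primary gradient, only the decay term acts:
# each weight is multiplied by (1 - 2 * lr * lam) = 0.998.
w = np.array([1.0, -2.0])
w_next = l2_regularized_step(w, grad_original=np.zeros(2))
```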
Given the following machine learning model name: Deeper Atrous Spatial Pyramid Pooling, provide a description of the model
DASPP is a deeper version of the [ASPP](https://paperswithcode.com/method/aspp) module (the latter from [DeepLabv3](https://paperswithcode.com/method/deeplabv3)) that adds standard 3 × 3 [convolution](https://paperswithcode.com/method/convolution) after 3 × 3 dilated convolutions to refine the features and also fusing ...
Given the following machine learning model name: Hopfield Layer, provide a description of the model
A **Hopfield Layer** is a module that enables a network to associate two sets of vectors. This general functionality allows for [transformer](https://paperswithcode.com/method/transformer)-like self-attention, for decoder-encoder attention, for time series prediction (maybe with positional encoding), for sequence analy...
Given the following machine learning model name: Anti-Alias Downsampling, provide a description of the model
**Anti-Alias Downsampling (AA)** aims to improve the shift-equivariance of deep networks. Max-pooling is inherently composed of two operations: the first is to densely evaluate the max operator, and the second is naive subsampling. AA is proposed as a low-pass filter between them to achieve practical ant...
Given the following machine learning model name: GBlock, provide a description of the model
**GBlock** is a type of [residual block](https://paperswithcode.com/method/residual-block) used in the [GAN-TTS](https://paperswithcode.com/method/gan-tts) text-to-speech architecture - it is a stack of two residual blocks. As the generator is producing raw audio (e.g. a 2s training clip corresponds to a sequence of 4...
Given the following machine learning model name: Distributed Any-Batch Mirror Descent, provide a description of the model
**Distributed Any-Batch Mirror Descent** (DABMD) is based on distributed Mirror Descent but uses a fixed per-round computing time to limit the waiting by fast nodes to receive information updates from slow nodes. DABMD is characterized by varying minibatch sizes across nodes. It is applicable to a broader range of prob...
Given the following machine learning model name: IoU-guided NMS, provide a description of the model
**IoU-guided NMS** is a type of non-maximum suppression that helps to eliminate suppression failures caused by misleading classification confidences. This is achieved by using the predicted IoU instead of the classification confidence as the ranking keyword for bounding boxes.
Given the following machine learning model name: Scale-wise Feature Aggregation Module, provide a description of the model
**SFAM**, or **Scale-wise Feature Aggregation Module**, is a feature extraction block from the [M2Det](https://paperswithcode.com/method/m2det) architecture. It aims to aggregate the multi-level multi-scale features generated by [Thinned U-Shaped Modules](https://paperswithcode.com/method/tum) into a multi-level featur...
Given the following machine learning model name: Genetic Algorithms, provide a description of the model
Genetic Algorithms are search algorithms that mimic Darwinian biological evolution in order to select and propagate better solutions.
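The selection/propagation loop described above can be sketched minimally (an illustrative toy, not any particular published variant; the operators chosen here are truncation selection, one-point crossover, and bit-flip mutation, and the "OneMax" fitness is a standard demo problem):

```python
import random

random.seed(0)

def genetic_maximize(fitness, n_bits=10, pop_size=20, generations=50,
                     p_mut=0.05):
    """A minimal genetic algorithm over bit-strings: keep the fitter half
    of the population as parents, recombine them, and randomly mutate."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)          # one-point crossover
            child = [bit ^ (random.random() < p_mut)   # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = genetic_maximize(sum)  # "OneMax": maximize the number of 1-bits
```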
Given the following machine learning model name: Segmentation of patchy areas in biomedical images based on local edge density estimation, provide a description of the model
An effective approach to the quantification of patchiness in biomedical images according to their local edge densities.
Given the following machine learning model name: ClassSR, provide a description of the model
**ClassSR** is a framework to accelerate super-resolution (SR) networks on large images (2K-8K). ClassSR combines classification and SR in a unified framework. In particular, it first uses a Class-Module to classify the sub-images into different classes according to restoration difficulties, then applies an SR-Module t...
Given the following machine learning model name: Q-Learning, provide a description of the model
**Q-Learning** is an off-policy temporal difference control algorithm: $$Q\left(S\_{t}, A\_{t}\right) \leftarrow Q\left(S\_{t}, A\_{t}\right) + \alpha\left[R\_{t+1} + \gamma\max\_{a}Q\left(S\_{t+1}, a\right) - Q\left(S\_{t}, A\_{t}\right)\right] $$ The learned action-value function $Q$ directly approximates $q\_{*...
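The update rule above translates directly into a tabular sketch (the tiny 2-state, 2-action table and the $\alpha$, $\gamma$ values are illustrative assumptions):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

Q = np.zeros((2, 2))  # 2 states, 2 actions
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)
```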
Given the following machine learning model name: Dynamic Convolution, provide a description of the model
The extremely low computational cost of lightweight CNNs constrains the depth and width of the networks, further decreasing their representational power. To address the above problem, Chen et al. proposed dynamic convolution, a novel operator design that increases representational power with negligible additional comp...
Given the following machine learning model name: GPipe, provide a description of the model
**GPipe** is a distributed model parallel method for neural networks. With GPipe, each model can be specified as a sequence of layers, and consecutive groups of layers can be partitioned into cells. Each cell is then placed on a separate accelerator. Based on this partitioned setup, batch splitting is applied. A mini-b...
Given the following machine learning model name: Ape-X, provide a description of the model
**Ape-X** is a distributed architecture for deep reinforcement learning. The algorithm decouples acting from learning: the actors interact with their own instances of the environment by selecting actions according to a shared neural network, and accumulate the resulting experience in a shared [experience replay](https:...
Given the following machine learning model name: Switch FFN, provide a description of the model
A **Switch FFN** is a sparse layer that operates independently on tokens within an input sequence. It is shown in the blue block in the figure. We diagram two tokens ($x\_{1}$ = “More” and $x\_{2}$ = “Parameters” below) being routed (solid lines) across four FFN experts, where the router independently routes each token...
Given the following machine learning model name: Negative Face Recognition, provide a description of the model
**Negative Face Recognition**, or **NFR**, is a face recognition approach that enhances the soft-biometric privacy on the template-level by representing face templates in a complementary (negative) domain. While ordinary templates characterize facial properties of an individual, negative templates describe facial prope...
Given the following machine learning model name: High-resolution Deep Convolutional Generative Adversarial Networks, provide a description of the model
**HDCGAN**, or **High-resolution Deep Convolutional Generative Adversarial Networks**, is a [DCGAN](https://paperswithcode.com/method/dcgan) based architecture that achieves high-resolution image generation through the proper use of [SELU](https://paperswithcode.com/method/selu) activations. Glasses, a mechanism to arb...
Given the following machine learning model name: Scaled Exponential Linear Unit, provide a description of the model
**Scaled Exponential Linear Units**, or **SELUs**, are activation functions that induce self-normalizing properties. The SELU activation function is given by $$f\left(x\right) = \lambda{x} \text{ if } x \geq{0}$$ $$f\left(x\right) = \lambda{\alpha\left(\exp\left(x\right) -1 \right)} \text{ if } x < 0 $$ with...
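The two-branch definition above can be written as a short NumPy sketch; the $\lambda$ and $\alpha$ constants are the canonical self-normalizing values from the SELU paper:

```python
import numpy as np

# Canonical constants from the self-normalizing networks paper
LAMBDA = 1.0507009873554805
ALPHA = 1.6732632423543772

def selu(x):
    """SELU: lambda * x for x >= 0, lambda * alpha * (exp(x) - 1) for x < 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, LAMBDA * x, LAMBDA * ALPHA * (np.exp(x) - 1))
```

For large negative inputs the function saturates at $-\lambda\alpha \approx -1.758$, which is what drives the self-normalizing behaviour.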
Given the following machine learning model name: MobileNetV2, provide a description of the model
**MobileNetV2** is a convolutional neural network architecture that seeks to perform well on mobile devices. It is based on an inverted residual structure where the residual connections are between the bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a s...
Given the following machine learning model name: U-Net, provide a description of the model
**U-Net** is an architecture for semantic segmentation. It consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear ...
Given the following machine learning model name: Sigmoid Activation, provide a description of the model
**Sigmoid Activations** are a type of activation function for neural networks: $$f\left(x\right) = \frac{1}{\left(1+\exp\left(-x\right)\right)}$$ Some drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient ...
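A minimal sketch of the formula above, together with its derivative, makes the vanishing-gradient drawback concrete: the gradient peaks at only 0.25 and decays toward zero on both sides.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid f(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float)))

def sigmoid_grad(x):
    """Derivative f'(x) = f(x) * (1 - f(x)); its maximum is 0.25 at x = 0,
    which is one source of the damped gradients noted above."""
    s = sigmoid(x)
    return s * (1.0 - s)
```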
Given the following machine learning model name: ResNeSt, provide a description of the model
A **ResNeSt** is a variant on a [ResNet](https://paperswithcode.com/method/resnet) that instead stacks Split-Attention blocks. The cardinal group representations are then concatenated along the channel dimension: $V = \text{Concat}\left\{V^{1}, V^{2}, \cdots, V^{K}\right\}$. As in standard residual blocks, the final output $Y$ of...
Given the following machine learning model name: Hierarchical-Split Block, provide a description of the model
**Hierarchical-Split Block** is a representational block for multi-scale feature representations. It contains many hierarchical split and concatenate connections within one single [residual block](https://paperswithcode.com/methods/category/skip-connection-blocks). Specifically, ordinary feature maps in deep neural...
Given the following machine learning model name: MaxUp, provide a description of the model
**MaxUp** is an adversarial data augmentation technique for improving the generalization performance of machine learning models. The idea is to generate a set of augmented data with some random perturbations or transforms, and minimize the maximum, or worst case loss over the augmented data. By doing so, we implicitly...
Given the following machine learning model name: AltCLIP, provide a description of the model
In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both language...
Given the following machine learning model name: Linear Discriminant Analysis, provide a description of the model
**Linear discriminant analysis** (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more c...
Given the following machine learning model name: DeepWalk, provide a description of the model
**DeepWalk** learns embeddings (social representations) of a graph's vertices, by modeling a stream of short random walks. Social representations are latent features of the vertices that capture neighborhood similarity and community membership. These latent representations encode social relations in a continuous vector...
Given the following machine learning model name: Channel-wise Cross Fusion Transformer, provide a description of the model
**Channel-wise Cross Fusion Transformer** is a module used in the [UCTransNet](https://paperswithcode.com/method/uctransnet) architecture for semantic segmentation. It fuses the multi-scale encoder features with the advantage of the long dependency modeling in the [Transformer](https://paperswithcode.com/method/transfo...
Given the following machine learning model name: Hierarchical Transferability Calibration Network, provide a description of the model
**Hierarchical Transferability Calibration Network** (HTCN) is an adaptive object detector that hierarchically (local-region/image/instance) calibrates the transferability of feature representations for harmonizing transferability and discriminability. The proposed model consists of three components: (1) Importance Wei...
Given the following machine learning model name: 3 Dimensional Soft Attention, provide a description of the model
Given the following machine learning model name: MobileBERT, provide a description of the model
**MobileBERT** is a type of inverted-bottleneck [BERT](https://paperswithcode.com/method/bert) that compresses and accelerates the popular BERT model. MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks....
Given the following machine learning model name: Ape-X DPG, provide a description of the model
**Ape-X DPG** combines [DDPG](https://paperswithcode.com/method/ddpg) with distributed [prioritized experience replay](https://paperswithcode.com/method/prioritized-experience-replay) through the [Ape-X](https://paperswithcode.com/method/ape-x) architecture.
Given the following machine learning model name: Attention Dropout, provide a description of the model
**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop eleme...
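The idea of dropping elements of the softmax output can be sketched for scaled dot-product attention (a minimal NumPy illustration, not a framework implementation; the dropout rate and inverted-dropout rescaling are standard assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_with_dropout(Q, K, V, p=0.1, train=True):
    """Scaled dot-product attention with dropout applied to the
    softmax attention weights (not to the values)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    if train and p > 0:
        mask = rng.random(weights.shape) >= p
        weights = weights * mask / (1.0 - p)       # inverted dropout
    return weights @ V

out = attention_with_dropout(np.eye(2), np.eye(2), np.eye(2), train=False)
```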
Given the following machine learning model name: PyramidNet, provide a description of the model
A **PyramidNet** is a type of convolutional network where the key idea is to concentrate on the feature map dimension by increasing it gradually instead of by increasing it sharply at each residual unit with downsampling. In addition, the network architecture works as a mixture of both plain and residual networks by us...
Given the following machine learning model name: Cosine Power Annealing, provide a description of the model
Interpolation between [exponential decay](https://paperswithcode.com/method/exponential-decay) and [cosine annealing](https://paperswithcode.com/method/cosine-annealing).
Given the following machine learning model name: Self-Supervised Cross View Cross Subject Pose Contrastive Learning, provide a description of the model
Given the following machine learning model name: Curvature Regularized Variational Auto-Encoder, provide a description of the model
Given the following machine learning model name: Off-Diagonal Orthogonal Regularization, provide a description of the model
**Off-Diagonal Orthogonal Regularization** is a modified form of [orthogonal regularization](https://paperswithcode.com/method/orthogonal-regularization) originally used in [BigGAN](https://paperswithcode.com/method/biggan). The original orthogonal regularization is known to be limiting so the authors explore several v...
Given the following machine learning model name: Style Transfer Module, provide a description of the model
Modules used in [GAN](https://paperswithcode.com/method/gan)-based style transfer.
Given the following machine learning model name: LARS, provide a description of the model
**Layer-wise Adaptive Rate Scaling**, or **LARS**, is a large batch optimization technique. There are two notable differences between LARS and other adaptive algorithms such as [Adam](https://paperswithcode.com/method/adam) or [RMSProp](https://paperswithcode.com/method/rmsprop): first, LARS uses a separate learning r...
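The per-layer scaling can be sketched as follows (a minimal single-layer illustration under stated assumptions: momentum is omitted, and the trust coefficient and weight-decay values are arbitrary examples). The local learning rate is scaled by the trust ratio $\|w\| / (\|\nabla w\| + \beta\|w\|)$, so layers whose gradients are large relative to their weights take smaller steps.

```python
import numpy as np

def lars_update(w, grad, base_lr=0.1, trust_coef=0.001, weight_decay=1e-4):
    """One LARS step for a single layer (sketch, no momentum)."""
    g = grad + weight_decay * w
    w_norm = np.linalg.norm(w)
    g_norm = np.linalg.norm(g)
    # Trust ratio: step size shrinks when gradients dwarf the weights.
    trust_ratio = w_norm / g_norm if w_norm > 0 and g_norm > 0 else 1.0
    local_lr = base_lr * trust_coef * trust_ratio
    return w - local_lr * g

w_new = lars_update(np.array([1.0, 0.0]), np.array([1.0, 0.0]),
                    weight_decay=0.0)
```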
Given the following machine learning model name: DAFNe, provide a description of the model
**DAFNe** is a dense one-stage anchor-free deep model for oriented object detection. It is a deep neural network that performs predictions on a dense grid over the input image, being architecturally simpler in design, as well as easier to optimize than its two-stage counterparts. Furthermore, it reduces the prediction ...
Given the following machine learning model name: MelGAN Residual Block, provide a description of the model
The **MelGAN Residual Block** is a convolutional [residual block](https://paperswithcode.com/method/residual-block) used in the [MelGAN](https://paperswithcode.com/method/melgan) generative audio architecture. It employs residual connections with dilated convolutions. Dilations are used so that temporally far output ac...
Given the following machine learning model name: Instance Colouring Stick-Breaking Process, provide a description of the model
Given the following machine learning model name: BART, provide a description of the model
**BART** is a [denoising autoencoder](https://paperswithcode.com/method/denoising-autoencoder) for pretraining sequence-to-sequence models. It is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard [Transformer](https://papersw...
Given the following machine learning model name: Informative Sample Mining Network, provide a description of the model
**Informative Sample Mining Network** is a multi-stage sample training scheme for GANs to reduce sample hardness while preserving sample informativeness. Adversarial Importance Weighting is proposed to select informative samples and assign them greater weight. The authors also propose Multi-hop Sample Training to avoid...
Given the following machine learning model name: YellowFin, provide a description of the model
**YellowFin** is a learning rate and momentum tuner motivated by robustness properties and analysis of quadratic objectives. It stems from a known but obscure fact: the momentum operator's spectral radius is constant in a large subset of the hyperparameter space. For quadratic objectives, the optimizer tunes both the l...
Given the following machine learning model name: AutoML-Zero, provide a description of the model
**AutoML-Zero** is an AutoML technique that aims to search a fine-grained space simultaneously for the model, optimization procedure, initialization, and so on, permitting much less human-design and even allowing the discovery of non-neural network algorithms. It represents ML algorithms as computer programs comprised ...
Given the following machine learning model name: weighted finite state transducer, provide a description of the model
Given the following machine learning model name: MnasNet, provide a description of the model
**MnasNet** is a type of convolutional neural network optimized for mobile devices that is discovered through mobile [neural architecture search](https://paperswithcode.com/method/neural-architecture-search), which explicitly incorporates model latency into the main objective so that the search can identify a model tha...
Given the following machine learning model name: Knowledge Distillation, provide a description of the model
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to...
Given the following machine learning model name: LayoutReader, provide a description of the model
**LayoutReader** is a sequence-to-sequence model for reading order detection that uses both textual and layout information, where the layout-aware language model [LayoutLM](https://paperswithcode.com/method/layoutlmv2) is leveraged as an encoder. The generation step in the encoder-decoder structure is modified to gen...
Given the following machine learning model name: Residual Shuffle-Exchange Network, provide a description of the model
**Residual Shuffle-Exchange Network** is an efficient alternative to models using an attention mechanism that allows the modelling of long-range dependencies in sequences in O(n log n) time. This model achieved state-of-the-art performance on the MusicNet dataset for music transcription while being able to run inferenc...
Given the following machine learning model name: Meta Reward Learning, provide a description of the model
**Meta Reward Learning (MeRL)** is a meta-learning method for the problem of learning from sparse and underspecified rewards. For example, an agent receives a complex input, such as a natural language instruction, and needs to generate a complex response, such as an action sequence, while only receiving binary success-...
Given the following machine learning model name: Simple Neural Attention Meta-Learner, provide a description of the model
The **Simple Neural Attention Meta-Learner**, or **SNAIL**, combines the benefits of temporal convolutions and attention to solve meta-learning tasks. They introduce positional dependence through temporal convolutions to make the model applicable to reinforcement tasks - where the observations, actions, and rewards are...
Given the following machine learning model name: PANet, provide a description of the model
**Path Aggregation Network**, or **PANet**, aims to boost information flow in a proposal-based instance segmentation framework. Specifically, the feature hierarchy is enhanced with accurate localization signals in lower layers by [bottom-up path augmentation](https://paperswithcode.com/method/bottom-up-path-augmentatio...
Given the following machine learning model name: Dynamic R-CNN, provide a description of the model
**Dynamic R-CNN** is an object detection method that adjusts the label assignment criteria (IoU threshold) and the shape of regression loss function (parameters of Smooth L1 Loss) automatically based on the statistics of proposals during training. The motivation is that in previous two-stage object detectors, there is ...
Given the following machine learning model name: Dual Contrastive Learning, provide a description of the model
Contrastive learning has achieved remarkable success in representation learning via self-supervision in unsupervised settings. However, effectively adapting contrastive learning to supervised learning tasks remains a challenge in practice. In this work, we introduce a dual contrastive learning (DualCL) framework tha...
Given the following machine learning model name: Multi-modal Teacher for Masked Modality Learning, provide a description of the model
Given the following machine learning model name: Progressive Neural Architecture Search, provide a description of the model
**Progressive Neural Architecture Search**, or **PNAS**, is a method for learning the structure of convolutional neural networks (CNNs). It uses a sequential model-based optimization (SMBO) strategy, where we search the space of cell structures, starting with simple (shallow) models and progressing to complex ones, pru...
Given the following machine learning model name: Group Decreasing Network, provide a description of the model
**Group Decreasing Network**, or **GroupDNet**, is a type of convolutional neural network for multi-modal image synthesis. GroupDNet contains one encoder and one decoder. Inspired by the idea of [VAE](https://paperswithcode.com/method/vae) and SPADE, the encoder $E$ produces a latent code $Z$ that is supposed to follo...
Given the following machine learning model name: Synergistic Image and Feature Alignment, provide a description of the model
**Synergistic Image and Feature Alignment** is an unsupervised domain adaptation framework that conducts synergistic alignment of domains from both image and feature perspectives. In SIFA, we simultaneously transform the appearance of images across domains and enhance domain-invariance of the extracted features by leve...
Given the following machine learning model name: Corner Pooling, provide a description of the model
**Corner Pooling** is a pooling technique for object detection that seeks to better localize corners by encoding explicit prior knowledge. Suppose we want to determine if a pixel at location $\left(i, j\right)$ is a top-left corner. Let $f\_{t}$ and $f\_{l}$ be the feature maps that are the inputs to the top-left corne...
Given the following machine learning model name: Aggregated Learning, provide a description of the model
**Aggregated Learning (AgrLearn)** is a vector-quantization approach to learning neural network classifiers. It builds on an equivalence between IB learning and IB quantization and exploits the power of vector quantization, which is well known in information theory.
Given the following machine learning model name: Bottom-up Path Augmentation, provide a description of the model
**Bottom-up Path Augmentation** is a feature extraction technique that seeks to shorten the information path and enhance a feature pyramid with accurate localization signals existing in low-levels. This is based on the fact that high response to edges or instance parts is a strong indicator to accurately localize insta...
Given the following machine learning model name: Spatial Group-wise Enhance, provide a description of the model
**Spatial Group-wise Enhance** is a module for convolutional neural networks that can adjust the importance of each sub-feature by generating an attention factor for each spatial location in each semantic group, so that every individual group can autonomously enhance its learnt expression and suppress possible noise ...
Given the following machine learning model name: Cycle-CenterNet, provide a description of the model
**Cycle-CenterNet** is a table structure parsing approach built on [CenterNet](https://paperswithcode.com/method/centernet) that uses a cycle-pairing module to simultaneously detect and group tabular cells into structured tables. It also utilizes a pairing loss which enables the grouping of discrete cells into the stru...
Given the following machine learning model name: Affine Operator, provide a description of the model
The **Affine Operator** is an affine transformation layer introduced in the [ResMLP](https://paperswithcode.com/method/resmlp) architecture. This replaces [layer normalization](https://paperswithcode.com/method/layer-normalization), as in [Transformer based networks](https://paperswithcode.com/methods/category/transfor...
Given the following machine learning model name: Linear Warmup With Cosine Annealing, provide a description of the model
**Linear Warmup With Cosine Annealing** is a learning rate schedule where we increase the learning rate linearly for $n$ updates and then anneal according to a cosine schedule afterwards.
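A minimal sketch of such a schedule (the step counts and base rate are illustrative; here the rate anneals to zero, though implementations often stop at a small floor):

```python
import math

def warmup_cosine_lr(step, warmup_steps, total_steps, base_lr):
    """Linear warmup for `warmup_steps` updates, then cosine annealing
    of the learning rate down to zero at `total_steps`."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```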
Given the following machine learning model name: CodeSLAM, provide a description of the model
CodeSLAM represents the 3D geometry of a scene using the latent space of a variational autoencoder. The depth thus becomes a function of the RGB image and the unknown code, $D = G_\theta(I,c)$. During training time, the weights of the network $G_\theta$ are learnt by training the generator and encoder using a standard ...
Given the following machine learning model name: FuseFormer, provide a description of the model
**FuseFormer** is a [Transformer](https://paperswithcode.com/method/transformer)-based model designed for video inpainting via fine-grained feature fusion based on novel [Soft Split and Soft Composition](https://paperswithcode.com/method/soft-split-and-soft-composition) operations. The soft split divides feature map in...
Given the following machine learning model name: Ape-X DQN, provide a description of the model
**Ape-X DQN** is a variant of a [DQN](https://paperswithcode.com/method/dqn) with some components of [Rainbow-DQN](https://paperswithcode.com/method/rainbow-dqn) that utilizes distributed [prioritized experience replay](https://paperswithcode.com/method/prioritized-experience-replay) through the [Ape-X](https://papersw...
Given the following machine learning model name: PP-OCR, provide a description of the model
**PP-OCR** is an OCR system that consists of three parts: text detection, detected-box rectification, and text recognition. The purpose of text detection is to locate the text area in the image. In PP-OCR, Differentiable Binarization (DB) is used as the text detector, which is based on a simple segmentation network. It int...
Given the following machine learning model name: CenterPoint, provide a description of the model
**CenterPoint** is a two-stage 3D detector that finds centers of objects using a keypoint detector and regresses their other attributes, including 3D size, 3D orientation and velocity. In a second stage, it refines these estimates using additional point features on the object. CenterPoint uses a stand...
Given the following machine learning model name: Shifted Rectified Linear Unit, provide a description of the model
The **Shifted Rectified Linear Unit**, or **ShiLU**, is a modification of the **[ReLU](https://paperswithcode.com/method/relu)** activation function that has trainable parameters $\alpha$ and $\beta$: $$\text{ShiLU}(x) = \alpha \cdot \text{ReLU}(x) + \beta$$
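The formula above translates directly into code; a scalar sketch (in a real network $\alpha$ and $\beta$ would be trainable parameters, typically per-channel):

```python
def shilu(x, alpha=1.0, beta=0.0):
    """ShiLU(x) = alpha * ReLU(x) + beta.
    alpha scales the positive part; beta shifts the whole output."""
    return alpha * max(x, 0.0) + beta
```

With `alpha=1, beta=0` it reduces exactly to ReLU.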
Given the following machine learning model name: Selective Kernel Convolution, provide a description of the model
A **Selective Kernel Convolution** is a [convolution](https://paperswithcode.com/method/convolution) that enables neurons to adaptively adjust their receptive field (RF) sizes among multiple kernels with different kernel sizes. Specifically, the SK convolution has three operators: Split, Fuse and Select. Multiple branches with differen...
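A rough sketch of the Fuse/Select idea, operating only on per-channel descriptors (spatial dimensions, the convolutions themselves, and the small FC layers that produce the attention logits are all omitted; the function and argument names are assumptions for illustration). Select applies a softmax across branches for each channel and takes the weighted channel-wise sum:

```python
import math

def sk_fuse_select(branches, weight_logits):
    """Minimal Select step of an SK convolution.
    branches: list of channel vectors, one per kernel-size branch.
    weight_logits: per branch, one attention logit per channel
    (in SKNet these come from FC layers on the fused global descriptor).
    Returns the softmax-weighted combination of the branches per channel."""
    n_branches, n_ch = len(branches), len(branches[0])
    out = []
    for c in range(n_ch):
        exps = [math.exp(weight_logits[b][c]) for b in range(n_branches)]
        total = sum(exps)
        out.append(sum((e / total) * branches[b][c] for b, e in enumerate(exps)))
    return out
```

With equal logits every branch contributes equally; a large logit for one branch makes the output collapse onto that branch's kernel, which is how the RF size is "selected".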
Given the following machine learning model name: Tree Ensemble to Rules, provide a description of the model
A method that converts a tree ensemble model into a rule list, making the model more transparent.
Given the following machine learning model name: ResNeXt-Elastic, provide a description of the model
**ResNeXt-Elastic** is a convolutional neural network that is a modification of a [ResNeXt](https://paperswithcode.com/method/resnext) with elastic blocks (extra upsampling and downsampling).
Given the following machine learning model name: Compact Convolutional Transformers, provide a description of the model
**Compact Convolutional Transformers** utilize sequence pooling and replace the patch embedding with a convolutional embedding, allowing for better inductive bias and making positional embeddings optional. CCT achieves better accuracy than ViT-Lite (smaller ViTs) and increases the flexibility of the input parameters.
Given the following machine learning model name: Max Pooling, provide a description of the model
**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not si...
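The operation can be sketched in a few lines of pure Python (a 2×2 kernel with stride 2 is the common default; real implementations also handle padding and batched, multi-channel tensors):

```python
def max_pool2d(fm, k=2, s=2):
    """Max pooling over a 2D feature map given as a list of lists:
    take the maximum of each k x k patch, moving with stride s."""
    h, w = len(fm), len(fm[0])
    out = []
    for i in range(0, h - k + 1, s):
        row = []
        for j in range(0, w - k + 1, s):
            row.append(max(fm[i + di][j + dj]
                           for di in range(k) for dj in range(k)))
        out.append(row)
    return out
```

For example, a 4×4 map pooled with `k=2, s=2` yields a 2×2 map holding the maximum of each quadrant-patch.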
Given the following machine learning model name: Hierarchical BiLSTM Max Pooling, provide a description of the model
**HBMP** (Hierarchical BiLSTM Max Pooling) is a hierarchical structure of [BiLSTM](https://paperswithcode.com/method/bilstm) layers with [max pooling](https://paperswithcode.com/method/max-pooling). This model improved the previous state of the art on SciTail and achieves strong results on SNLI and MultiNLI.
Given the following machine learning model name: Causal inference, provide a description of the model
Causal inference is the process of drawing a conclusion about a causal connection based on the conditions of the occurrence of an effect. The main difference between causal inference and inference of association is that the former analyzes the response of the effect variable when the cause is changed.
Given the following machine learning model name: Conditional Variational Auto Encoder, provide a description of the model
Given the following machine learning model name: Synaptic Neural Network, provide a description of the model
A **Synaptic Neural Network** (SynaNN) consists of synapses and neurons. Inspired by synapse research in neuroscience, it is built on a synapse model with a nonlinear, log-concave synapse function of excitatory and inhibitory channel probabilities.
Given the following machine learning model name: Adaptively Spatial Feature Fusion, provide a description of the model
**ASFF**, or **Adaptively Spatial Feature Fusion**, is a method for pyramidal feature fusion. It learns the way to spatially filter conflictive information to suppress inconsistency across different feature scales, thus improving the scale-invariance of features. ASFF enables the network to directly learn how to sp...
Given the following machine learning model name: DenseNAS-A, provide a description of the model
**DenseNAS-A** is a mobile convolutional neural network discovered through the [DenseNAS](https://paperswithcode.com/method/densenas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method. The basic building block is the MBConv, or inverted bottleneck residual, from the MobileN...
Given the following machine learning model name: CSPPeleeNet, provide a description of the model
**CSPPeleeNet** is a convolutional neural network and object detection backbone where we apply the Cross Stage Partial Network (CSPNet) approach to [PeleeNet](https://paperswithcode.com/method/peleenet). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage h...