prompts | description |
|---|---|
Given the following machine learning model name: N-step Returns, provide a description of the model | **$n$-step Returns** are used for value function estimation in reinforcement learning. Specifically, for $n$ steps we can write the complete return as:
$$ R\_{t}^{(n)} = r\_{t+1} + \gamma{r}\_{t+2} + \cdots + \gamma^{n-1}{r}\_{t+n} + \gamma^{n}V\_{t}\left(s\_{t+n}\right) $$
We can then write an $n$-step backup, in t... |
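The $n$-step return above translates directly into code; a minimal Python sketch (the function name and argument layout are illustrative, not from the source):

```python
def n_step_return(rewards, gamma, value_bootstrap):
    """R_t^(n) = r_{t+1} + gamma*r_{t+2} + ... + gamma^(n-1)*r_{t+n} + gamma^n * V(s_{t+n}).

    rewards: [r_{t+1}, ..., r_{t+n}] collected over n environment steps.
    value_bootstrap: the current value estimate V(s_{t+n}) at the n-th state.
    """
    n = len(rewards)
    # enumerate gives k = 0..n-1, matching the gamma^(k-1) factors for k = 1..n
    discounted = sum(gamma ** k * r for k, r in enumerate(rewards))
    return discounted + gamma ** n * value_bootstrap
```

With `rewards=[1, 1, 1]`, `gamma=0.5` and a bootstrap value of 4, this gives $1 + 0.5 + 0.25 + 0.125 \cdot 4 = 2.25$.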
Given the following machine learning model name: PeleeNet, provide a description of the model | **PeleeNet** is a convolutional neural network and object detection backbone that is a variation of [DenseNet](https://paperswithcode.com/method/densenet) with optimizations to meet a memory and computational budget. Unlike competing networks, it does not use depthwise convolutions and instead relies on regular convol... |
Given the following machine learning model name: GreedyNAS-C, provide a description of the model | **GreedyNAS-C** is a convolutional neural network discovered using the [GreedyNAS](https://paperswithcode.com/method/greedynas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method. The basic building blocks used are inverted residual blocks (from [MobileNetV2](https://paper... |
Given the following machine learning model name: Memory-Associated Differential Learning, provide a description of the model | **Memory-Associated Differential** (**MAD**) Learning was developed to draw inferences from the memorized facts that we already know in order to predict what we want to know.
Image source: [Luo et al.](https://arxiv.org/pdf/2102.05246v1.pdf) |
Given the following machine learning model name: Tunable Network, provide a description of the model | |
Given the following machine learning model name: WaveGrad DBlock, provide a description of the model | **WaveGrad DBlocks** are used to downsample the temporal dimension of noisy waveform in [WaveGrad](https://paperswithcode.com/method/wavegrad). They are similar to UBlocks except that only one [residual block](https://paperswithcode.com/method/residual-block) is included. The dilation factors are 1, 2, 4 in the main br... |
Given the following machine learning model name: SAFRAN - Scalable and fast non-redundant rule application, provide a description of the model | SAFRAN is a rule application framework which aggregates rules through a scalable clustering algorithm. |
Given the following machine learning model name: Self-adaptive Training, provide a description of the model | **Self-adaptive Training** is a training algorithm that dynamically corrects problematic training labels by model predictions to improve generalization of deep learning for potentially corrupted training data. Accumulated predictions are used to augment the training dynamics. The use of an exponential-moving-average sc... |
Given the following machine learning model name: Harmonic Block, provide a description of the model | A **Harmonic Block** is an image model component that utilizes [Discrete Cosine Transform](https://paperswithcode.com/method/discrete-cosine-transform) (DCT) filters. Convolutional neural networks (CNNs) learn filters in order to capture local correlation patterns in feature space. In contrast, DCT has preset spectral ... |
Given the following machine learning model name: Sharpness-Aware Minimization, provide a description of the model | **Sharpness-Aware Minimization**, or **SAM**, is a procedure that improves model generalization by simultaneously minimizing loss value and loss sharpness. SAM functions by seeking parameters that lie in neighborhoods having uniformly low loss value (rather than parameters that only themselves have low loss value). |
Given the following machine learning model name: Sinusoidal Representation Network, provide a description of the model | **Siren**, or **Sinusoidal Representation Network**, is an implicit neural representation architecture built on periodic activation functions. Specifically, it uses the sine as the activation in each layer:
$$ \Phi\left(x\right) = \textbf{W}\_{n}\left(\phi\_{n-1} \circ \phi\_{n-2} \circ \dots \circ \phi\_{0}\right)\left(x\right) + \textbf{b}\_{n}, \quad \phi\_{i}\left(x\_{i}\right) = \sin\left(\textbf{W}\_{i}x\_{i} + \textbf{b}\_{i}\right) $$ |
Given the following machine learning model name: CharacterBERT, provide a description of the model | CharacterBERT is a variant of [BERT](https://paperswithcode.com/method/bert) that **drops the wordpiece system** and **replaces it with a CharacterCNN module** just like the one [ELMo](https://paperswithcode.com/method/elmo) uses to produce its first layer representation. This allows CharacterBERT to represent any inpu... |
Given the following machine learning model name: Fast-YOLOv3, provide a description of the model | |
Given the following machine learning model name: Self-Training with Task Augmentation, provide a description of the model | **STraTA**, or **Self-Training with Task Augmentation**, is a self-training approach that builds on two key ideas for effective leverage of unlabeled data. First, STraTA uses task augmentation, a technique that synthesizes a large amount of data for auxiliary-task fine-tuning from target-task unlabeled texts. Second, ... |
Given the following machine learning model name: Involution, provide a description of the model | **Involution** is an atomic operation for deep neural networks that inverts the design principles of convolution. Involution kernels are distinct in the spatial extent but shared across channels. If involution kernels are parameterized as fixed-sized matrices like convolution kernels and updated using the back-propagat... |
Given the following machine learning model name: Lbl2Vec, provide a description of the model | |
Given the following machine learning model name: Scale Aggregation Block, provide a description of the model | A **Scale Aggregation Block** concatenates feature maps at a wide range of scales. Feature maps for each scale are generated by a stack of downsampling, [convolution](https://paperswithcode.com/method/convolution) and upsampling operations. The proposed scale aggregation block is a standard computational module which r... |
Given the following machine learning model name: Implicit Subspace Prior Learning, provide a description of the model | **Implicit Subspace Prior Learning**, or **ISPL**, is a framework to approach dual-blind face restoration, with two major distinctions from previous restoration methods: 1) Instead of assuming an explicit degradation function between LQ and HQ domain, it establishes an implicit correspondence between both domains via a... |
Given the following machine learning model name: Point-GNN, provide a description of the model | **Point-GNN** is a graph neural network for detecting objects from a LiDAR point cloud. It predicts the category and shape of the object that each vertex in the graph belongs to. In Point-GNN, there is an auto-registration mechanism to reduce translation variance, as well as a box merging and scoring operation to combi... |
Given the following machine learning model name: SERLU, provide a description of the model | **SERLU**, or **Scaled Exponentially-Regularized Linear Unit**, is a type of activation function. The new function introduces a bump-shaped function in the region of negative input. The bump-shaped function has approximately zero response to large negative input while being able to push the output of SERLU towards zero... |
Given the following machine learning model name: ClariNet, provide a description of the model | **ClariNet** is an end-to-end text-to-speech architecture. Unlike previous TTS systems which use text-to-spectrogram models with a separate waveform [synthesizer](https://paperswithcode.com/method/synthesizer) (vocoder), ClariNet is a text-to-wave architecture that is fully convolutional and can be trained from scratch.... |
Given the following machine learning model name: Step Decay, provide a description of the model | **Step Decay** is a learning rate schedule that drops the learning rate by a factor every few epochs, where the number of epochs is a hyperparameter.
Image Credit: [Suki Lau](https://towardsdatascience.com/learning-rate-schedules-and-adaptive-learning-rate-methods-for-deep-learning-2c8f433990d1) |
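The step decay schedule described above fits in one line; a hedged sketch (the parameter names are illustrative):

```python
def step_decay(lr0, drop_factor, epochs_per_drop, epoch):
    """Step decay: multiply the initial rate by drop_factor every epochs_per_drop epochs."""
    return lr0 * drop_factor ** (epoch // epochs_per_drop)
```

For example, with `lr0=0.1`, a drop factor of 0.5, and a drop every 10 epochs, the rate is 0.1 for epochs 0-9, 0.05 for epochs 10-19, 0.025 for epochs 20-29, and so on.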
Given the following machine learning model name: Panoptic FPN, provide a description of the model | A **Panoptic FPN** is an extension of an [FPN](https://paperswithcode.com/method/fpn) that can generate both instance and semantic segmentations via FPN. The approach starts with an FPN backbone and adds a branch for performing semantic segmentation in parallel with the existing region-based branch for instance segment... |
Given the following machine learning model name: Deep Deterministic Policy Gradient, provide a description of the model | **DDPG**, or **Deep Deterministic Policy Gradient**, is an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. It combines the actor-critic approach with insights from [DQNs](https://paperswithcode.com/method/dqn): in particular, the insights tha... |
Given the following machine learning model name: Hourglass Module, provide a description of the model | An **Hourglass Module** is an image block module used mainly for pose estimation tasks. The design of the hourglass is motivated by the need to capture information at every scale. While local evidence is essential for identifying features like faces and hands, a final pose estimate requires a coherent understanding of ... |
Given the following machine learning model name: Latent Optimisation, provide a description of the model | **Latent Optimisation** is a technique used for generative adversarial networks to refine the sample quality of $z$. Specifically, it exploits knowledge from the discriminator $D$ to refine the latent source $z$. Intuitively, the gradient $\nabla\_{z}f\left(z\right) = \partial{f}\left(z\right)/\partial{z}$ points in the dir... |
Given the following machine learning model name: RESCAL, provide a description of the model | RESCAL |
Given the following machine learning model name: Florence, provide a description of the model | Florence is a computer vision foundation model aiming to learn universal visual-language representations that can be adapted to various computer vision tasks such as visual question answering, image captioning, and video retrieval, among other tasks. Florence's workflow consists of data curation, unified learning, Transformer archite... |
Given the following machine learning model name: CuBERT, provide a description of the model | **CuBERT**, or **Code Understanding BERT**, is a [BERT](https://paperswithcode.com/method/bert) based model for code understanding. In order to achieve this, the authors curate a massive corpus of Python programs collected from GitHub. GitHub projects are known to contain a large amount of duplicate code. To avoid bias... |
Given the following machine learning model name: Differentiable Architecture Search, provide a description of the model | **Differentiable Architecture Search** (**DARTS**) is a method for efficient architecture search. The search space is made continuous so that the architecture can be optimized with respect to its validation set performance through gradient descent. |
Given the following machine learning model name: MARLIN, provide a description of the model | |
Given the following machine learning model name: Fraternal Dropout, provide a description of the model | **Fraternal Dropout** is a regularization method for recurrent neural networks that trains two identical copies of an RNN (that share parameters) with different [dropout](https://paperswithcode.com/method/dropout) masks while minimizing the difference between their (pre-[softmax](https://paperswithcode.com/method/softm... |
Given the following machine learning model name: Stochastic Gradient Descent, provide a description of the model | **Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is, for weights $w$ and a loss function $L$, we have:
$$ w\_{t+1} = w\_{t} - \eta\hat{\nabla}\_{w}{L(w\_{t})} $$
W... |
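The update above (minibatch sampling followed by $w\_{t+1} = w\_{t} - \eta\hat{\nabla}\_{w}L(w\_{t})$) can be sketched in a few lines of Python; all names here are illustrative:

```python
import random

def sgd_step(w, grad_fn, data, batch_size, lr):
    """One SGD update: average grad_fn over a random minibatch, then step downhill.

    w: list of weights; grad_fn(w, x): per-example gradient as a list; lr: eta.
    """
    batch = random.sample(data, batch_size)          # minibatch without replacement
    g = [0.0] * len(w)
    for x in batch:
        gx = grad_fn(w, x)                           # per-example gradient
        g = [gi + gxi / batch_size for gi, gxi in zip(g, gx)]
    return [wi - lr * gi for wi, gi in zip(w, g)]    # w_{t+1} = w_t - eta * g_hat
```

For a scalar loss $L(w) = (w - x)^2$ the per-example gradient is $2(w - x)$, and repeated steps pull $w$ toward the data.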
Given the following machine learning model name: Segregated Attention Network, provide a description of the model | |
Given the following machine learning model name: Switchable Atrous Convolution, provide a description of the model | **Switchable Atrous Convolution (SAC)** softly switches the convolutional computation between different atrous rates and gathers the results using switch functions. The switch functions are spatially dependent, i.e., each location of the feature map might have different switches to control the outputs of SAC. To use SA... |
Given the following machine learning model name: Pointer Sentinel-LSTM, provide a description of the model | The **Pointer Sentinel-LSTM mixture model** is a type of recurrent neural network that combines the advantages of standard [softmax](https://paperswithcode.com/method/softmax) classifiers with those of a pointer component for effective and efficient language modeling. Rather than relying on the RNN hidden state to deci... |
Given the following machine learning model name: MADDPG, provide a description of the model | **MADDPG**, or **Multi-agent DDPG**, extends [DDPG](https://paperswithcode.com/method/ddpg) into a multi-agent policy gradient algorithm where decentralized agents learn a centralized critic based on the observations and actions of all agents. It leads to learned policies that only use local information (i.e. their own... |
Given the following machine learning model name: RESCAL with Relation Prediction, provide a description of the model | RESCAL model trained with a relation prediction objective on top of the 1vsAll loss |
Given the following machine learning model name: MuZero, provide a description of the model | **MuZero** is a model-based reinforcement learning algorithm. It builds upon [AlphaZero](https://paperswithcode.com/method/alphazero)'s search and search-based policy iteration algorithms, but incorporates a learned model into the training procedure.
The main idea of the algorithm is to predict those aspects of the... |
Given the following machine learning model name: Flow Alignment Module, provide a description of the model | **Flow Alignment Module**, or **FAM**, is a flow-based align module for scene parsing to learn Semantic Flow between feature maps of adjacent levels and broadcast high-level features to high resolution features effectively and efficiently. The concept of Semantic Flow is inspired from optical flow, which is widely used... |
Given the following machine learning model name: StarReLU, provide a description of the model | **StarReLU** is an activation function of the form $s \cdot (\mathrm{ReLU}(x))^2 + b$,
where $s \in \mathbb{R}$ and $b \in \mathbb{R}$ are shared for all channels and can be set as constants ($s = 0.8944$, $b = -0.4472$) or learnable parameters. |
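The formula translates directly; a one-liner using the constants from the description:

```python
def star_relu(x, s=0.8944, b=-0.4472):
    """StarReLU: s * ReLU(x)^2 + b, with s and b shared across channels."""
    return s * max(x, 0.0) ** 2 + b
```

Note that negative inputs all map to $b$, since $\mathrm{ReLU}(x) = 0$ there.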
Given the following machine learning model name: GCNII, provide a description of the model | **GCNII** is an extension of a [Graph Convolution Network](https://www.paperswithcode.com/method/gcn) with two new techniques, initial residual and identity mapping, to tackle the problem of oversmoothing -- where stacking more layers and adding non-linearity tends to degrade performance. At each layer, initial residu... |
Given the following machine learning model name: Generalizable Node Injection Attack, provide a description of the model | **Generalizable Node Injection Attack**, or **G-NIA**, is an attack scenario for graph neural networks where the attacker injects malicious nodes rather than modifying original nodes or edges to affect the performance of GNNs. G-NIA also generates the discrete edges by Gumbel-Top-$k$, following OPTI, and captures the coup... |
Given the following machine learning model name: Gated Linear Network, provide a description of the model | A **Gated Linear Network**, or **GLN**, is a type of backpropagation-free neural architecture. What distinguishes GLNs from contemporary neural networks is the distributed and local nature of their credit assignment mechanism; each neuron directly predicts the target, forgoing the ability to learn feature representatio... |
Given the following machine learning model name: PREDATOR, provide a description of the model | **PREDATOR** is a model for pairwise point-cloud registration with deep attention to the overlap region. Its key novelty is an overlap-attention block for early information exchange between the latent encodings of the two point clouds. In this way the subsequent decoding of the latent representations into per-point fea... |
Given the following machine learning model name: Visformer, provide a description of the model | **Visformer**, or **Vision-friendly Transformer**, is an architecture that combines [Transformer](https://paperswithcode.com/methods/category/transformers)-based architectural features with those from [convolutional neural network](https://paperswithcode.com/methods/category/convolutional-neural-networks) architectures... |
Given the following machine learning model name: AdapTive Meta Optimizer, provide a description of the model | This method combines multiple optimization techniques such as [ADAM](https://paperswithcode.com/method/adam) and [SGD](https://paperswithcode.com/method/sgd) or PADAM, and can be applied to any pair of optimizers.
Image credit: [Combining Optimization Methods Using an Adaptive Meta Optimizer](https://www.mdpi... |
Given the following machine learning model name: Residual Normal Distribution, provide a description of the model | **Residual Normal Distributions** are used to help the optimization of VAEs, preventing optimization from entering an unstable region. This can happen due to sharp gradients caused in situations where the encoder and decoder produce distributions far away from each other. The residual distribution parameterizes $q\left... |
Given the following machine learning model name: ZeRO-Offload, provide a description of the model | ZeRO-Offload is a sharded data parallel method for distributed training. It exploits both CPU memory and compute for offloading, while offering a clear path towards efficiently scaling on multiple GPUs by working with [ZeRO-powered data parallelism](https://www.paperswithcode.com/method/zero). The symbiosis allows ZeRO... |
Given the following machine learning model name: Contextual Decomposition Explanation Penalization, provide a description of the model | **Contextual Decomposition Explanation Penalization (CDEP)** is a method which leverages existing explanation techniques for neural networks in order to prevent a model from learning unwanted relationships and ultimately improve predictive accuracy. Given particular importance scores, CDEP works by allowing the user ... |
Given the following machine learning model name: Chinese Pre-trained Unbalanced Transformer, provide a description of the model | **CPT**, or **Chinese Pre-trained Unbalanced Transformer**, is a pre-trained unbalanced [Transformer](https://paperswithcode.com/method/transformer) for Chinese natural language understanding (NLU) and natural language generation (NLG) tasks. CPT consists of three parts: a shared encoder, an understanding decoder, and ... |
Given the following machine learning model name: RealNVP, provide a description of the model | **RealNVP** is a generative model that utilises real-valued non-volume preserving (real NVP) transformations for density estimation. The model can perform efficient and exact inference, sampling and log-density estimation of data points. |
Given the following machine learning model name: Depthwise Convolution, provide a description of the model | **Depthwise Convolution** is a type of convolution where we apply a single convolutional filter for each input channel. In the regular 2D [convolution](https://paperswithcode.com/method/convolution) performed over multiple input channels, the filter is as deep as the input and lets us freely mix channels to generate ea... |
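The per-channel filtering described above can be sketched in pure Python; the shapes and 'valid' padding here are illustrative choices, not from the source:

```python
def depthwise_conv2d(x, filters):
    """Depthwise convolution: one k x k filter per input channel, no channel mixing.

    x: C x H x W nested lists; filters: C x k x k, one filter per channel.
    Returns C x (H-k+1) x (W-k+1) maps ('valid' padding, stride 1).
    """
    C, H, W = len(x), len(x[0]), len(x[0][0])
    k = len(filters[0])
    out = []
    for c in range(C):                 # each channel is convolved with its own filter only
        plane = []
        for i in range(H - k + 1):
            row = []
            for j in range(W - k + 1):
                row.append(sum(x[c][i + u][j + v] * filters[c][u][v]
                               for u in range(k) for v in range(k)))
            plane.append(row)
        out.append(plane)
    return out
```

In frameworks this corresponds to a grouped convolution with one group per channel (e.g. `groups = in_channels` in PyTorch's `Conv2d`).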
Given the following machine learning model name: Continuous Bag-of-Words Word2Vec, provide a description of the model | **Continuous Bag-of-Words Word2Vec** is an architecture for creating word embeddings that uses $n$ future words as well as $n$ past words to create a word embedding. The objective function for CBOW is:
$$ J\_\theta = \frac{1}{T}\sum^{T}\_{t=1}\log{p}\left(w\_{t}\mid{w}\_{t-n},\ldots,w\_{t-1}, w\_{t+1},\ldots,w\_{t+n... |
Given the following machine learning model name: Dual Attention Network, provide a description of the model | In the field of scene segmentation,
encoder-decoder structures cannot make use of the global relationships
between objects, whereas RNN-based structures
heavily rely on the output of the long-term memorization.
To address the above problems,
Fu et al. proposed a novel framework,
the dual attention network (D... |
Given the following machine learning model name: Semi-Pseudo-Label, provide a description of the model | |
Given the following machine learning model name: Concatenated Skip Connection, provide a description of the model | A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate infor... |
Given the following machine learning model name: MeshGraphNet, provide a description of the model | **MeshGraphNet** is a framework for learning mesh-based simulations using [graph neural networks](https://paperswithcode.com/methods/category/graph-models). The model can be trained to pass messages on a mesh graph and to adapt the mesh discretization during forward simulation. The model uses an Encode-Process-Decode a... |
Given the following machine learning model name: Pairwise Constrained KMeans, provide a description of the model | A variant of the popular k-means algorithm that integrates constraint satisfaction into its objective function.
Original paper: Active Semi-Supervision for Pairwise Constrained Clustering, Basu et al., 2004 |
Given the following machine learning model name: Graph Isomorphism Network, provide a description of the model | Per the authors, Graph Isomorphism Network (GIN) generalizes the WL test and hence achieves maximum discriminative power among GNNs. |
Given the following machine learning model name: Low-level backbone, provide a description of the model | |
Given the following machine learning model name: RevNet, provide a description of the model | A **Reversible Residual Network**, or **RevNet**, is a variant of a [ResNet](https://paperswithcode.com/method/resnet) where each layer’s activations can be reconstructed exactly from the next layer’s. Therefore, the activations for most layers need not be stored in memory during backpropagation. The result is a networ... |
Given the following machine learning model name: Softsign Activation, provide a description of the model | **Softsign** is an activation function for neural networks:
$$ f\left(x\right) = \left(\frac{x}{|x|+1}\right)$$
Image Source: [Sefik Ilkin Serengil](https://sefiks.com/2017/11/10/softsign-as-a-neural-networks-activation-function/) |
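The formula above is a one-liner in code; unlike tanh, Softsign approaches its $\pm 1$ asymptotes polynomially rather than exponentially:

```python
def softsign(x):
    """Softsign: x / (|x| + 1), saturating smoothly toward -1 and 1."""
    return x / (abs(x) + 1)
```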
Given the following machine learning model name: Cyclical Learning Rate Policy, provide a description of the model | A **Cyclical Learning Rate Policy** combines a linear learning rate decay with warm restarts.
Image: [ESPNetv2](https://paperswithcode.com/method/espnetv2) |
Given the following machine learning model name: NPID, provide a description of the model | **NPID** (Non-Parametric Instance Discrimination) is a self-supervision approach that takes a non-parametric classification approach. Noise contrastive estimation is used to learn representations. Specifically, distances (similarity) between instances are calculated directly from the features in a non-parametric way. |
Given the following machine learning model name: Amplifying Sine Unit: An Oscillatory Activation Function for Deep Neural Networks to Recover Nonlinear Oscillations Efficiently, provide a description of the model | 2023 |
Given the following machine learning model name: Quantum Process Tomography, provide a description of the model | |
Given the following machine learning model name: NAS-FPN, provide a description of the model | **NAS-FPN** is a Feature Pyramid Network that is discovered via [Neural Architecture Search](https://paperswithcode.com/method/neural-architecture-search) in a novel scalable search space covering all cross-scale connections. The discovered architecture consists of a combination of top-down and bottom-up connections to... |
Given the following machine learning model name: Gaussian Error Linear Units, provide a description of the model | The **Gaussian Error Linear Unit**, or **GELU**, is an activation function. The GELU activation function is $x\Phi(x)$, where $\Phi(x)$ is the standard Gaussian cumulative distribution function. The GELU nonlinearity weights inputs by their percentile, rather than gates inputs by their sign as in [ReLUs](https://paperswi... |
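The exact form $x\Phi(x)$ can be written with the error function, since $\Phi(x) = \tfrac{1}{2}\left(1 + \mathrm{erf}(x/\sqrt{2})\right)$:

```python
import math

def gelu(x):
    """Exact GELU: x * Phi(x), with Phi the standard normal CDF."""
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

Tanh-based approximations are common in practice, but the erf form above is exact.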
Given the following machine learning model name: CutMix, provide a description of the model | **CutMix** is an image data augmentation strategy. Instead of simply removing pixels as in [Cutout](https://paperswithcode.com/method/cutout), we replace the removed regions with a patch from another image. The ground truth labels are also mixed proportionally to the number of pixels of combined images. The added patch... |
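A single-channel sketch of the cut-and-paste plus proportional label mixing described above; the box sampling follows the usual uniform-center recipe, and the function and its arguments are illustrative:

```python
import random

def cutmix(img_a, img_b, label_a, label_b, lam=None):
    """CutMix sketch: paste a random box from img_b into img_a; mix labels by area.

    img_a, img_b: H x W nested lists (single channel for brevity).
    lam: target proportion of img_a kept; drawn uniformly if None.
    """
    H, W = len(img_a), len(img_a[0])
    if lam is None:
        lam = random.random()
    # box covering roughly (1 - lam) of the area, centered at a random point
    cut_h, cut_w = int(H * (1 - lam) ** 0.5), int(W * (1 - lam) ** 0.5)
    cy, cx = random.randrange(H), random.randrange(W)
    y0, y1 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, H)
    x0, x1 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, W)
    mixed = [row[:] for row in img_a]
    for y in range(y0, y1):
        mixed[y][x0:x1] = img_b[y][x0:x1]
    # recompute lambda from the exact pasted area (the box may be clipped)
    lam_adj = 1 - (y1 - y0) * (x1 - x0) / (H * W)
    mixed_label = tuple(lam_adj * la + (1 - lam_adj) * lb
                        for la, lb in zip(label_a, label_b))
    return mixed, mixed_label
```

With `lam=1.0` the box is empty and the pair `(img_a, label_a)` comes back unchanged; for one-hot labels, the mixed label always sums to 1.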
Given the following machine learning model name: Denoising Autoencoder, provide a description of the model | A **Denoising Autoencoder** is a modification on the [autoencoder](https://paperswithcode.com/method/autoencoder) to prevent the network learning the identity function. Specifically, if the autoencoder is too big, then it can just learn the data, so the output equals the input, and does not perform any useful represent... |
Given the following machine learning model name: Feedback Transformer, provide a description of the model | A **Feedback Transformer** is a type of sequential transformer that exposes all previous representations to all future representations, meaning the lowest representation of the current timestep is formed from the highest-level abstract representation of the past. This feedback nature allows this architecture to perform... |
Given the following machine learning model name: BigBiGAN, provide a description of the model | **BigBiGAN** is a type of [BiGAN](https://paperswithcode.com/method/bigan) with a [BigGAN](https://paperswithcode.com/method/biggan) image generator. The authors initially used [ResNet](https://paperswithcode.com/method/resnet) as a baseline for the encoder $\mathcal{E}$ followed by a 4-layer MLP with skip connections,... |
Given the following machine learning model name: BLANC, provide a description of the model | **BLANC** is an automatic estimation approach for document summary quality. The goal is to measure the functional performance of a summary with an objective, reproducible, and fully automated method. BLANC achieves this by measuring the performance boost gained by a pre-trained language model with access to a document ... |
Given the following machine learning model name: Cascade Mask R-CNN, provide a description of the model | **Cascade Mask R-CNN** extends [Cascade R-CNN](https://paperswithcode.com/method/cascade-r-cnn) to instance segmentation by adding a mask head to the cascade. In [Mask R-CNN](https://paperswithcode.com/method/mask-r-cnn), the segmentation branch is inserted in parallel to the detection branch. However, the Cas... |
Given the following machine learning model name: VATT, provide a description of the model | **Video-Audio-Text Transformer**, or **VATT**, is a framework for learning multimodal representations from unlabeled data using [convolution](https://paperswithcode.com/method/convolution)-free [Transformer](https://paperswithcode.com/method/transformer) architectures. Specifically, it takes raw signals as inputs and e... |
Given the following machine learning model name: SRGAN, provide a description of the model | **SRGAN** is a generative adversarial network for single image super-resolution. It uses a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes the solution to the natural image manifold using a discriminator network that is trained to differentiate between the ... |
Given the following machine learning model name: HRank, provide a description of the model | **HRank** is a filter pruning method that explores the High Rank of the feature map in each layer (HRank). The proposed HRank is inspired by the discovery that the average rank of multiple feature maps generated by a single filter is always the same, regardless of the number of image batches CNNs receive. Based on HRa... |
Given the following machine learning model name: Global Coupled Adaptive Number of Shots, provide a description of the model | **gCANS**, or **Global Coupled Adaptive Number of Shots**, is a variational quantum algorithm for stochastic gradient descent. It adaptively allocates shots for the measurement of each gradient component at each iteration. The optimizer uses a criterion for allocating shots that incorporates information about the overa... |
Given the following machine learning model name: Differential attention for visual question answering, provide a description of the model | In this paper we aim to answer questions based on images when provided with a dataset of question-answer pairs for a number of images during training. A number of methods have focused on solving this problem by using image based attention. This is done by focusing on a specific part of the image while answering the que... |
Given the following machine learning model name: Subformer, provide a description of the model | **Subformer** is a [Transformer](https://paperswithcode.com/method/transformer) that combines sandwich-style parameter sharing, which overcomes naive cross-layer parameter sharing in generative models, and self-attentive embedding factorization (SAFE). In SAFE, a small self-attention layer is used to reduce embedding p... |
Given the following machine learning model name: Byte Pair Encoding, provide a description of the model | **Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional... |
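The merge loop at the heart of BPE fits in a short sketch; the corpus layout here is illustrative (words as symbol tuples with frequencies), not from the source:

```python
from collections import Counter

def bpe_merges(corpus, num_merges):
    """Learn BPE merges: repeatedly fuse the most frequent adjacent symbol pair.

    corpus: dict mapping a word (tuple of symbols) to its frequency.
    Returns (list of learned merges, final segmented vocab).
    """
    vocab = dict(corpus)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():           # count adjacent pairs, weighted by freq
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_vocab = {}
        for word, freq in vocab.items():           # apply the merge everywhere
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges, vocab
```

On `{('l','o','w'): 5, ('l','o','w','e','r'): 2}` the first two merges are `('l','o')` then `('lo','w')`, producing the subword unit `low`.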
Given the following machine learning model name: Kollen-Pollack Learning, provide a description of the model | |
Given the following machine learning model name: Track objects as points, provide a description of the model | Our tracker, CenterTrack, applies a detection model to a pair of images and detections from the prior frame. Given this minimal input, CenterTrack localizes objects and predicts their associations with the previous frame. That's it. CenterTrack is simple, online (no peeking into the future), and real-time. |
Given the following machine learning model name: TURL: Table Understanding through Representation Learning, provide a description of the model | Relational tables on the Web store a vast amount of knowledge. Owing to the wealth of such tables, there has been tremendous progress on a variety of tasks in the area of table understanding. However, existing work generally relies on heavily-engineered task- specific features and model architectures. In this paper, we... |
Given the following machine learning model name: MODNet, provide a description of the model | **MODNet** is a light-weight matting objective decomposition network that can process portrait matting from a single input image in real time. The design of MODNet benefits from optimizing a series of correlated sub-objectives simultaneously via explicit constraints. To overcome the domain shift problem, MODNet introdu... |
Given the following machine learning model name: TopK Copy, provide a description of the model | **TopK Copy** is a cross-attention guided copy mechanism for entity extraction where only the Top-$k$ important attention heads are used for computing copy distributions. The motivation is that attention heads may not be equally important, and that some heads can be pruned out with a marginal decrease in overall perf... |
Given the following machine learning model name: Streaming Module, provide a description of the model | |
Given the following machine learning model name: Instruction Pointer Attention Graph Neural Network, provide a description of the model | **Instruction Pointer Attention Graph Neural Network**, or **IPA-GNN**, is a learning-interpreter neural network (LNN) based on GNNs for learning to execute programs. It achieves improved systematic generalization on the task of learning to execute programs using control flow graphs. The model arises by considering R... |
Given the following machine learning model name: Composed Video Retrieval, provide a description of the model | Composed video retrieval (CoVR) is a new task in which the goal is to find a video that matches both a query image and a query text. The query image represents a visual concept that the user is interested in, and the query text specifies how the concept should be modified or refined. For example, given an image ... |
Given the following machine learning model name: Rank-based loss, provide a description of the model | |
Given the following machine learning model name: Normalized Linear Combination of Activations, provide a description of the model | The **Normalized Linear Combination of Activations**, or **NormLinComb**, is a type of activation function that has trainable parameters and uses the normalized linear combination of other activation functions.
$$NormLinComb(x) = \frac{\sum\limits_{i=0}^{n} w_i \mathcal{F}_i(x)}{\lVert W \rVert}$$ |
Given the following machine learning model name: Conditional Positional Encoding, provide a description of the model | **Conditional Positional Encoding**, or **CPE**, is a type of positional encoding for [vision transformers](https://paperswithcode.com/methods/category/vision-transformer). Unlike previous fixed or learnable positional encodings, which are predefined and independent of input tokens, CPE is dynamically generated and con... |
Given the following machine learning model name: Adaptive Bezier-Curve Network, provide a description of the model | **Adaptive Bezier-Curve Network**, or **ABCNet**, is an end-to-end framework for arbitrarily-shaped scene text spotting. It adaptively fits arbitrarily-shaped text with a parameterized Bézier curve. It also utilizes a feature alignment layer, [BezierAlign](https://paperswithcode.com/method/bezieralign), to calculate convol... |
Given the following machine learning model name: Continuous Kernel Convolution, provide a description of the model | |
Given the following machine learning model name: OSCAR, provide a description of the model | OSCAR is a new learning method that uses object tags detected in images as anchor points to ease the learning of image-text alignment. The model takes a triple (word-tag-region) as input and is pre-trained with two losses (a masked token loss over words and tags, and a contrastive loss between tags and others). OSCAR represe... |
Given the following machine learning model name: Vision-and-Language BERT, provide a description of the model | **Vision-and-Language BERT** (**ViLBERT**) is a [BERT](https://paperswithcode.com/method/bert)-based model for learning task-agnostic joint representations of image content and natural language. ViLBERT extends the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in ... |
Given the following machine learning model name: PnP, provide a description of the model | **PnP**, or **Poll and Pool**, is a sampling module extension for [DETR](https://paperswithcode.com/method/detr)-type architectures that adaptively allocates its computation spatially to be more efficient. Concretely, the PnP module abstracts the image feature map into fine foreground object feature vectors and a small n... |
Given the following machine learning model name: Branch attention, provide a description of the model | Branch attention can be seen as a dynamic branch selection mechanism, deciding which branch to pay attention to, and is used with a multi-branch structure. |
Given the following machine learning model name: Matrix-power Normalization, provide a description of the model |