| prompts | description |
|---|---|
Given the following machine learning model name: Instance-Level Meta Normalization, provide a description of the model | **Instance-Level Meta Normalization** is a normalization method that addresses a learning-to-normalize problem. ILM-Norm learns to predict the normalization parameters via both the feature feed-forward and the gradient back-propagation paths. It uses an auto-encoder to predict the weights $\omega$ and bias $\beta$ as t... |
Given the following machine learning model name: Feature-Aligned Person Search Network, provide a description of the model | **AlignPS**, or **Feature-Aligned Person Search Network**, is an anchor-free framework for efficient person search. The model employs the typical architecture of an anchor-free detection model (i.e., [FCOS](https://paperswithcode.com/method/fcos)). An aligned feature aggregation (AFA) module is designed to make the mod... |
Given the following machine learning model name: Big-Little Module, provide a description of the model | **Big-Little Modules** are blocks for image models that have two branches, each of which represents a separate block from a deep model and a less deep counterpart. They were proposed as part of the [BigLittle-Net](https://paperswithcode.com/method/big-little-net) architecture. The two branches are fused with a linear c... |
Given the following machine learning model name: Runge Kutta optimization, provide a description of the model | The optimization field suffers from metaphor-based “pseudo-novel” or “fancy” optimizers. Most of these clichéd methods mimic animals' searching trends and contribute little to the optimization process itself. Most of them suffer from locally efficient performance, biased verification m... |
Given the following machine learning model name: Autoencoders, provide a description of the model | An **autoencoder** is a type of artificial neural network used to learn efficient data codings in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise”. Along with the reductio... |
Given the following machine learning model name: Sparse Layer-wise Adaptive Moments optimizer for large Batch training, provide a description of the model | |
Given the following machine learning model name: Generalized State-Dependent Exploration, provide a description of the model | **Generalized State-Dependent Exploration**, or **gSDE**, is an exploration method for reinforcement learning that uses more general features and re-samples the noise periodically. State-Dependent Exploration (SDE) is an intermediate solution for exploration that consists in adding noise as a function of the state... |
Given the following machine learning model name: Inpainting, provide a description of the model | Train a convolutional neural network to generate the contents of an arbitrary image region conditioned on its surroundings. |
Given the following machine learning model name: Convolutional GRU, provide a description of the model | A **Convolutional Gated Recurrent Unit** is a type of [GRU](https://paperswithcode.com/method/gru) that combines GRUs with the [convolution](https://paperswithcode.com/method/convolution) operation. The update rule for input $x\_{t}$ and the previous output $h\_{t-1}$ is given by the following: $$ r = \sigma\left(W\... |
Given the following machine learning model name: Gradient Harmonizing Mechanism C, provide a description of the model | **GHM-C** is a loss function designed to balance the gradient flow for anchor classification. The GHM first performs statistics on the number of examples with similar attributes w.r.t their gradient density and then attaches a harmonizing parameter to the gradient of each example according to the density. The modificat... |
Given the following machine learning model name: CBHG, provide a description of the model | **CBHG** is a building block used in the [Tacotron](https://paperswithcode.com/method/tacotron) text-to-speech model. It consists of a bank of 1-D convolutional filters, followed by highway networks and a bidirectional gated recurrent unit ([BiGRU](https://paperswithcode.com/method/bigru)). The module is used to ex... |
Given the following machine learning model name: BinaryBERT, provide a description of the model | **BinaryBERT** is a [BERT](https://paperswithcode.com/method/bert)-variant that applies quantization in the form of weight binarization. Specifically, ternary weight splitting is proposed which initializes BinaryBERT by equivalently splitting from a half-sized ternary network. To obtain BinaryBERT, we first train a hal... |
Given the following machine learning model name: Big-Little Net, provide a description of the model | **Big-Little Net** is a convolutional neural network architecture for learning multi-scale feature representations. This is achieved by using a multi-branch network, which has different computational complexity at different branches with different resolutions. Through frequent merging of features from branches at disti... |
Given the following machine learning model name: DeCLUTR, provide a description of the model | **DeCLUTR** is an approach for learning universal sentence embeddings that utilizes a self-supervised objective that does not require labelled training data. The objective learns universal sentence embeddings by training an encoder to minimize the distance between the embeddings of textual segments randomly sampled fro... |
Given the following machine learning model name: Highway Layer, provide a description of the model | A **Highway Layer** contains an information highway to other layers that helps with information flow. It is characterised by the use of a gating unit to help this information flow. A plain feedforward neural network typically consists of $L$ layers where the $l$th layer ($l \in \{1, 2, \dots, L\}$) applies a nonlin... |
Given the following machine learning model name: Generative Emotion Estimator, provide a description of the model | |
Given the following machine learning model name: PolarMask, provide a description of the model | **PolarMask** is an anchor-box free and single-shot instance segmentation method. Specifically, PolarMask takes an image as input and predicts the distance from a sampled positive location (i.e., a candidate object's center) with respect to the object's contour at each angle, and then assembles the predicted points to pro... |
Given the following machine learning model name: KNN and IOU based verification, provide a description of the model | **KNN and IoU-based Verification** is used to verify detections and choose between multiple detections of the same underlying object. It was originally used within the context of blood cell counting in medical images. To avoid the double-counting problem, the KNN algorithm is applied to each platelet to determine its ... |
Given the following machine learning model name: ByteScheduler, provide a description of the model | **ByteScheduler** is a generic communication scheduler for distributed DNN training acceleration. It is based on the analysis that partitioning and rearranging tensor transmissions can produce optimal results in theory and good real-world performance even with scheduling overhead. |
Given the following machine learning model name: Transductive Inference, provide a description of the model | |
Given the following machine learning model name: GAN Feature Matching, provide a description of the model | **Feature Matching** is a regularizing objective for a generator in [generative adversarial networks](https://paperswithcode.com/methods/category/generative-adversarial-networks) that prevents it from overtraining on the current discriminator. Instead of directly maximizing the output of the discriminator, the new obje... |
Given the following machine learning model name: BiFPN, provide a description of the model | A **BiFPN**, or **Weighted Bi-directional Feature Pyramid Network**, is a type of feature pyramid network which allows easy and fast multi-scale feature fusion. It incorporates the multi-level feature fusion idea from [FPN](https://paperswithcode.com/method/fpn), [PANet](https://paperswithcode.com/method/panet) and [NA... |
Given the following machine learning model name: NeuroTactic, provide a description of the model | **NeuroTactic** is a model for theorem proving which leverages [graph neural networks](https://paperswithcode.com/methods/category/graph-models) to represent the theorem and premises, and applies graph contrastive learning for pre-training. Specifically, premise selection is designed as a pretext task for the graph con... |
Given the following machine learning model name: ENIGMA, provide a description of the model | **ENIGMA** is an evaluation framework for dialog systems based on Pearson and Spearman's rank correlations between the estimated rewards and the true rewards. ENIGMA only requires a handful of pre-collected experience data, and therefore does not involve human interaction with the target policy during the evaluation, ... |
Given the following machine learning model name: Stable Rank Normalization, provide a description of the model | **Stable Rank Normalization (SRN)** is a weight-normalization scheme which minimizes the stable rank of a linear operator. It simultaneously controls the Lipschitz constant and the stable rank of a linear operator. Stable rank is a softer version of the rank operator and is defined as the squared ratio of the Frobeniu... |
Given the following machine learning model name: Bidirectional GAN, provide a description of the model | A **BiGAN**, or **Bidirectional GAN**, is a type of generative adversarial network where the generator not only maps latent samples to generated data, but also has an inverse mapping from data to the latent representation. The motivation is to make a type of GAN that can learn rich representations for use in applicatio... |
Given the following machine learning model name: Factorized Random Synthesized Attention, provide a description of the model | **Factorized Random Synthesized Attention**, introduced with the [Synthesizer](https://paperswithcode.com/method/synthesizer) architecture, is similar to [factorized dense synthesized attention](https://paperswithcode.com/method/factorized-dense-synthesized-attention) but for random synthesizers. Letting $R$ be a ra... |
Given the following machine learning model name: PAFPN, provide a description of the model | **PAFPN** is a feature pyramid module used in Path Aggregation networks ([PANet](https://paperswithcode.com/method/panet)) that combines FPNs with [bottom-up path augmentation](https://paperswithcode.com/method/bottom-up-path-augmentation), which shortens the information path between lower layers and topmost feature. |
Given the following machine learning model name: Go-Explore, provide a description of the model | **Go-Explore** is a family of algorithms aiming to tackle two challenges with effective exploration in reinforcement learning: algorithms forgetting how to reach previously visited states ("detachment") and failing to first return to a state before exploring from it ("derailment"). To avoid detachment, Go-Explo... |
Given the following machine learning model name: Prioritized Experience Replay, provide a description of the model | **Prioritized Experience Replay** is a type of [experience replay](https://paperswithcode.com/method/experience-replay) in reinforcement learning where we more frequently replay transitions with high expected learning progress, as measured by the magnitude of their temporal-difference (TD) error. This prioritization ca... |
Given the following machine learning model name: Siamese Network, provide a description of the model | A **Siamese Network** consists of twin networks which accept distinct inputs but are joined by an energy function at the top. This function computes a metric between the highest level feature representation on each side. The parameters between the twin networks are tied. [Weight tying](https://paperswithcode.com/method... |
Given the following machine learning model name: Deep Residual Pansharpening Neural Network, provide a description of the model | In the field of fusing multi-spectral and panchromatic images (Pan-sharpening), the impressive effectiveness of deep neural networks has been recently employed to overcome the drawbacks of traditional linear models and boost the fusing accuracy. However, to the best of our knowledge, existing research works are mainly ... |
Given the following machine learning model name: PipeDream-2BW, provide a description of the model | **PipeDream-2BW** is an asynchronous pipeline parallel method that supports memory-efficient pipeline parallelism, a hybrid form of parallelism that combines data and model parallelism with input pipelining. PipeDream-2BW uses a novel pipelining and weight gradient coalescing strategy, combined with the double bufferin... |
Given the following machine learning model name: XLSR, provide a description of the model | **XLSR** is a multilingual speech recognition model built on wav2vec 2.0 which is trained by solving a contrastive task over masked latent speech representations and jointly learns a quantization of the latents shared across languages. The model is fine-tuned on labeled data and experiments show that cross-lingual pret... |
Given the following machine learning model name: Prescribed Generative Adversarial Network, provide a description of the model | **Prescribed GANs** add noise to the output of a density network and optimize an entropy-regularized adversarial loss. The added noise renders tractable approximations of the predictive log-likelihood and stabilizes the training procedure. The entropy regularizer encourages PresGANs to capture all the modes of the data... |
Given the following machine learning model name: Closed-loop Weighted Empirical Risk Minimization, provide a description of the model | A closed-loop evaluation procedure is first used in a simulator to identify training data samples that are important for practical driving performance; these samples are then used to help debias the policy network. |
Given the following machine learning model name: Feature Pyramid Network, provide a description of the model | A **Feature Pyramid Network**, or **FPN**, is a feature extractor that takes a single-scale image of an arbitrary size as input, and outputs proportionally sized feature maps at multiple levels, in a fully convolutional fashion. This process is independent of the backbone convolutional architectures. It therefore acts ... |
Given the following machine learning model name: Accumulating Eligibility Trace, provide a description of the model | An **Accumulating Eligibility Trace** is a type of [eligibility trace](https://paperswithcode.com/method/eligibility-trace) where the trace increments in an accumulative way. For the memory vector $\textbf{e}\_{t} \in \mathbb{R}^{b}$ with $\textbf{e}\_{t} \geq \textbf{0}$: $$\textbf{e}\_{0} = \textbf{0}$$ $$\textbf{e}\_{t} = \nabla{\ha... |
Given the following machine learning model name: classifier-guidance, provide a description of the model | |
Given the following machine learning model name: mBART, provide a description of the model | **mBART** is a sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in many languages using the [BART objective](https://paperswithcode.com/method/bart). The input texts are noised by masking phrases and permuting sentences, and a single [Transformer model](https://paperswithcode.c... |
Given the following machine learning model name: SC-GPT, provide a description of the model | **SC-GPT** is a multi-layer [Transformer](http://paperswithcode.com/method/transformer) neural language model, trained in three steps: (i) Pre-trained on plain text, similar to [GPT-2](http://paperswithcode.com/method/gpt-2); (ii) Continuously pretrained on large amounts of dialog-act labeled utterances corpora to acqu... |
Given the following machine learning model name: HyperNetwork, provide a description of the model | A **HyperNetwork** is a network that generates weights for a main network. The behavior of the main network is the same with any usual neural network: it learns to map some raw inputs to their desired targets; whereas the hypernetwork takes a set of inputs that contain information about the structure of the weights an... |
Given the following machine learning model name: Proximity Regularization, provide a description of the model | |
Given the following machine learning model name: LR-Net, provide a description of the model | An **LR-Net** is a type of non-convolutional neural network that utilises local relation layers instead of convolutions for image feature extraction. Otherwise, the architecture follows the same design as a [ResNet](https://paperswithcode.com/method/resnet). |
Given the following machine learning model name: Contextual Graph Markov Model, provide a description of the model | Contextual Graph Markov Model (CGMM) is an approach combining ideas from generative models and neural networks for the processing of graph data. It is founded on a constructive methodology to build a deep architecture comprising layers of probabilistic models that learn to encode the structured information in an incrementa... |
Given the following machine learning model name: Elastic Dense Block, provide a description of the model | **Elastic Dense Block** is a skip connection block that modifies the [Dense Block](https://paperswithcode.com/method/dense-block) with downsamplings and upsamplings in parallel branches at each layer to let the network learn from a data scaling policy in which inputs are processed at different resolutions in each layer... |
Given the following machine learning model name: Internet Explorer, provide a description of the model | Internet Explorer explores the web in a self-supervised manner to progressively find relevant examples that improve performance on a desired target dataset. It cycles between searching for images on the Internet with text queries, self-supervised training on downloaded images, determining which images were useful, and ... |
Given the following machine learning model name: WordPiece, provide a description of the model | **WordPiece** is a subword segmentation algorithm used in natural language processing. The vocabulary is initialized with individual characters in the language, then the most frequent combinations of symbols in the vocabulary are iteratively added to the vocabulary. The process is: 1. Initialize the word unit inven... |
Given the following machine learning model name: Wavelet Distributed Training, provide a description of the model | **Wavelet** is an asynchronous data parallel approach that interleaves waves of training tasks on the same group of GPUs, such that tasks belonging to one wave can leverage on-device memory from tasks in another wave during their memory valley period, thus boosting training throughput. As shown in the Figure, Wavelet ... |
Given the following machine learning model name: Transformer in Transformer, provide a description of the model | [Transformer](https://paperswithcode.com/method/transformer) is a type of self-attention-based neural network originally applied to NLP tasks. Recently, pure transformer-based models have been proposed to solve computer vision problems. These visual transformers usually view an image as a sequence of patches while they ign... |
Given the following machine learning model name: ConvBERT, provide a description of the model | **ConvBERT** is a modification on the [BERT](https://paperswithcode.com/method/bert) architecture which uses a [span-based dynamic convolution](https://paperswithcode.com/method/span-based-dynamic-convolution) to replace self-attention heads to directly model local dependencies. Specifically a new [mixed attention modu... |
Given the following machine learning model name: EdgeFlow, provide a description of the model | **EdgeFlow** is an interactive segmentation architecture that fully utilizes interactive information of user clicks with edge-guided flow. Edge guidance is the idea that interactive segmentation improves segmentation masks progressively with user clicks. Based on user clicks, an edge mask scheme is used, which takes th... |
Given the following machine learning model name: Efficient Channel Attention, provide a description of the model | **Efficient Channel Attention** is an architectural unit based on [squeeze-and-excitation](https://paperswithcode.com/method/squeeze-and-excitation-block) blocks that reduces model complexity without dimensionality reduction. It was proposed as part of the [ECA-Net](https://paperswithcode.com/method/eca-net) CNN archit... |
Given the following machine learning model name: Invertible NxN Convolution, provide a description of the model | |
Given the following machine learning model name: Approximate Bayesian Computation, provide a description of the model | A class of methods in Bayesian statistics where the posterior distribution is approximated via a rejection scheme over simulations, because the likelihood function is intractable. Different parameters are sampled and simulated; a distance function is then calculated to measure the quality of each simulation compared to d... |
Given the following machine learning model name: Positional Encoding Generator, provide a description of the model | **Positional Encoding Generator**, or **PEG**, is a module used in [Conditional Positional Encoding](https://paperswithcode.com/method/conditional-positional-encoding) position embeddings. It dynamically produces the positional encodings conditioned on the local neighborhood of an input token. To condition on the loca... |
Given the following machine learning model name: Hierarchical Softmax, provide a description of the model | **Hierarchical Softmax** is an alternative to [softmax](https://paperswithcode.com/method/softmax) that is faster to evaluate: it is $O\left(\log{n}\right)$ time to evaluate compared to $O\left(n\right)$ for softmax. It utilises a multi-layer binary tree, where the probability of a word is calculated through the p... |
Given the following machine learning model name: Differentiable Neural Architecture Search, provide a description of the model | **DNAS**, or **Differentiable Neural Architecture Search**, uses gradient-based methods to optimize ConvNet architectures, avoiding enumerating and training individual architectures separately as in previous methods. DNAS allows us to explore a layer-wise search space where we can choose a different block for each laye... |
Given the following machine learning model name: Animatable Reconstruction of Clothed Humans, provide a description of the model | **Animatable Reconstruction of Clothed Humans** is an end-to-end framework for accurate reconstruction of animation-ready 3D clothed humans from a monocular image. ARCH is a learned pose-aware model that produces detailed 3D rigged full-body human avatars from a single unconstrained RGB image. A Semantic Space and a Se... |
Given the following machine learning model name: VQ-VAE-2, provide a description of the model | **VQ-VAE-2** is a type of variational autoencoder that combines a two-level hierarchical VQ-[VAE](https://paperswithcode.com/method/vae) with a self-attention autoregressive model ([PixelCNN](https://paperswithcode.com/method/pixelcnn)) as a prior. The encoder and decoder architectures are kept simple and light-weigh... |
Given the following machine learning model name: Levenshtein Transformer, provide a description of the model | The **Levenshtein Transformer** (LevT) is a type of [transformer](https://paperswithcode.com/method/transformer) that aims to address the lack of flexibility of previous decoding models. Notably, in previous frameworks, the length of generated sequences is either fixed or monotonically increased as the decoding proceed... |
Given the following machine learning model name: Contour Stochastic Gradient Langevin Dynamics, provide a description of the model | Simulations of multi-modal distributions can be very costly and often lead to unreliable predictions. To accelerate the computations, we propose to sample from a flattened distribution and estimate the importance weights between the original distribution and the flattened distribution to ... |
Given the following machine learning model name: Voxel R-CNN, provide a description of the model | **Voxel R-CNN** is a voxel-based two stage framework for 3D object detection. It consists of a 3D backbone network, a 2D bird-eye-view (BEV) Region Proposal Network and a detect head. Voxel RoI Pooling is devised to extract RoI features directly from raw features for further refinement. End-to-end, the point clouds... |
Given the following machine learning model name: Adaptive Richard's Curve Weighted Activation, provide a description of the model | This work introduces a novel activation unit that can be efficiently employed in deep neural nets (DNNs) and performs significantly better than the traditional Rectified Linear Units ([ReLU](https://paperswithcode.com/method/relu)). The function developed is a two parameter version of the specialized Richard's Curve an... |
Given the following machine learning model name: FeatureNMS, provide a description of the model | **Feature Non-Maximum Suppression**, or **FeatureNMS**, is a post-processing step for object detection models that removes duplicates where there are multiple detections outputted per object. FeatureNMS recognizes duplicates not only based on the intersection over union between the bounding boxes, but also based on the... |
Given the following machine learning model name: Octave Convolution, provide a description of the model | An **Octave Convolution (OctConv)** stores and processes feature maps that vary spatially “slower” at a lower spatial resolution, reducing both memory and computation cost. It takes in feature maps containing tensors of two frequencies one octave apart, and extracts information directly from the low-frequency maps withou... |
Given the following machine learning model name: Fishr, provide a description of the model | **Fishr** is a learning scheme to enforce domain invariance in the space of the gradients of the loss function: specifically, it introduces a regularization term that matches the domain-level variances of gradients across training domains. Critically, the strategy exhibits close relations with the Fisher Information an... |
Given the following machine learning model name: Context Enhancement Module, provide a description of the model | **Context Enhancement Module (CEM)** is a feature extraction module used in object detection (specifically, [ThunderNet](https://paperswithcode.com/method/thundernet)) which aims to enlarge the receptive field. The key idea of CEM is to aggregate multi-scale local context information and global context information ... |
Given the following machine learning model name: Rectified Linear Units, provide a description of the model | **Rectified Linear Units**, or **ReLUs**, are a type of activation function that are linear in the positive dimension, but zero in the negative dimension. The kink in the function is the source of the non-linearity. Linearity in the positive dimension has the attractive property that it prevents non-saturation of gradi... |
Given the following machine learning model name: DU-GAN, provide a description of the model | **DU-GAN** is a [generative adversarial network](https://www.paperswithcode.com/methods/category/generative-adversarial-networks) for LDCT denoising in medical imaging. The generator produces denoised LDCT images, and two independent branches with [U-Net](https://paperswithcode.com/method/u-net) based discriminators pe... |
Given the following machine learning model name: Deformable Convolution, provide a description of the model | **Deformable convolutions** add 2D offsets to the regular grid sampling locations in the standard [convolution](https://paperswithcode.com/method/convolution). It enables free form deformation of the sampling grid. The offsets are learned from the preceding feature maps, via additional convolutional layers. Thus, the d... |
Given the following machine learning model name: PSANet, provide a description of the model | **PSANet** is a semantic segmentation architecture that utilizes a [Point-wise Spatial Attention](https://paperswithcode.com/method/point-wise-spatial-attention) (PSA) module to aggregate long-range contextual information in a flexible and adaptive manner. Each position in the feature map is connected with all other on... |
Given the following machine learning model name: Longformer, provide a description of the model | **Longformer** is a modified [Transformer](https://paperswithcode.com/method/transformer) architecture. Traditional [Transformer-based models](https://paperswithcode.com/methods/category/transformers) are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequenc... |
Given the following machine learning model name: Stein Variational Policy Gradient, provide a description of the model | **Stein Variational Policy Gradient**, or **SVPG**, is a policy gradient based method in reinforcement learning that uses Stein Variational Gradient Descent to allow simultaneous exploitation and exploration of multiple policies. Unlike traditional policy optimization which attempts to learn a single policy, SVPG model... |
Given the following machine learning model name: AdaBound, provide a description of the model | **AdaBound** is a variant of the [Adam](https://paperswithcode.com/method/adam) stochastic optimizer which is designed to be more robust to extreme learning rates. Dynamic bounds are employed on learning rates, where the lower and upper bound are initialized as zero and infinity respectively, and they both smoothly... |
Given the following machine learning model name: Uncertainty Class Activation Map (U-CAM) Using Gradient Certainty Method, provide a description of the model | Understanding and explaining deep learning models is an imperative task. Towards this, we propose a method that obtains gradient-based certainty estimates that also provide [visual attention](https://paperswithcode.com/method/visual-attention) maps. Particularly, we solve for visual question answering task. We incorpor... |
Given the following machine learning model name: Twin Delayed Deep Deterministic, provide a description of the model | **TD3** builds on the [DDPG](https://paperswithcode.com/method/ddpg) algorithm for reinforcement learning, with a couple of modifications aimed at tackling overestimation bias with the value function. In particular, it utilises [clipped double Q-learning](https://paperswithcode.com/method/clipped-double-q-learning), de... |
Given the following machine learning model name: Feature Fusion Module v2, provide a description of the model | **Feature Fusion Module v2** is a feature fusion module from the [M2Det](https://paperswithcode.com/method/m2det) object detection model, and is crucial for constructing the final multi-level feature pyramid. They use [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) layers to compress the channels o... |
Given the following machine learning model name: Graph Network-based Simulators, provide a description of the model | **Graph Network-Based Simulators** is a type of graph neural network that represents the state of a physical system with particles, expressed as nodes in a graph, and computes dynamics via learned message-passing. |
Given the following machine learning model name: Confidence Intervals for Diffusion Models, provide a description of the model | Given a corrupted input image, Con*ffusion* repurposes a pretrained diffusion model to generate lower and upper bounds around each reconstructed pixel. The true pixel value is guaranteed to fall within these bounds with probability $p$. |
Given the following machine learning model name: StreaMRAK, provide a description of the model | **StreaMRAK** is a streaming version of kernel ridge regression. It divides the problem into several levels of resolution, which allows the predictions to be refined continually. |
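StreaMRAK builds on kernel ridge regression; the base method (non-streaming, single resolution level) can be sketched as below. The Gaussian kernel and the hyperparameters `lam` and `gamma` are illustrative assumptions, not StreaMRAK's actual configuration.

```python
import numpy as np

def kernel_ridge_fit(X, y, lam=1e-3, gamma=1.0):
    # Solve (K + lam * I) alpha = y with a Gaussian (RBF) kernel matrix K
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ alpha

X = np.linspace(0, 1, 20).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
alpha = kernel_ridge_fit(X, y)
pred = kernel_ridge_predict(X, alpha, X)
```

StreaMRAK replaces the single dense solve with a multi-resolution, streaming scheme so the model can be updated as data arrives.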
Given the following machine learning model name: Sparse Switchable Normalization, provide a description of the model | **Sparse Switchable Normalization (SSN)** is a variant on [Switchable Normalization](https://paperswithcode.com/method/switchable-normalization) where the importance ratios are constrained to be sparse. Unlike $\ell_1$ and $\ell_0$ constraints that impose difficulties in optimization, the constrained optimization probl... |
Given the following machine learning model name: YOLOP, provide a description of the model | **YOLOP** is a panoptic driving perception network for handling traffic object detection, drivable area segmentation and lane detection simultaneously. It is composed of one encoder for feature extraction and three decoders to handle the specific tasks. It can be thought of as a lightweight version of Tesla's HydraNet mod... |
Given the following machine learning model name: Pointer Network, provide a description of the model | **Pointer Networks** tackle problems where both input and output are sequences, but which cannot be solved by seq2seq-type models because the discrete output categories correspond to positions in a variable-length input (and so cannot be fixed in advance).
A Pointer Network learns the conditional probability of an output sequen... |
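The core mechanism is that attention scores over input positions *are* the output distribution: the network "points" at input elements instead of emitting tokens from a fixed vocabulary. A minimal sketch with a dot-product scoring function (the actual model uses an additive attention score):

```python
import numpy as np

def pointer_distribution(decoder_state, encoder_states):
    # Score each input position against the decoder state, then softmax:
    # the result is a distribution over input positions, not over a
    # fixed output vocabulary.
    scores = encoder_states @ decoder_state
    exp = np.exp(scores - scores.max())  # numerically stable softmax
    return exp / exp.sum()

enc = np.random.randn(7, 16)   # 7 input elements, 16-dim encodings
dec = np.random.randn(16)
p = pointer_distribution(dec, enc)
```

Because the distribution's support is the input itself, the same network handles inputs of any length.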
Given the following machine learning model name: NADAM, provide a description of the model | **NADAM**, or **Nesterov-accelerated Adaptive Moment Estimation**, combines [Adam](https://paperswithcode.com/method/adam) and [Nesterov Momentum](https://paperswithcode.com/method/nesterov-accelerated-gradient). The update rule is of the form:
$$ \theta\_{t+1} = \theta\_{t} - \frac{\eta}{\sqrt{\hat{v}\_{t}}+\epsilo... |
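A single NAdam update step can be sketched as follows. This is a minimal NumPy version; the hyperparameter defaults follow common implementations and are assumptions here.

```python
import numpy as np

def nadam_step(theta, g, m, v, t, lr=0.002, b1=0.9, b2=0.999, eps=1e-8):
    # Standard Adam first/second moment updates with bias correction
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    # Nesterov look-ahead: blend bias-corrected momentum with the raw gradient
    m_bar = b1 * m_hat + (1 - b1) * g / (1 - b1 ** t)
    theta = theta - lr * m_bar / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
theta, m, v = nadam_step(theta, g=0.5, m=m, v=v, t=1)
```

The only change relative to Adam is the `m_bar` line, which applies the momentum coefficient one step ahead.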
Given the following machine learning model name: ProxylessNet-GPU, provide a description of the model | **ProxylessNet-GPU** is a convolutional neural network architecture learnt with the [ProxylessNAS](https://paperswithcode.com/method/proxylessnas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) algorithm that is optimized for GPU devices. It uses inverted residual blocks (MBC... |
Given the following machine learning model name: LOGAN, provide a description of the model | **LOGAN** is a generative adversarial network that uses a latent optimization approach using [natural gradient descent](https://paperswithcode.com/method/natural-gradient-descent) (NGD). For the Fisher matrix in NGD, the authors use the empirical Fisher $F'$ with Tikhonov damping:
$$ F' = g \cdot g^{T} + \beta{I} $$... |
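The damped empirical Fisher above is a rank-one matrix plus a scaled identity, so the natural gradient direction can be computed directly. A small illustrative sketch (the gradient values and damping coefficient are arbitrary):

```python
import numpy as np

# Empirical Fisher with Tikhonov damping: F' = g g^T + beta * I
g = np.array([0.5, -0.2, 0.1])
beta = 0.1
F = np.outer(g, g) + beta * np.eye(3)

# Natural gradient direction: solve F' * delta = g
delta = np.linalg.solve(F, g)
```

The `beta * I` term keeps `F` well-conditioned even though `g g^T` alone is rank one and singular.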
Given the following machine learning model name: Feature Information Entropy Regularized Cross Entropy, provide a description of the model | **FIERCE** (Feature Information Entropy Regularized Cross Entropy) is an entropic regularization applied on the **feature** space. |
Given the following machine learning model name: GLM, provide a description of the model | **GLM** is a bilingual (English and Chinese) pre-trained transformer-based language model that follows the traditional decoder-only autoregressive architecture. It leverages autoregressive blank infilling as its training objective. |
Given the following machine learning model name: Class Attention, provide a description of the model | A **Class Attention** layer, or **CA Layer**, is an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) for [vision transformers](https://paperswithcode.com/methods/category/vision-transformer) used in [CaiT](https://paperswithcode.com/method/cait) that aims to extract information ... |
Given the following machine learning model name: Sparse Sinkhorn Attention, provide a description of the model | **Sparse Sinkhorn Attention** is an attention mechanism that reduces the memory complexity of the [dot-product attention mechanism](https://paperswithcode.com/method/scaled) and is capable of learning sparse attention outputs. It is based on the idea of differentiable sorting of internal representations within the self... |
Given the following machine learning model name: Feature-Centric Voting, provide a description of the model | |
Given the following machine learning model name: RoI Tanh-polar Transform, provide a description of the model | |
Given the following machine learning model name: NPID++, provide a description of the model | **NPID++** is a self-supervised learning method based on non-parametric instance discrimination. It improves upon [NPID](https://paperswithcode.com/method/npid) by using more negative samples and training for more epochs. |
Given the following machine learning model name: NICE-SLAM: Neural Implicit Scalable Encoding for SLAM, provide a description of the model | **NICE-SLAM** is a dense RGB-D SLAM system that combines neural implicit decoders with hierarchical grid-based representations, which can be applied to large-scale scenes.
Neural implicit representations have recently shown encouraging results in various domains, including promising progress in simultaneous localization a... |
Given the following machine learning model name: Synchronized Batch Normalization, provide a description of the model | **Synchronized Batch Normalization (SyncBN)** is a type of [batch normalization](https://paperswithcode.com/method/batch-normalization) used for multi-GPU training. Standard batch normalization only normalizes the data within each device (GPU). SyncBN normalizes the input within the whole mini-batch. |
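The difference between per-device and synchronized statistics can be made concrete with a toy example. This is a conceptual sketch (plain NumPy, no learnable scale/shift, two simulated "devices"), not a distributed implementation:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Normalize each feature using the statistics of the given batch
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))          # full mini-batch of 8 samples

# Standard BN: each of two "devices" normalizes only its own 4-sample shard
per_device = np.concatenate([batch_norm(s) for s in np.split(x, 2)])

# SyncBN: statistics are computed over the whole mini-batch
synced = batch_norm(x)
```

With small per-device batches the shard statistics are noisy, which is exactly the regime where synchronizing across devices helps.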
Given the following machine learning model name: Cascade Corner Pooling, provide a description of the model | **Cascade Corner Pooling** is a pooling layer for object detection that builds upon the [corner pooling](https://paperswithcode.com/method/corner-pooling) operation. Corners often lie outside the objects, so they lack local appearance features. [CornerNet](https://paperswithcode.com/method/cornernet) uses corner pooling... |
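The underlying corner pooling building block can be sketched with directional max-scans. This illustrates plain top-left corner pooling (the operation cascade corner pooling extends), not the cascade variant itself:

```python
import numpy as np

def top_left_corner_pool(f):
    # For each location: max over everything to its right (right-to-left
    # scan) plus max over everything below it (bottom-to-top scan).
    horiz = np.maximum.accumulate(f[:, ::-1], axis=1)[:, ::-1]
    vert = np.maximum.accumulate(f[::-1, :], axis=0)[::-1, :]
    return horiz + vert

f = np.array([[1., 0., 2.],
              [0., 3., 0.],
              [4., 0., 0.]])
pooled = top_left_corner_pool(f)
```

At the top-left corner position `(0, 0)` the output aggregates the strongest responses along its row (2.0) and column (4.0), even though the corner itself sits on weak features.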
Given the following machine learning model name: Baidu Dependency Parser, provide a description of the model | **DDParser**, or **Baidu Dependency Parser**, is a Chinese dependency parser trained on a large-scale manually labeled dataset called Baidu Chinese Treebank (DuCTB).
For inputs, for the $i$ th word, its input vector $e_{i}$ is the concatenation of the word embedding and character-level representation:
$$
e\_{i}=... |
Given the following machine learning model name: Masked Convolution, provide a description of the model | A **Masked Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) which masks certain pixels so that the model can only predict based on pixels already seen. This type of convolution was introduced with [PixelRNN](https://paperswithcode.com/method/pixelrnn) generative models, where an i... |
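The raster-scan mask applied to the kernel weights can be written down directly. A sketch of the two standard mask types from PixelRNN/PixelCNN-style models (type "A" for the first layer, "B" for subsequent layers):

```python
import numpy as np

def raster_scan_mask(k, mask_type="A"):
    # Zero out kernel weights that would see "future" pixels in raster order.
    # Type A (first layer) also masks the centre pixel; type B keeps it.
    mask = np.ones((k, k), dtype=np.float32)
    c = k // 2
    start = c + 1 if mask_type == "B" else c
    mask[c, start:] = 0.0   # right of (and, for type A, at) the centre
    mask[c + 1:, :] = 0.0   # all rows below the centre
    return mask

mask_a = raster_scan_mask(3, "A")
mask_b = raster_scan_mask(3, "B")
```

The mask is multiplied elementwise into the convolution weights before each forward pass, so a pixel's prediction never depends on pixels that come after it in generation order.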
Given the following machine learning model name: Retrace, provide a description of the model | **Retrace** is an off-policy Q-value estimation algorithm which has guaranteed convergence for a target and behaviour policy $\left(\pi, \beta\right)$. With off-policy rollout for TD learning, we must use importance sampling for the update:
$$ \Delta{Q}^{\text{imp}}\left(S\_{t}, A\_{t}\right) = \gamma^{t}\prod\_{1\l... |
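Retrace replaces the unbounded importance-sampling product above with truncated per-step ratios $c_{s} = \lambda \min\left(1, \frac{\pi\left(A_{s} \mid S_{s}\right)}{\beta\left(A_{s} \mid S_{s}\right)}\right)$, which can be sketched as:

```python
def retrace_coefficient(pi_probs, beta_probs, lam=1.0):
    # Each ratio is truncated at 1 before multiplying, so the product
    # cannot explode the way a plain importance-sampling product can.
    c = 1.0
    for p, b in zip(pi_probs, beta_probs):
        c *= lam * min(1.0, p / b)
    return c

# pi assigns higher probability than beta at step 1: ratio truncated to 1
c = retrace_coefficient(pi_probs=[0.9, 0.5], beta_probs=[0.3, 0.5])
```

Here the untruncated importance product would be 3.0, while the Retrace coefficient stays at 1.0, bounding the variance of the update.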