prompt | description |
|---|---|
Given the following machine learning model name: MCKERNEL, provide a description of the model | McKernel introduces a framework to use kernel approximations in the mini-batch setting with Stochastic Gradient Descent ([SGD](https://paperswithcode.com/method/sgd)) as an alternative to Deep Learning.
The core library was developed in 2014 as an integral part of a Master of Science thesis [1,2] at Carnegie Mellon an... |
Given the following machine learning model name: A2C, provide a description of the model | **A2C**, or **Advantage Actor Critic**, is a synchronous version of the [A3C](https://paperswithcode.com/method/a3c) policy gradient method. As an alternative to the asynchronous implementation of A3C, A2C is a synchronous, deterministic implementation that waits for each actor to finish its segment of experience befor... |
Given the following machine learning model name: Inverted Residual Block, provide a description of the model | An **Inverted Residual Block**, sometimes called an **MBConv Block**, is a type of residual block used for image models that uses an inverted structure for efficiency reasons. It was originally proposed for the [MobileNetV2](https://paperswithcode.com/method/mobilenetv2) CNN architecture. It has since been reused for s... |
Given the following machine learning model name: WaveGlow, provide a description of the model | **WaveGlow** is a flow-based generative model that generates audio by sampling from a distribution. Specifically, samples are taken from a zero-mean spherical Gaussian with the same number of dimensions as the desired output, and those samples are put through a series of layers that transform the simple distribution to... |
Given the following machine learning model name: Dreamix: video diffusion models are general video editors, provide a description of the model | |
Given the following machine learning model name: Sliced Iterative Generator, provide a description of the model | The **Sliced Iterative Generator (SIG)** is an iterative generative model that is a Normalizing Flow (NF), but shares the advantages of Generative Adversarial Networks (GANs). The model is based on iterative Optimal Transport of a series of 1D slices through the data space, matching on each slice the probability distri... |
Given the following machine learning model name: Virtual Data Augmentation, provide a description of the model | **Virtual Data Augmentation**, or **VDA**, is a framework for robustly fine-tuning pre-trained language models. Based on the original token embeddings, a multinomial mixture for augmenting virtual data is constructed, where a masked language model guarantees the semantic relevance and the Gaussian noise provides the aug... |
Given the following machine learning model name: Bilateral Grid, provide a description of the model | Bilateral grid is a new data structure that enables fast edge-aware image processing. It enables edge-aware image manipulations such as local tone mapping on high resolution images in real time.
Source: [Chen et al.](https://people.csail.mit.edu/sparis/publi/2007/siggraph/Chen_07_Bilateral_Grid.pdf)
Image source:... |
Given the following machine learning model name: Attentional Liquid Warping GAN, provide a description of the model | **Attentional Liquid Warping GAN** is a type of generative adversarial network for human image synthesis that utilizes an [AttLWB](https://paperswithcode.com/method/attlwb) block, which is a 3D body mesh recovery module that disentangles pose and shape. To preserve the source information, such as texture, style, color, ... |
Given the following machine learning model name: Inception-ResNet-v2, provide a description of the model | **Inception-ResNet-v2** is a convolutional neural architecture that builds on the Inception family of architectures but incorporates [residual connections](https://paperswithcode.com/method/residual-connection) (replacing the filter concatenation stage of the Inception architecture). |
Given the following machine learning model name: Switchable Normalization, provide a description of the model | **Switchable Normalization** combines three types of statistics estimated channel-wise, layer-wise, and minibatch-wise by using [instance normalization](https://paperswithcode.com/method/instance-normalization), [layer normalization](https://paperswithcode.com/method/layer-normalization), and [batch normalization](http... |
Given the following machine learning model name: MPRNet, provide a description of the model | **MPRNet** is a multi-stage progressive image restoration architecture that progressively learns restoration functions for the degraded inputs, thereby breaking down the overall recovery process into more manageable steps. Specifically, the model first learns the contextualized features using encoder-decoder architectu... |
Given the following machine learning model name: FMix, provide a description of the model | A variant of [CutMix](https://paperswithcode.com/method/cutmix) which randomly samples masks from Fourier space. |
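The mask-sampling idea can be sketched without a full FFT. The function below is an invented, illustrative stand-in: instead of FMix's actual inverse-Fourier sampling of low-frequency noise, it sums a few random low-frequency sinusoids and thresholds them so that roughly a `frac` share of pixels is 1; the name and parameters are assumptions, not the paper's API.

```python
import math, random

def low_freq_mask(h, w, n_modes=3, frac=0.5, rng=random):
    # Sum a few random low-frequency sinusoids over the image grid
    # (frequencies under one cycle across the image), then threshold at
    # the (1 - frac) quantile to get a smooth, blobby binary mask.
    waves = [(rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(0, 2 * math.pi))
             for _ in range(n_modes)]
    g = [[sum(math.sin(2 * math.pi * (a * i / h + b * j / w) + p)
              for a, b, p in waves) for j in range(w)] for i in range(h)]
    flat = sorted(v for row in g for v in row)
    thresh = flat[int((1 - frac) * (h * w - 1))]
    return [[1 if v >= thresh else 0 for v in row] for row in g]

mask = low_freq_mask(6, 6, rng=random.Random(42))
print(sum(sum(row) for row in mask))  # roughly half the pixels are 1
```

Because the field is smooth, the resulting 0/1 regions are contiguous blobs rather than axis-aligned boxes, which is the qualitative difference from CutMix.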
Given the following machine learning model name: ARM-Net, provide a description of the model | ARM-Net is an adaptive relation modeling network tailored for structured data, and a lightweight framework ARMOR based on ARM-Net for relational data analytics. The key idea is to model feature interactions with cross features selectively and dynamically, by first transforming the input features into exponential space,... |
Given the following machine learning model name: Swapping Assignments between Views, provide a description of the model | **SwAV**, or **Swapping Assignments Between Views**, is a self-supervised learning approach that takes advantage of contrastive methods without requiring pairwise comparisons to be computed. Specifically, it simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augm... |
Given the following machine learning model name: ENet, provide a description of the model | **ENet** is a semantic segmentation architecture which utilises a compact encoder-decoder architecture. Some design choices include:
1. Using the [SegNet](https://paperswithcode.com/method/segnet) approach to downsampling by saving indices of elements chosen in max
pooling layers, and using them to produce sparse up... |
Given the following machine learning model name: Hybrid Firefly and Particle Swarm Optimization, provide a description of the model | **Hybrid Firefly and Particle Swarm Optimization (HFPSO)** is a metaheuristic optimization algorithm that combines strong points of firefly and particle swarm optimization. HFPSO tries to determine the start of the local search process properly by checking the previous global best fitness values.
[Click Here for the... |
Given the following machine learning model name: Multiplicative LSTM, provide a description of the model | A **Multiplicative LSTM (mLSTM)** is a recurrent neural network architecture for sequence modelling that combines the long short-term memory ([LSTM](https://paperswithcode.com/method/lstm)) and multiplicative recurrent neural network ([mRNN](https://paperswithcode.com/method/mrnn)) architectures. The mRNN and LSTM arc... |
Given the following machine learning model name: CSPDarknet53, provide a description of the model | **CSPDarknet53** is a convolutional neural network and backbone for object detection that uses [DarkNet-53](https://paperswithcode.com/method/darknet-53). It employs a CSPNet strategy to partition the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a split a... |
Given the following machine learning model name: Composite Backbone Network, provide a description of the model | **CBNet** is a backbone architecture that consists of multiple identical backbones (specially called Assistant Backbones and Lead Backbone) and composite connections between neighbor backbones. From left to right, the output of each stage in an Assistant Backbone, namely higher-level
features, flows to the parallel st... |
Given the following machine learning model name: VisualBERT, provide a description of the model | VisualBERT aims to reuse self-attention to implicitly align elements of the input text and regions in the input image. Visual embeddings are used to model images, where each representation corresponds to a bounding region in the image obtained from an object detector. These visual embeddings are constructed by summin... |
Given the following machine learning model name: OverFeat, provide a description of the model | **OverFeat** is a classic type of convolutional neural network architecture, employing [convolution](https://paperswithcode.com/method/convolution), pooling and fully connected layers. The Figure to the right shows the architectural details. |
Given the following machine learning model name: Dilated Convolution, provide a description of the model | **Dilated Convolutions** are a type of [convolution](https://paperswithcode.com/method/convolution) that “inflate” the kernel by inserting holes between the kernel elements. An additional parameter $l$ (dilation rate) indicates how much the kernel is widened. There are usually $l-1$ spaces inserted between kernel eleme... |
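A minimal 1D sketch of the idea (illustrative, not from the source): with dilation rate $l$, kernel taps are read $l$ positions apart, so a kernel of size $k$ covers an effective span of $(k-1)l + 1$ inputs.

```python
def dilated_conv1d(x, w, dilation):
    # Valid-mode 1D convolution (cross-correlation) with dilated taps:
    # tap j of the kernel reads input position i + j * dilation.
    k = len(w)
    span = (k - 1) * dilation + 1  # effective receptive field of the kernel
    return [sum(w[j] * x[i + j * dilation] for j in range(k))
            for i in range(len(x) - span + 1)]

# A size-3 kernel with dilation 2 spans 5 inputs:
print(dilated_conv1d([1, 2, 3, 4, 5, 6], [1, 1, 1], dilation=2))  # [9, 12]
```

With `dilation=1` this reduces to an ordinary convolution, which is why dilation can be added to existing architectures without changing parameter counts.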
Given the following machine learning model name: Residual Multi-Layer Perceptrons, provide a description of the model | **Residual Multi-Layer Perceptrons**, or **ResMLP**, is an architecture built entirely upon [multi-layer perceptrons](https://paperswithcode.com/methods/category/feedforward-networks) for image classification. It is a simple [residual network](https://paperswithcode.com/method/residual-connection) that alternates (i) a... |
Given the following machine learning model name: Temporal Distribution Matching, provide a description of the model | **Temporal Distribution Matching**, or **TDM**, is a module used in the [AdaRNN](https://paperswithcode.com/method/adarnn) architecture to match the distributions of the discovered periods to build a time series prediction model $\mathcal{M}$. Given the learned time periods, the TDM module is designed to learn the comm... |
Given the following machine learning model name: CoordConv, provide a description of the model | A **CoordConv** layer is a simple extension to the standard convolutional layer. It has the same functional signature as a convolutional layer, but accomplishes the mapping by first concatenating extra channels to the incoming representation. These channels contain hard-coded coordinates, the most basic version of whic... |
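The concatenation step can be sketched in pure Python (illustrative; the function name is invented). The most basic version appends row and column indices rescaled to $[-1, 1]$ as two extra channels:

```python
def add_coord_channels(channels, h, w):
    # Append two hard-coded coordinate channels (row index, column index),
    # each rescaled to [-1, 1] -- the most basic CoordConv variant.
    def norm(v, n):
        return 2.0 * v / (n - 1) - 1.0 if n > 1 else 0.0
    i_ch = [[norm(i, h) for _ in range(w)] for i in range(h)]
    j_ch = [[norm(j, w) for j in range(w)] for _ in range(h)]
    return channels + [i_ch, j_ch]

# A single 3x3 input channel becomes 3 channels after the concat step:
print(len(add_coord_channels([[[0.0] * 3] * 3], 3, 3)))  # 3
```

A standard convolution applied afterward can then condition its filters on absolute position, which a plain convolution cannot do.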
Given the following machine learning model name: Temporal Word Embeddings with a Compass, provide a description of the model | TWEC is an efficient method to generate temporal word embeddings, based on a simple heuristic: first train an atemporal word embedding (the compass), then use this embedding to freeze one of the layers of the CBOW architecture. The frozen architecture is then used to train time-specific slices that... |
Given the following machine learning model name: Embedded Gaussian Affinity, provide a description of the model | **Embedded Gaussian Affinity** is a type of affinity or self-similarity function between two points $\mathbf{x\_{i}}$ and $\mathbf{x\_{j}}$ that uses a Gaussian function in an embedding space:
$$ f\left(\mathbf{x\_{i}}, \mathbf{x\_{j}}\right) = e^{\theta\left(\mathbf{x\_{i}}\right)^{T}\phi\left(\mathbf{x\_{j}}\right... |
Given the following machine learning model name: Multi-Head Linear Attention, provide a description of the model | **Multi-Head Linear Attention** is a type of linear multi-head self-attention module, proposed with the [Linformer](https://paperswithcode.com/method/linformer) architecture. The main idea is to add two linear projection matrices $E\_{i}, F\_{i} \in \mathbb{R}^{n\times{k}}$ when computing key and value. We first projec... |
Given the following machine learning model name: Temporal ROIAlign, provide a description of the model | **Temporal ROI Align** is an operator for extracting features from other frames' feature maps for current frame proposals by utilizing feature similarity. Considering the features of the same object instance are highly similar among frames in a video, the proposed operator implicitly extracts the most similar ROI featu... |
Given the following machine learning model name: CornerNet-Squeeze Hourglass, provide a description of the model | **CornerNet-Squeeze Hourglass** is a convolutional neural network and object detection backbone used in the [CornerNet-Squeeze](https://paperswithcode.com/method/cornernet-squeeze) object detector. It uses a modified [hourglass module](https://paperswithcode.com/method/hourglass-module) that makes use of a [fire module... |
Given the following machine learning model name: Self-Organizing Map, provide a description of the model | The **Self-Organizing Map (SOM)**, also commonly known as the Kohonen network (Kohonen 1982, Kohonen 2001), is a computational method for the visualization and analysis of high-dimensional data, especially experimentally acquired information.
Extracted from [scholarpedia](http://www.scholarpedia.org/article/Self-organizi... |
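A single SOM training step can be sketched as follows (an illustrative pure-Python sketch; the Gaussian neighborhood and learning-rate choices here are conventional defaults, not details from the source): find the best-matching unit (BMU) for an input, then pull every unit's weight vector toward the input, weighted by its grid distance to the BMU.

```python
import math

def som_step(weights, x, lr=0.5, sigma=1.0):
    # weights: dict mapping grid coordinates (i, j) -> weight vector.
    # Find the best-matching unit (BMU), then move every unit toward x,
    # scaled by a Gaussian neighborhood centered on the BMU.
    def dist(w):
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    bmu = min(weights, key=lambda ij: dist(weights[ij]))
    for (i, j), w in weights.items():
        g = math.exp(-((i - bmu[0]) ** 2 + (j - bmu[1]) ** 2) / (2 * sigma ** 2))
        weights[(i, j)] = [wi + lr * g * (xi - wi) for wi, xi in zip(w, x)]
    return bmu

grid = {(i, j): [0.5] for i in range(2) for j in range(2)}
print(som_step(grid, [1.0]))  # grid coordinates of the winning unit
```

Repeating this over many inputs, while shrinking `lr` and `sigma`, is what produces the topology-preserving map.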
Given the following machine learning model name: OPT, provide a description of the model | **OPT** is a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters. The model uses an AdamW optimizer and weight decay of 0.1. It follows a linear learning rate schedule, warming up from 0 to the maximum learning rate over the first 2000 steps in OPT-175B, or over 375M tokens in the smalle... |
Given the following machine learning model name: Wasserstein Embedding for Graph Learning, provide a description of the model | |
Given the following machine learning model name: Hyper HyperNetwork, provide a description of the model | |
Given the following machine learning model name: Dynamic Memory Network, provide a description of the model | A **Dynamic Memory Network** is a neural network architecture which processes input sequences and questions, forms episodic memories, and generates relevant answers. Questions trigger an iterative attention process which allows the model to condition its attention on the inputs and the result of previous iterations. Th... |
Given the following machine learning model name: Learning From Multiple Experts, provide a description of the model | **Learning From Multiple Experts** is a self-paced knowledge distillation framework that aggregates the knowledge from multiple 'Experts' to learn a unified student model. Specifically, the proposed framework involves two levels of adaptive learning schedules: Self-paced Expert Selection and Curriculum Instance Selecti... |
Given the following machine learning model name: Harris Hawks optimization, provide a description of the model | [HHO](https://aliasgharheidari.com/HHO.html) is a popular swarm-based, gradient-free optimization algorithm with several active and time-varying phases of exploration and exploitation. The algorithm was initially published in the journal Future Generation Computer Systems (FGCS) in 2019, and from the first ... |
Given the following machine learning model name: FairMOT, provide a description of the model | **FairMOT** is a model for multi-object tracking which consists of two homogeneous branches to predict pixel-wise objectness scores and re-ID features. The achieved fairness between the tasks is used to achieve high levels of detection and tracking accuracy. The detection branch is implemented in an anchor-free style w... |
Given the following machine learning model name: Monocular Real-Time Volumetric Performance Capture, provide a description of the model | |
Given the following machine learning model name: Self-supervised Equivariant Attention Mechanism, provide a description of the model | **Self-supervised Equivariant Attention Mechanism**, or **SEAM**, is an attention mechanism for weakly supervised semantic segmentation. The SEAM applies consistency regularization on CAMs from various transformed images to provide self-supervision for network learning. To further improve the network prediction consist... |
Given the following machine learning model name: Ensemble Clustering, provide a description of the model | Ensemble clustering, also called consensus clustering, has been attracting much attention in recent years, aiming to combine multiple base clusterings into a better and more robust consensus clustering. Due to its good performance, ensemble clustering plays a vital role in many research areas, such as community det... |
Given the following machine learning model name: Position-Sensitive RoI Pooling, provide a description of the model | **Position-Sensitive RoI Pooling layer** aggregates the outputs of the last convolutional layer and generates scores for each RoI. Unlike [RoI Pooling](https://paperswithcode.com/method/roi-pooling), PS RoI Pooling conducts selective pooling, and each of the $k$ × $k$ bin aggregates responses from only one score map ou... |
Given the following machine learning model name: PatchGAN, provide a description of the model | **PatchGAN** is a type of discriminator for generative adversarial networks which only penalizes structure at the scale of local image patches. The PatchGAN discriminator tries to classify if each $N \times N$ patch in an image is real or fake. This discriminator is run convolutionally across the image, averaging all r... |
Given the following machine learning model name: SEED RL, provide a description of the model | **SEED** (Scalable, Efficient, Deep-RL) is a scalable reinforcement learning agent. It utilizes an architecture that features centralized inference and an optimized communication layer. SEED adopts two state of the art distributed algorithms, [IMPALA](https://paperswithcode.com/method/impala)/[V-trace](https://paperswi... |
Given the following machine learning model name: Metric mixup, provide a description of the model | A generic way of representing and interpolating labels, which allows straightforward extension of any kind of [mixup](https://paperswithcode.com/method/mixup) to deep metric learning for a large class of loss functions. |
Given the following machine learning model name: mBARTHez, provide a description of the model | **BARThez** is a self-supervised transfer learning model for the French language based on [BART](https://paperswithcode.com/method/bart). Compared to existing [BERT](https://paperswithcode.com/method/bert)-based French language models such as [CamemBERT](https://paperswithcode.com/paper/camembert-a-tasty-french-languag... |
Given the following machine learning model name: SqueezeBERT, provide a description of the model | **SqueezeBERT** is an efficient architectural variant of [BERT](https://paperswithcode.com/method/bert) for natural language processing that uses [grouped convolutions](https://paperswithcode.com/method/grouped-convolution). It is much like BERT-base, but with positional feedforward connection layers implemented as con... |
Given the following machine learning model name: Viewmaker Network, provide a description of the model | **Viewmaker Network** is a type of generative model that learns to produce input-dependent views for contrastive learning. This network is trained jointly with an encoder network. The viewmaker network is trained adversarially to create views which increase the contrastive loss of the encoder network. Rather than direc... |
Given the following machine learning model name: ConViT, provide a description of the model | **ConViT** is a type of [vision transformer](https://paperswithcode.com/method/vision-transformer) that uses a gated positional self-attention module ([GPSA](https://paperswithcode.com/method/gpsa)), a form of positional self-attention which can be equipped with a “soft” convolutional inductive bias. The GPSA layers ar... |
Given the following machine learning model name: Spatiotemporal Point Inference Network, provide a description of the model | |
Given the following machine learning model name: Chain-of-thought prompting, provide a description of the model | Chain-of-thought prompts contain a series of intermediate reasoning steps, and they are shown to significantly improve the ability of large language models to perform certain tasks that involve complex reasoning (e.g., arithmetic, commonsense reasoning, symbolic reasoning, etc.) |
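A minimal illustration of the idea: the prompt includes a worked exemplar whose answer spells out intermediate steps, so the model is encouraged to reason step by step on the new question before giving its final answer (the exemplar below is written in the style popularized by chain-of-thought work; it is illustrative, not taken from the source).

```python
# Few-shot chain-of-thought prompt: one worked exemplar with explicit
# intermediate reasoning, followed by the question we actually want answered.
cot_prompt = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls. 5 + 6 = 11. The answer is 11.

Q: A baker makes 23 muffins and sells 17 of them. How many muffins are left?
A:"""

print(cot_prompt.splitlines()[0])
```

The contrast with standard few-shot prompting is only in the exemplar answers: they show the reasoning chain rather than just the final number.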
Given the following machine learning model name: BlendMask, provide a description of the model | **BlendMask** is an [instance segmentation framework](https://paperswithcode.com/methods/category/instance-segmentation-models) built on top of the[ FCOS](https://paperswithcode.com/method/fcos) object detector. The bottom module uses either backbone or [FPN](https://paperswithcode.com/method/fpn) features to predict a... |
Given the following machine learning model name: EmbraceNet: A robust deep learning architecture for multimodal classification, provide a description of the model | |
Given the following machine learning model name: Recurrent Trend Predictive Neural Network, provide a description of the model | A neural network model to automatically capture trends in time-series data for improved prediction/forecasting performance |
Given the following machine learning model name: Residual Block, provide a description of the model | **Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.
Formally, denoting the desired underlying mapping as $\mat... |
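The mapping can be sketched in a couple of lines (illustrative; `f` stands in for the block's learned layers): denoting the desired underlying mapping as $H(x)$, the block learns the residual $F(x) = H(x) - x$ and outputs $F(x) + x$, so the identity mapping is recovered for free when $F$ is zero.

```python
def residual_block(x, f):
    # y = F(x) + x : the block learns the residual F; the skip connection
    # adds the input back, so identity is the easy default.
    return [fi + xi for fi, xi in zip(f(x), x)]

# With F == 0 the block is exactly the identity mapping:
print(residual_block([1.0, 2.0], lambda v: [0.0] * len(v)))  # [1.0, 2.0]
```

This is why very deep stacks of such blocks remain trainable: each block only has to learn a perturbation of identity.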
Given the following machine learning model name: ENet Bottleneck, provide a description of the model | **ENet Bottleneck** is an image model block used in the [ENet](https://paperswithcode.com/method/enet) semantic segmentation architecture. Each block consists of three convolutional layers: a 1 × 1 projection that reduces the dimensionality, a main convolutional layer, and a 1 × 1 expansion. We place [Batch Normalizati... |
Given the following machine learning model name: Fast Sample Re-Weighting, provide a description of the model | **Fast Sample Re-Weighting**, or **FSR**, is a sample re-weighting strategy to tackle problems such as dataset biases, noisy labels and imbalanced classes. It leverages a dictionary (essentially an extra buffer) to monitor the training history reflected by the model updates during meta optimization periodically, and ut... |
Given the following machine learning model name: DropPath, provide a description of the model | Just as [dropout](https://paperswithcode.com/method/dropout) prevents co-adaptation of activations, **DropPath** prevents co-adaptation of parallel paths in networks such as [FractalNets](https://paperswithcode.com/method/fractalnet) by randomly dropping operands of the join layers. This
discourages the network from u... |
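A hedged sketch of a join layer under DropPath (pure Python; the keep-at-least-one rule and the mean join follow FractalNet's description of local drop-path, while the function name and signature are invented for illustration):

```python
import random

def drop_path_join(paths, p_drop=0.5, rng=random):
    # Randomly drop each parallel path's output; always keep at least one,
    # then join the survivors by element-wise averaging.
    kept = [p for p in paths if rng.random() >= p_drop]
    if not kept:
        kept = [rng.choice(paths)]
    n = len(kept)
    return [sum(vals) / n for vals in zip(*kept)]

random.seed(0)
print(drop_path_join([[1.0, 1.0], [3.0, 5.0]], p_drop=0.5))
```

At inference time all paths are kept, analogous to how dropout rescales rather than drops at test time.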
Given the following machine learning model name: One Representation, provide a description of the model | In the OneR method, model input can be one of image, text or image+text, and the CMC objective is combined with the traditional image-text contrastive (ITC) loss. Masked modeling is also carried out for all three input types (i.e., image, text and multi-modal). This framework employs no modality-specific architectural comp... |
Given the following machine learning model name: LipGAN, provide a description of the model | **LipGAN** is a generative adversarial network for generating realistic talking faces conditioned on translated speech. It employs an adversary that measures the extent of lip synchronization in the frames generated by the generator. The system is capable of handling faces in random poses without the need for realignme... |
Given the following machine learning model name: DropPathway, provide a description of the model | **DropPathway** randomly drops an audio pathway during training as a regularization technique for audiovisual recognition models. Specifically, at each training iteration, we drop the Audio pathway altogether with probability $P\_{d}$. This way, we slow down the learning of the Audio pathway and make its learning dyna... |
Given the following machine learning model name: Soft Actor-Critic (Autotuned Temperature), provide a description of the model | **Soft Actor-Critic (Autotuned Temperature)** is a modification of the [SAC](https://paperswithcode.com/method/soft-actor-critic) reinforcement learning algorithm. [SAC](https://paperswithcode.com/method/sac) can suffer from brittleness to the temperature hyperparameter. Unlike in conventional reinforcement learning, wh... |
Given the following machine learning model name: Spatial Attention Module, provide a description of the model | A **Spatial Attention Module** is a module for spatial attention in convolutional neural networks. It generates a spatial attention map by utilizing the inter-spatial relationship of features. Different from the [channel attention](https://paperswithcode.com/method/channel-attention-module), the spatial attention focus... |
Given the following machine learning model name: VGG and variational Model Decomposition, provide a description of the model | |
Given the following machine learning model name: Auto-Classifier, provide a description of the model | |
Given the following machine learning model name: AMSBound, provide a description of the model | **AMSBound** is a variant of the [AMSGrad](https://paperswithcode.com/method/amsgrad) stochastic optimizer which is designed to be more robust to extreme learning rates. Dynamic bounds are employed on learning rates, where the lower and upper bound are initialized as zero and infinity respectively, and they both smooth... |
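The bounding step can be sketched as follows (illustrative; the exact bound schedule below follows the AdaBound-style functions with a `gamma` convergence speed, which is an assumption rather than a detail from this description):

```python
def bounded_step_size(base_lr, v_hat, t, final_lr=0.1, gamma=1e-3, eps=1e-8):
    # Dynamic bounds: near (0, inf) at t = 1, both smoothly converging
    # toward final_lr as t grows (AdaBound-style schedule, assumed here).
    lower = final_lr * (1.0 - 1.0 / (gamma * t + 1.0))
    upper = final_lr * (1.0 + 1.0 / (gamma * t))
    raw = base_lr / (v_hat ** 0.5 + eps)  # Adam-style adaptive step size
    return min(max(raw, lower), upper)

# An extreme step caused by a tiny second-moment estimate is clipped
# down to the (still large, early-training) upper bound:
print(bounded_step_size(0.001, v_hat=1e-12, t=1))
```

Clipping the per-parameter step into this shrinking corridor is what protects training from the extreme learning rates that destabilize plain adaptive methods.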
Given the following machine learning model name: Multi-Query Attention, provide a description of the model | Multi-head attention consists of multiple attention layers (heads) in parallel with different linear
transformations on the queries, keys, values and outputs. **Multi-query attention** is identical except that the
different heads share a single set of keys and values. |
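One practical consequence is a much smaller decoding-time key/value cache, since every head reads the same keys and values. A back-of-the-envelope sketch (the helper name and the model dimensions are illustrative assumptions):

```python
def kv_cache_bytes(n_layers, n_tokens, d_head, n_kv_heads, bytes_per_el=2):
    # K and V caches each hold n_layers * n_tokens * n_kv_heads * d_head
    # elements; the leading 2 accounts for storing both K and V.
    return n_layers * n_tokens * 2 * n_kv_heads * d_head * bytes_per_el

mha = kv_cache_bytes(32, 2048, 128, n_kv_heads=32)  # one K/V set per head
mqa = kv_cache_bytes(32, 2048, 128, n_kv_heads=1)   # heads share one K/V set
print(mha // mqa)  # 32x smaller cache with 32 heads
```

The queries (and output projections) keep their full head count, so representational capacity on the query side is unchanged.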
Given the following machine learning model name: Local Relation Network, provide a description of the model | The **Local Relation Network** (**LR-Net**) is a network built with local relation layers which represent a feature image extractor. This feature extractor adaptively determines aggregation weights based on the compositional relationship of local pixel pairs. |
Given the following machine learning model name: Linear Layer, provide a description of the model | A **Linear Layer** is a projection $\mathbf{XW + b}$. |
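In index form this is $y_{ij} = \sum_k X_{ik} W_{kj} + b_j$; a tiny pure-Python sketch (illustrative):

```python
def linear(X, W, b):
    # y[i][j] = sum_k X[i][k] * W[k][j] + b[j]
    return [[sum(x_k * W[k][j] for k, x_k in enumerate(row)) + b[j]
             for j in range(len(b))] for row in X]

X = [[1.0, 2.0]]
W = [[1.0, 0.0], [0.0, 1.0]]  # identity weight matrix
b = [0.5, -0.5]
print(linear(X, W, b))  # [[1.5, 1.5]]
```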
Given the following machine learning model name: MLP-Mixer, provide a description of the model | The **MLP-Mixer** architecture (or “Mixer” for short) is an image architecture that doesn't use convolutions or self-attention. Instead, Mixer’s architecture is based entirely on multi-layer perceptrons (MLPs) that are repeatedly applied across either spatial locations or feature channels. Mixer relies only on basic ma... |
Given the following machine learning model name: Poisson Flow Generative Models, provide a description of the model | |
Given the following machine learning model name: HyperTree MetaModel, provide a description of the model | Optimizes combinations of various neural network models for multimodal data with Bayesian optimization. |
Given the following machine learning model name: Tanh Exponential Activation Function, provide a description of the model | Lightweight or mobile neural networks used for real-time computer vision tasks contain fewer parameters than normal networks, which leads to constrained performance. In this work, we propose a novel activation function named the Tanh Exponential Activation Function (TanhExp), which can improve the performance of these ... |
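TanhExp is defined as $f(x) = x \tanh(e^{x})$, which a short sketch makes concrete (illustrative; note that for large positive $x$ the function approaches the identity, while negative inputs are damped rather than cut off):

```python
import math

def tanhexp(x):
    # TanhExp activation: f(x) = x * tanh(exp(x))
    return x * math.tanh(math.exp(x))

print(tanhexp(0.0))   # 0.0
print(tanhexp(-2.0))  # small negative value: negatives are damped, not zeroed
```

Like Mish and Swish, the smooth non-monotonic negative region is credited with better gradient flow than ReLU's hard cutoff.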
Given the following machine learning model name: VoVNetV2, provide a description of the model | **VoVNetV2** is a convolutional neural network that improves upon [VoVNet](https://paperswithcode.com/method/vovnet) with two effective strategies: (1) [residual connection](https://paperswithcode.com/method/residual-connection) for alleviating the optimization problem of larger VoVNets and (2) effective Squeeze-Excita... |
Given the following machine learning model name: Part-based Convolutional Baseline, provide a description of the model | |
Given the following machine learning model name: Attention Free Transformer, provide a description of the model | **Attention Free Transformer**, or **AFT**, is an efficient variant of a [multi-head attention module](https://paperswithcode.com/method/multi-head-attention) that eschews [dot product self attention](https://paperswithcode.com/method/scaled). In an AFT layer, the key and value are first combined with a set of learned ... |
Given the following machine learning model name: Assemble-ResNet, provide a description of the model | **Assemble-ResNet** is a modification to the [ResNet](https://paperswithcode.com/method/resnet) architecture with several tweaks including using [ResNet-D](https://paperswithcode.com/method/resnet-d), channel attention, [anti-alias downsampling](https://paperswithcode.com/method/anti-alias-downsampling), and Big Little... |
Given the following machine learning model name: 3D ResNet-RS, provide a description of the model | **3D ResNet-RS** is an architecture and scaling strategy for 3D ResNets for video recognition. The key additions are:
- **3D ResNet-D stem**: The [ResNet-D](https://paperswithcode.com/method/resnet-d) stem is adapted to 3D inputs by using three consecutive [3D convolutional layers](https://paperswithcode.com/method/... |
Given the following machine learning model name: Parallel Layers, provide a description of the model | • Parallel Layers – We use a “parallel” formulation in each Transformer block (Wang & Komatsuzaki, 2021), rather than the standard “serialized” formulation. Specifically, the standard formulation can be written as:
y = x + MLP(LayerNorm(x + Attention(LayerNorm(x))))
Whereas the parallel formulation can be ... |
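The two formulations can be compared directly with stand-in sublayers (illustrative pure-Python sketch; `attn` and `mlp` are placeholders for the real sublayers, and the layer norm is an unparameterized simplification):

```python
def layer_norm(x):
    # Simplified, parameter-free layer normalization for illustration.
    m = sum(x) / len(x)
    var = sum((xi - m) ** 2 for xi in x) / len(x)
    return [(xi - m) / (var + 1e-5) ** 0.5 for xi in x]

def serialized_block(x, attn, mlp):
    # y = x + MLP(LayerNorm(x + Attention(LayerNorm(x))))
    h = [xi + ai for xi, ai in zip(x, attn(layer_norm(x)))]
    return [hi + mi for hi, mi in zip(h, mlp(layer_norm(h)))]

def parallel_block(x, attn, mlp):
    # y = x + MLP(LayerNorm(x)) + Attention(LayerNorm(x))
    n = layer_norm(x)  # one shared LayerNorm; both sublayers read it
    return [xi + mi + ai for xi, mi, ai in zip(x, mlp(n), attn(n))]

print(parallel_block([1.0, 2.0, 3.0],
                     lambda v: [0.1 * vi for vi in v],
                     lambda v: [0.2 * vi for vi in v]))
```

Because the MLP and attention branches in the parallel form both read the same normalized input, their matrix multiplications can be fused and run concurrently, which is the source of the reported training speedup.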
Given the following machine learning model name: ResNeXt Block, provide a description of the model | A **ResNeXt Block** is a type of [residual block](https://paperswithcode.com/method/residual-block) used as part of the [ResNeXt](https://paperswithcode.com/method/resnext) CNN architecture. It uses a "split-transform-merge" strategy (branched paths within a single module) similar to an [Inception module](https://paper... |
Given the following machine learning model name: Singular Value Clipping, provide a description of the model | **Singular Value Clipping (SVC)** is an adversarial training technique used by [TGAN](https://paperswithcode.com/method/tgan) to enforce the 1-Lipschitz constraint of the [WGAN](https://paperswithcode.com/method/wgan) objective. It is a constraint to all linear layers in the discriminator that satisfies the spectral no... |
Given the following machine learning model name: Triplet Entropy Loss, provide a description of the model | The Triplet Entropy Loss (TEL) training method aims to leverage the strengths of both Cross Entropy Loss (CEL) and [Triplet loss](https://paperswithcode.com/method/triplet-loss) during training, assuming that this leads to better generalization. The TEL method, however, does not contain a pre-training step,... |
Given the following machine learning model name: Semi-Supervised Knowledge Distillation, provide a description of the model | **Semi-Supervised Knowledge Distillation** is a type of knowledge distillation for person re-identification that exploits weakly annotated data by assigning soft pseudo labels to YouTube-Human to improve models' generalization ability. SSKD first trains a student model (e.g. [ResNet](https://paperswithcode.com/method/r... |
Given the following machine learning model name: MEUZZ, provide a description of the model | **MEUZZ** is a machine learning-based hybrid fuzzer which employs supervised machine learning for adaptive and generalizable seed scheduling -- a prominent factor in determining the yields of hybrid fuzzing. MEUZZ determines which new seeds are expected to produce better fuzzing yields based on the knowledge learned fr... |
Given the following machine learning model name: Online Normalization, provide a description of the model | **Online Normalization** is a normalization technique for training deep neural networks. To define Online Normalization, we replace arithmetic averages over the full dataset with exponentially decaying averages of online samples. The decay factors $\alpha\_{f}$ and $\alpha\_{b}$ for forward and backward passes respe... |
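The replacement of full-dataset averages with exponentially decaying averages can be sketched as a simple streaming estimator. The decay factor and the scalar stream below are illustrative choices, not values from the Online Normalization paper:

```python
import numpy as np

def online_mean_var(samples, alpha=0.99):
    """Exponentially decaying running mean/variance over a stream,
    standing in for arithmetic averages over the full dataset.
    alpha is an illustrative decay factor (window of roughly 1/(1-alpha))."""
    mu, var = 0.0, 1.0
    for x in samples:
        mu = alpha * mu + (1 - alpha) * x
        var = alpha * var + (1 - alpha) * (x - mu) ** 2
    return mu, var

rng = np.random.default_rng(0)
stream = rng.normal(loc=3.0, scale=2.0, size=10_000)
mu, var = online_mean_var(stream)
print(mu, var)  # running estimates drift toward the true mean/variance
```

In the actual method, separate decay factors are used for the forward-pass statistics and the backward-pass gradient corrections; the sketch only shows the forward-style running statistics.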
Given the following machine learning model name: GAN-TTS, provide a description of the model | **GAN-TTS** is a generative adversarial network for text-to-speech synthesis. The architecture is composed of a conditional feed-forward generator producing raw speech audio, and an ensemble of discriminators which operate on random windows of different sizes. The discriminators analyze the audio both in terms of gener... |
Given the following machine learning model name: CornerNet-Squeeze, provide a description of the model | **CornerNet-Squeeze** is an object detector that extends [CornerNet](https://paperswithcode.com/method/cornernet) with a new compact hourglass architecture that makes use of fire modules with depthwise separable convolutions. |
Given the following machine learning model name: Pansharpening by convolutional neural networks in the full resolution framework, provide a description of the model | In recent years, there has been a growing interest on deep learning-based pansharpening.
Research has mainly focused on architectures.
However, lacking a ground truth, model training is also a major issue.
A popular approach is to train networks in a reduced resolution domain, using the original data as ground truth... |
Given the following machine learning model name: Additive Angular Margin Loss, provide a description of the model | **ArcFace**, or **Additive Angular Margin Loss**, is a loss function used in face recognition tasks. The [softmax](https://paperswithcode.com/method/softmax) is traditionally used in these tasks. However, the softmax loss function does not explicitly optimise the feature embedding to enforce higher similarity for intra... |
Given the following machine learning model name: Reduction-B, provide a description of the model | **Reduction-B** is an image model block used in the [Inception-v4](https://paperswithcode.com/method/inception-v4) architecture. |
Given the following machine learning model name: GloVe Embeddings, provide a description of the model | **GloVe Embeddings** are a type of word embedding that encode the co-occurrence probability ratio between two words as vector differences. GloVe uses a weighted least squares objective $J$ that minimizes the difference between the dot product of the vectors of two words and the logarithm of their number of co-occurrenc... |
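The weighted least squares objective $J$ can be written out directly. This is a small numpy sketch of the standard GloVe loss, $J = \sum_{ij} f(X_{ij})\,(w_i^\top \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij})^2$; the co-occurrence matrix, embedding dimension, and weighting-function constants ($x_{max}=100$, $\alpha=0.75$) are illustrative:

```python
import numpy as np

def glove_loss(W, W_tilde, b, b_tilde, X, x_max=100.0, alpha=0.75):
    """GloVe weighted least squares objective over nonzero co-occurrences:
    J = sum_ij f(X_ij) * (w_i . w~_j + b_i + b~_j - log X_ij)^2"""
    J = 0.0
    for i, j in zip(*np.nonzero(X)):
        f = min((X[i, j] / x_max) ** alpha, 1.0)  # weighting function f(X_ij)
        err = W[i] @ W_tilde[j] + b[i] + b_tilde[j] - np.log(X[i, j])
        J += f * err ** 2
    return J

rng = np.random.default_rng(0)
V, d = 6, 4  # toy vocabulary size and embedding dimension
X = rng.integers(0, 50, size=(V, V)).astype(float)  # toy co-occurrence counts
W, W_t = rng.normal(size=(V, d)), rng.normal(size=(V, d))
b, b_t = np.zeros(V), np.zeros(V)
loss = glove_loss(W, W_t, b, b_t, X)
print(loss >= 0.0)
```

Minimizing $J$ pushes each dot product $w_i^\top \tilde{w}_j$ (plus biases) toward $\log X_{ij}$, which is what makes vector differences encode co-occurrence probability ratios.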
Given the following machine learning model name: Attention Mesh, provide a description of the model | **Attention Mesh** is a neural network architecture for 3D face mesh prediction that uses attention to semantically meaningful regions. Specifically, region-specific heads are employed that transform the feature maps with spatial transformers. |
Given the following machine learning model name: Relation-aware Global Attention, provide a description of the model | Relation-aware global attention (RGA) stresses the importance of global structural information provided by pairwise relations, and uses it to produce attention maps.
RGA comes in two forms, spatial RGA (RGA-S) and channel RGA (RGA-C). RGA-S first reshapes the input feature map $X$ to $C\times (H\times W)$ and t... |
Given the following machine learning model name: Progressive Growing Channel Attentive Non-Local Network, provide a description of the model | Lung cancer classification in screening computed tomography (CT) scans is one of the most crucial tasks for early detection of this disease. Many lives can be saved if we are able to accurately classify malignant/cancerous lung nodules. Consequently, several deep learning based models have been proposed recently to cla... |
Given the following machine learning model name: Graph Finite-State Automaton, provide a description of the model | **Graph Finite-State Automaton**, or **GFSA**, is a differentiable layer for learning graph structure that adds a new edge type (expressed as a weighted adjacency matrix) to a base graph. This layer can be trained end-to-end to add derived relationships (edges) to arbitrary graph-structured data based on performance on... |
Given the following machine learning model name: BLOOMZ, provide a description of the model | **BLOOMZ** is a multitask prompted finetuning (MTF) variant of BLOOM. |
Given the following machine learning model name: CPC v2, provide a description of the model | **Contrastive Predictive Coding v2 (CPC v2)** is a self-supervised learning approach that builds upon the original [CPC](https://paperswithcode.com/method/contrastive-predictive-coding) with several improvements. These improvements include:
- **Model capacity** - The third residual stack of [ResNet](https://paperswi... |
Given the following machine learning model name: Linformer, provide a description of the model | **Linformer** is a linear [Transformer](https://paperswithcode.com/method/transformer) that utilises a linear self-attention mechanism to tackle the self-attention bottleneck with [Transformer models](https://paperswithcode.com/methods/category/transformers). The original [scaled dot-product attention](https://paperswi... |
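The linear self-attention idea can be sketched in a few lines: low-rank projection matrices $E$ and $F$ compress the sequence-length dimension of the keys and values from $n$ down to $k$, so the attention score matrix is $n \times k$ rather than $n \times n$. The shapes and random projections below are illustrative, not learned parameters from the paper:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def linformer_attention(Q, K, V, E, F):
    """Linformer-style attention: E and F (k x n) project the length
    dimension of K and V down to k, giving O(n*k) scores instead of O(n^2)."""
    K_proj = E @ K                                # (k, d)
    V_proj = F @ V                                # (k, d)
    scores = Q @ K_proj.T / np.sqrt(Q.shape[-1])  # (n, k) score matrix
    return softmax(scores) @ V_proj               # (n, d) output

rng = np.random.default_rng(0)
n, d, k = 64, 16, 8  # sequence length, head dim, projected length
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
E, F = rng.normal(size=(k, n)) / n, rng.normal(size=(k, n)) / n
out = linformer_attention(Q, K, V, E, F)
print(out.shape)
```

Because $k$ is a fixed constant independent of $n$, both time and memory scale linearly in sequence length.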
Given the following machine learning model name: Decentralized Distributed Proximal Policy Optimization, provide a description of the model | **Decentralized Distributed Proximal Policy Optimization (DD-PPO)** is a method for distributed reinforcement learning in resource-intensive simulated environments. DD-PPO is distributed (uses multiple machines), decentralized (lacks a centralized server), and synchronous (no computation is ever "stale"), making it con... |