| prompts | description |
|---|---|
Given the following machine learning model name: Self-Attention Guidance, provide a description of the model | |
Given the following machine learning model name: Support-set Based Cross-Supervision, provide a description of the model | **Sscs**, or **Support-set Based Cross-Supervision**, is a module for video grounding which consists of two main components: a discriminative contrastive objective and a generative caption objective. The contrastive objective aims to learn effective representations by contrastive learning, while the caption objective c... |
Given the following machine learning model name: Dorylus, provide a description of the model | **Dorylus** is a distributed system for training graph neural networks which uses cheap CPU servers and Lambda threads. It scales to large billion-edge graphs with low-cost cloud resources. |
Given the following machine learning model name: Graphic Mutual Information, provide a description of the model | **Graphic Mutual Information**, or **GMI**, measures the correlation between input graphs and high-level hidden representations. GMI generalizes the idea of conventional mutual information computations from vector space to the graph domain, where mutual information is measured from the two aspects of node features and topolog... |
Given the following machine learning model name: Dual Graph Convolutional Networks, provide a description of the model | A dual graph convolutional neural network jointly considers the two essential assumptions of semi-supervised learning: (1) local consistency and (2) global consistency. Accordingly, two convolutional neural networks are devised to embed the local-consistency-based and global-consistency-based knowledge, respectively. ... |
Given the following machine learning model name: ShakeDrop, provide a description of the model | **ShakeDrop regularization** extends [Shake-Shake regularization](https://paperswithcode.com/method/shake-shake-regularization) and can be applied not only to [ResNeXt](https://paperswithcode.com/method/resnext) but also [ResNet](https://paperswithcode.com/method/resnet), [WideResNet](https://paperswithcode.com/method/... |
Given the following machine learning model name: Recursive Feature Pyramid, provide a description of the model | A **Recursive Feature Pyramid (RFP)** builds on top of the Feature Pyramid Networks ([FPN](https://paperswithcode.com/method/fpn)) by incorporating extra feedback connections from the FPN layers into the bottom-up backbone layers. Unrolling the recursive structure to a sequential implementation, we obtain a backbone f... |
Given the following machine learning model name: Cross-Scale Non-Local Attention, provide a description of the model | **Cross-Scale Non-Local Attention**, or **CS-NL**, is a non-local attention module for image super-resolution deep networks. It learns to mine long-range dependencies between LR features to larger-scale HR patches within the same feature map. Specifically, suppose we are conducting an s-scale super-resolution with the... |
Given the following machine learning model name: EvoNorms, provide a description of the model | **EvoNorms** are a set of normalization-activation layers that go beyond existing design patterns. Normalization and activation are unified into a single computation graph whose structure is evolved starting from low-level primitives. EvoNorms consist of two series: B series and S series. The B series are batch-dependen... |
Given the following machine learning model name: ESPNetv2, provide a description of the model | **ESPNetv2** is a convolutional neural network that utilises group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. |
Given the following machine learning model name: FlexFlow, provide a description of the model | **FlexFlow** is a deep learning engine that uses guided randomized search of the SOAP (Sample, Operator, Attribute, and Parameter) space to find a fast parallelization strategy for a specific parallel machine. To accelerate this search, FlexFlow introduces a novel execution simulator that can accurately predict a paral... |
Given the following machine learning model name: Clipped Double Q-learning, provide a description of the model | **Clipped Double Q-learning** is a variant of [Double Q-learning](https://paperswithcode.com/method/double-q-learning) that upper-bounds the less biased Q estimate $Q\_{\theta\_{2}}$ by the biased estimate $Q\_{\theta\_{1}}$. This is equivalent to taking the minimum of the two estimates, resulting in the following targ... (a minimal NumPy sketch of the clipped target follows the table) |
Given the following machine learning model name: Pseudoinverse Graph Convolutional Network, provide a description of the model | A [GCN](https://paperswithcode.com/method/gcn) method targeted at the unique spectral properties of dense graphs and hypergraphs, enabled by efficient numerical linear algebra. |
Given the following machine learning model name: Adaptive Masking, provide a description of the model | **Adaptive Masking** is a type of attention mechanism that allows a model to learn its own context size to attend over. For each head in [Multi-Head Attention](https://paperswithcode.com/method/multi-head-attention), a masking function is added to control for the span of the attention. A masking function is a non-incre... |
Given the following machine learning model name: Approximating Spatiotemporal Representations Using a 2DCNN, provide a description of the model | Approximating Spatiotemporal Representations Using a 2DCNN |
Given the following machine learning model name: Sandwich Batch Normalization, provide a description of the model | Sandwich Batch Normalization (**SaBN**) is a frustratingly easy improvement of [Batch Normalization](https://paperswithcode.com/method/batch-normalization) (BN) with only a few lines of code changes. SaBN is motivated by addressing the inherent *feature distribution heterogeneity* that one can be identified in many tas... |
Given the following machine learning model name: Orientation Regularized Network, provide a description of the model | **Orientation Regularized Network** (ORN) is a multi-view image fusion technique for pose estimation. It uses IMU orientations as a structural prior to mutually fuse the image features of each pair of joints linked by IMUs. For example, it uses the features of the elbow to reinforce those of the wrist based on the IMU ... |
Given the following machine learning model name: CrossViT, provide a description of the model | **CrossViT** is a type of [vision transformer](https://paperswithcode.com/method/vision-transformer) that uses a dual-branch architecture to extract multi-scale feature representations for image classification. The architecture combines image patches (i.e. tokens in a [transformer](https://paperswithcode.com/method/tra... |
Given the following machine learning model name: ShuffleNet, provide a description of the model | **ShuffleNet** is a convolutional neural network designed specially for mobile devices with very limited computing power. The architecture utilizes two new operations, pointwise group [convolution](https://paperswithcode.com/method/convolution) and [channel shuffle](https://paperswithcode.com/method/channel-shuffle), t... |
Given the following machine learning model name: Dot-Product Attention, provide a description of the model | **Dot-Product Attention** is an attention mechanism where the alignment score function is calculated as: $$f\_{att}\left(\textbf{h}\_{i}, \textbf{s}\_{j}\right) = h\_{i}^{T}s\_{j}$$ It is equivalent to [multiplicative attention](https://paperswithcode.com/method/multiplicative-attention) (without a trainable weigh... (a NumPy sketch of the score computation follows the table) |
Given the following machine learning model name: Graph Self-Attention, provide a description of the model | **Graph Self-Attention (GSA)** is a self-attention module used in the [BP-Transformer](https://paperswithcode.com/method/bp-transformer) architecture, and is based on the [graph attentional layer](https://paperswithcode.com/method/graph-attentional-layer). For a given node $u$, we update its representation according... |
Given the following machine learning model name: self-DIstillation with NO labels, provide a description of the model | **DINO** (self-distillation with no labels) is a self-supervised learning method that directly predicts the output of a teacher network - built with a momentum encoder - using a standard cross-entropy loss. In the example to the right, DINO is illustrated in the case of a single pair of views $\left(x\_{1}, x\_{2... |
Given the following machine learning model name: Non-Local Operation, provide a description of the model | A **Non-Local Operation** is a component for capturing long-range dependencies with deep neural networks. It is a generalization of the classical non-local mean operation in computer vision. Intuitively a non-local operation computes the response at a position as a weighted sum of the features at all positions in the i... |
Given the following machine learning model name: AutoGAN, provide a description of the model | [Neural architecture search](https://paperswithcode.com/method/neural-architecture-search) (NAS) has witnessed prevailing success in image classification and (very recently) segmentation tasks. In this paper, we present the first preliminary study on introducing the NAS algorithm to generative adversarial networks (GAN... |
Given the following machine learning model name: AlphaZero, provide a description of the model | **AlphaZero** is a reinforcement learning agent for playing board games such as Go, chess, and shogi. |
Given the following machine learning model name: wav2vec Unsupervised, provide a description of the model | **wav2vec-U** is an unsupervised method to train speech recognition models without any labeled data. It leverages self-supervised speech representations to segment unlabeled audio and learn a mapping from these representations to phonemes via adversarial training. Specifically, we learn self-supervised represent... |
Given the following machine learning model name: Slime Mould Algorithm, provide a description of the model | **Slime Mould Algorithm** (**SMA**) is a new stochastic optimizer proposed based on the oscillation mode of slime mould in nature. SMA has several new features with a unique mathematical model that uses adaptive weights to simulate the process of producing positive and negative feedback of the propagation wave of slime... |
Given the following machine learning model name: Laplacian Pyramid, provide a description of the model | A **Laplacian Pyramid** is a linear invertible image representation consisting of a set of band-pass images spaced an octave apart, plus a low-frequency residual. Formally, let $d\left(.\right)$ be a downsampling operation that blurs and decimates a $j \times j$ image $I$ so that $d\left(I\right)$ is a new image of si... |
Given the following machine learning model name: V-trace, provide a description of the model | **V-trace** is an off-policy actor-critic reinforcement learning algorithm that helps tackle the lag between when actions are generated by the actors and when the learner estimates the gradient. Consider a trajectory $\left(x\_{t}, a\_{t}, r\_{t}\right)^{t=s+n}\_{t=s}$ generated by the actor following some policy $\mu$... |
Given the following machine learning model name: H3DNet, provide a description of the model | Code for paper: H3DNet: 3D Object Detection Using Hybrid Geometric Primitives (ECCV 2020) |
Given the following machine learning model name: Demon, provide a description of the model | **Decaying Momentum**, or **Demon**, is a stochastic optimizer motivated by decaying the total contribution of a gradient to all future updates. By decaying the momentum parameter, the total contribution of a gradient to all future updates is decayed. A particular gradient term $g\_{t}$ contributes a total of $\eta\su... |
Given the following machine learning model name: Shapley Additive Explanations, provide a description of the model | **SHAP**, or **SHapley Additive exPlanations**, is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions. Shapley values are approximated using Kernel SH... |
Given the following machine learning model name: Spectral-Normalized Identity Priors, provide a description of the model | **Spectral-Normalized Identity Priors**, or **SNIP**, is a structured pruning approach that penalizes an entire [residual module](https://paperswithcode.com/method/residual-connection) in a [Transformer model](https://paperswithcode.com/method/transformer) toward an identity mapping. It is applicable to any str... |
Given the following machine learning model name: Chained-Tracker, provide a description of the model | **Chained-Tracker**, or **CTracker**, is an online model for multiple-object tracking. It chains paired bounding boxes regression results estimated from overlapping nodes, of which each node covers two adjacent frames. The paired regression is made attentive by object-attention (brought by a detection module) and iden... |
Given the following machine learning model name: Vulnerability-constrained Decoding, provide a description of the model | **Vulnerability-constrained Decoding** is a sequence decoding approach that aims to avoid generating vulnerabilities in generated code. |
Given the following machine learning model name: Structurally Regularized Deep Clustering, provide a description of the model | **Structurally Regularized Deep Clustering**, or **SRDC**, is a deep network based discriminative clustering method for domain adaptation that minimizes the KL divergence between predictive label distribution of the network and an introduced auxiliary one. Replacing the auxiliary distribution with that formed by ground... |
Given the following machine learning model name: Performer, provide a description of the model | **Performer** is a [Transformer](https://paperswithcode.com/methods/category/transformers) architecture which can estimate regular ([softmax](https://paperswithcode.com/method/softmax)) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, w... |
Given the following machine learning model name: ScheduledDropPath, provide a description of the model | **ScheduledDropPath** is a modified version of [DropPath](https://paperswithcode.com/method/droppath). In DropPath, each path in the cell is stochastically dropped with some fixed probability during training. In ScheduledDropPath, each path in the cell is dropped out with a probability that is linearly increased over t... |
Given the following machine learning model name: Channel Squeeze and Spatial Excitation (sSE), provide a description of the model | Inspired by the widely known [spatial squeeze and channel excitation (SE)](https://paperswithcode.com/method/squeeze-and-excitation-block) block, the sSE block performs channel squeeze and spatial excitation, to recalibrate the feature maps spatially and achieve more fine-grained image segmentation. |
Given the following machine learning model name: RPM-Net, provide a description of the model | **RPM-Net** is an end-to-end differentiable deep network for robust point matching using learned features. It preserves robustness of RPM against noisy/outlier points while desensitizing initialization with point correspondences from learned feature distances instead of spatial distances. The network uses the differenti... |
Given the following machine learning model name: Unitary RNN, provide a description of the model | A **Unitary RNN** is a recurrent neural network architecture that uses a unitary hidden-to-hidden matrix. Specifically, it concerns dynamics of the form: $$ h\_{t} = f\left(Wh\_{t-1} + Vx\_{t}\right) $$ where $W$ is a unitary matrix $\left(W^{\dagger}W = I\right)$. The product of unitary matrices is a unitary matrix, s... |
Given the following machine learning model name: Deep Convolutional GAN, provide a description of the model | **DCGAN**, or **Deep Convolutional GAN**, is a generative adversarial network architecture. It uses a couple of guidelines, in particular: replacing any pooling layers with strided convolutions (discriminator) and fractional-strided convolutions (generator); using batchnorm in both the generator and the discrim... |
Given the following machine learning model name: Phase Shuffle, provide a description of the model | **Phase Shuffle** is a technique for removing pitched noise artifacts that come from using transposed convolutions in audio generation models. Phase shuffle is an operation with hyperparameter $n$. It randomly perturbs the phase of each layer’s activations by $-n$ to $n$ samples before input to the next layer (a NumPy sketch follows the table). In th... |
Given the following machine learning model name: Meena, provide a description of the model | **Meena** is a multi-turn open-domain chatbot trained end-to-end on data mined and filtered from public domain social media conversations. This 2.6B parameter neural network is simply trained to minimize perplexity of the next token. A seq2seq model is used with the Evolved [Transformer](https://paperswithcode.com/meth... |
Given the following machine learning model name: LSGAN, provide a description of the model | **LSGAN**, or **Least Squares GAN**, is a type of generative adversarial network that adopts the least squares loss function for the discriminator. Minimizing the objective function of LSGAN yields minimizing the Pearson $\chi^{2}$ divergence. The objective function can be defined as: $$ \min\_{D}V\_{LSGAN}\left(D\r... |
Given the following machine learning model name: MaskFlownet, provide a description of the model | **MaskFlownet** is an asymmetric occlusion-aware feature matching module, which can learn a rough occlusion mask that filters useless (occluded) areas immediately after feature warping without any explicit supervision. The learned occlusion mask can be further fed into a subsequent network cascade with dual feature pyr... |
Given the following machine learning model name: ERNIE, provide a description of the model | **ERNIE** is a transformer-based model consisting of two stacked modules: 1) a textual encoder and 2) a knowledgeable encoder, which is responsible for integrating extra token-oriented knowledge information into textual information. This layer consists of stacked aggregators, designed for encoding both tokens and entities as well... |
Given the following machine learning model name: Precise RoI Pooling, provide a description of the model | **Precise RoI Pooling**, or **PrRoI Pooling**, is a region of interest feature extractor that avoids any quantization of coordinates and has a continuous gradient on bounding box coordinates. Given the feature map $\mathcal{F}$ before RoI/PrRoI Pooling (e.g. from Conv4 in [ResNet](https://paperswithcode.com/method/resnet... |
Given the following machine learning model name: Lovasz-Softmax, provide a description of the model | The **Lovasz-Softmax loss** is a loss function for multiclass semantic segmentation that incorporates the [softmax](https://paperswithcode.com/method/softmax) operation in the Lovasz extension. The Lovasz extension is a means by which we can achieve direct optimization of the mean intersection-over-union loss in neural... |
Given the following machine learning model name: CornerNet-Squeeze Hourglass Module, provide a description of the model | **CornerNet-Squeeze Hourglass Module** is an image model block used in [CornerNet](https://paperswithcode.com/method/cornernet)-Lite that is based on an [hourglass module](https://paperswithcode.com/method/hourglass-module), but uses modified fire modules instead of residual blocks. Other than replacing the residual bl... |
Given the following machine learning model name: Tofu, provide a description of the model | **Tofu** is an intra-layer model parallel system that partitions very large DNN models across multiple GPU devices to reduce per-GPU memory footprint. Tofu is designed to partition a dataflow graph of fine-grained tensor operators used by platforms like MXNet and TensorFlow. To optimally partition different operators i... |
Given the following machine learning model name: Symbolic rule learning, provide a description of the model | Symbolic rule learning methods find regularities in data that can be expressed in the form of 'if-then' rules based on symbolic representations of the data. |
Given the following machine learning model name: Parameterized ReLU, provide a description of the model | A **Parametric Rectified Linear Unit**, or **PReLU**, is an activation function that generalizes the traditional rectified unit with a slope for negative values. Formally: $$f\left(y\_{i}\right) = y\_{i} \text{ if } y\_{i} \ge 0$$ $$f\left(y\_{i}\right) = a\_{i}y\_{i} \text{ if } y\_{i} < 0$$ (a NumPy sketch follows the table) The intuition is... |
Given the following machine learning model name: Mirror-BERT, provide a description of the model | Mirror-BERT converts pretrained language models into effective universal text encoders without any supervision, in 20-30 seconds. It is an extremely simple, fast, and effective contrastive learning technique. It relies on fully identical *or* slightly modified string pairs as positive (i.e., synonymous) fine-tuning exa... |
Given the following machine learning model name: NetAdapt, provide a description of the model | **NetAdapt** is a network shrinking algorithm to adapt a pretrained network to a mobile platform given a real resource budget. NetAdapt can incorporate direct metrics, such as latency and energy, into the optimization to maximize the adaptation performance based on the characteristics of the platform. By using empirica... |
Given the following machine learning model name: Cycle Consistency Loss, provide a description of the model | **Cycle Consistency Loss** is a type of loss used for generative adversarial networks that perform unpaired image-to-image translation. It was introduced with the [CycleGAN](https://paperswithcode.com/method/cyclegan) architecture. For two domains $X$ and $Y$, we want to learn a mapping $G : X \rightarrow Y$ and $F: Y... (a toy NumPy sketch of the cycle loss follows the table) |
Given the following machine learning model name: MATE, provide a description of the model | **MATE** is a [Transformer](https://paperswithcode.com/method/transformer) architecture designed to model the structure of web tables. It uses sparse attention in a way that allows heads to efficiently attend to either rows or columns in a table. Each attention head reorders the tokens by either column or row index and... |
Given the following machine learning model name: Automated Graph Learning, provide a description of the model | Automated graph learning is a method that aims at discovering the best hyper-parameter and neural architecture configuration for different graph tasks/data without manual design. |
Given the following machine learning model name: InceptionTime, provide a description of the model | |
Given the following machine learning model name: Extreme Value Machine, provide a description of the model | |
Given the following machine learning model name: Difference of Gaussian Random Forest, provide a description of the model | |
Given the following machine learning model name: Motion-Encoded Particle Swarm Optimization, provide a description of the model | |
Given the following machine learning model name: SM3, provide a description of the model | Memory-Efficient Adaptive Optimization. Source: https://arxiv.org/abs/1901.11150. Adaptive gradient-based optimizers such as [AdaGrad](https://paperswithcode.com/method/adagrad) and [Adam](https://paperswithcode.com/method/adam) are among the de facto methods of choice in modern machine learning. These methods tun... |
Given the following machine learning model name: ShuffleNet Block, provide a description of the model | A **ShuffleNet Block** is an image model block that utilises a [channel shuffle](https://paperswithcode.com/method/channel-shuffle) operation, along with depthwise convolutions, for an efficient architectural design. It was proposed as part of the [ShuffleNet](https://paperswithcode.com/method/shufflenet) architecture.... |
Given the following machine learning model name: Hermite Polynomial Activation, provide a description of the model | **Hermite Activations** are a type of activation function that uses a smooth finite Hermite polynomial base as a substitute for non-smooth [ReLUs](https://paperswithcode.com/method/relu). Relevant Paper: [Lokhande et al](https://arxiv.org/pdf/1909.05479.pdf) |
Given the following machine learning model name: DeepLabv3, provide a description of the model | **DeepLabv3** is a semantic segmentation architecture that improves upon [DeepLabv2](https://paperswithcode.com/method/deeplabv2) with several modifications. To handle the problem of segmenting objects at multiple scales, modules are designed which employ atrous [convolution](https://paperswithcode.com/method/convoluti... |
Given the following machine learning model name: Computation Redistribution, provide a description of the model | **Computation Redistribution** is a [neural architecture search](https://paperswithcode.com/task/architecture-search) method for [face detection](https://paperswithcode.com/task/face-detection), which reallocates the computation between the backbone, neck and head of the model based on a predefined search methodology.... |
Given the following machine learning model name: Cross-Attention Module, provide a description of the model | The **Cross-Attention** module is an attention module used in [CrossViT](https://paperswithcode.com/method/crossvit) for fusion of multi-scale features. The CLS token of the large branch (circle) serves as a query token to interact with the patch tokens from the small branch through attention. $f\left(·\right)$ and $g\... |
Given the following machine learning model name: FT-Transformer, provide a description of the model | FT-Transformer (Feature Tokenizer + Transformer) is a simple adaptation of the [Transformer](/method/transformer) architecture for the tabular domain. The model (Feature Tokenizer component) transforms all features (categorical and numerical) to tokens and runs a stack of Transformer layers over the tokens, so every Tr... |
Given the following machine learning model name: Syntax Heat Parse Tree, provide a description of the model | Syntax Heat Parse Trees are heatmaps over parse trees, similar to ["heat trees"](https://doi.org/10.1371/journal.pcbi.1005404) in biology. |
Given the following machine learning model name: Colorization Transformer, provide a description of the model | **Colorization Transformer** is a probabilistic [colorization](https://paperswithcode.com/method/colorization) model composed only of [axial self-attention blocks](https://paperswithcode.com/method/axial). The main advantages of these blocks are the ability to capture a global receptive field with only two layers and $... |
Given the following machine learning model name: RepPoints, provide a description of the model | **RepPoints** is a representation for object detection that consists of a set of points which indicate the spatial extent of an object and semantically significant local areas. This representation is learned via weak localization supervision from rectangular ground-truth boxes and implicit recognition feedback. Based o... |
Given the following machine learning model name: BasicVSR, provide a description of the model | **BasicVSR** is a video super-resolution pipeline including optical flow and [residual blocks](https://paperswithcode.com/method/residual-connection). It adopts a typical bidirectional recurrent network. The upsampling module $U$ contains multiple [pixel-shuffle](https://paperswithcode.com/method/pixelshuffle) and conv... |
Given the following machine learning model name: Mixture model network, provide a description of the model | Mixture model network (MoNet) is a general framework allowing to design convolutional deep architectures on non-Euclidean domains such as graphs and manifolds. Image and description from: [Geometric deep learning on graphs and manifolds using mixture model CNNs](https://arxiv.org/pdf/1611.08402.pdf) |
Given the following machine learning model name: ZCA Whitening, provide a description of the model | **ZCA Whitening** is an image preprocessing method that transforms the data such that the covariance matrix $\Sigma$ is the identity matrix, yielding decorrelated features (a NumPy sketch follows the table). Image Source: [Alex Krizhevsky](http://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf) |
Given the following machine learning model name: Template based Graph Neural Network with Optimal Transport Distances, provide a description of the model | |
Given the following machine learning model name: LFPNet with test time augmentation, provide a description of the model | |
Given the following machine learning model name: Child-Tuning, provide a description of the model | **Child-Tuning** is a fine-tuning technique that updates a subset of parameters (called child network) of large pretrained models via strategically masking out the gradients of the non-child network during the backward process. It decreases the hypothesis space of the model via a task-specific mask applied to the full ... |
Given the following machine learning model name: Shake-Shake Regularization, provide a description of the model | **Shake-Shake Regularization** aims to improve the generalization ability of multi-branch networks by replacing the standard summation of parallel branches with a stochastic affine combination. A typical pre-activation [ResNet](https://paperswithcode.com/method/resnet) with 2 residual branches would follow this equati... |
Given the following machine learning model name: Inception-ResNet-v2-A, provide a description of the model | **Inception-ResNet-v2-A** is an image model block for a 35 x 35 grid used in the [Inception-ResNet-v2](https://paperswithcode.com/method/inception-resnet-v2) architecture. |
Given the following machine learning model name: Spectral Clustering, provide a description of the model | Spectral clustering has attracted increasing attention due to its promising ability to deal with nonlinearly separable datasets [15], [16]. In spectral clustering, the spectrum of the graph Laplacian is used to reveal the cluster structure. The spectral clustering algorithm mainly consists of two steps: 1) construc... |
Given the following machine learning model name: Softmax, provide a description of the model | The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have: $$ P(y=j \mid{x}) = \frac{e^{x^{T}w_{j}}}{\sum^{K}_{k=1}e^{x^{T}w_{k}}} $$ (a NumPy sketch follows the table) |
Given the following machine learning model name: Pruning, provide a description of the model | |
Given the following machine learning model name: Feature Pyramid Grid, provide a description of the model | **Feature Pyramid Grids**, or **FPG**, is a deep multi-pathway feature pyramid that represents the feature scale-space as a regular grid of parallel bottom-up pathways which are fused by multi-directional lateral connections. It connects the backbone features, $C$, of a ConvNet with a regular structure of $p$ parallel... |
Given the following machine learning model name: Spatial Pyramid Pooling, provide a description of the model | **Spatial Pyramid Pooling (SPP)** is a pooling layer that removes the fixed-size constraint of the network, i.e. a CNN does not require a fixed-size input image. Specifically, we add an SPP layer on top of the last convolutional layer. The SPP layer pools the features and generates fixed-length outputs, which are then... |
Given the following machine learning model name: ReLIC, provide a description of the model | **ReLIC**, or **Representation Learning via Invariant Causal Mechanisms**, is a self-supervised learning objective that enforces invariant prediction of proxy targets across augmentations through an invariance regularizer which yields improved generalization guarantees. We can write the objective as: $$ \unders... |
Given the following machine learning model name: Masked autoencoder, provide a description of the model | |
Given the following machine learning model name: Bottleneck Residual Block, provide a description of the model | A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to in... |
Given the following machine learning model name: Random Mix-up, provide a description of the model | R-Mix (Random Mix-up) is a Mix-up family Data Augmentation method. It combines random Mix-up with Saliency-guided mix-up, producing a procedure that is fast and performant, while preserving good characteristics of Saliency-guided Mix-up such as low Expected Calibration Error and high Weakly-supervised Object Localizatio... |
Given the following machine learning model name: Dynamic SmoothL1 Loss, provide a description of the model | **Dynamic SmoothL1 Loss (DSL)** is a loss function in object detection where we change the shape of loss function to gradually focus on high quality samples: $$\text{DSL}\left(x, \beta\_{now}\right) = 0.5|{x}|^{2}/\beta\_{now}, \text{ if } |x| < \beta\_{now}\text{,} $$ $$\text{DSL}\left(x, \beta\_{now}\right) = |{... |
Given the following machine learning model name: style-based recalibration module, provide a description of the model | SRM combines style transfer with an attention mechanism. Its main contribution is style pooling which utilizes both mean and standard deviation of the input features to improve its capability to capture global information. It also adopts a lightweight channel-wise fully-connected (CFC) layer, in place of the original f... |
Given the following machine learning model name: Patch AutoAugment, provide a description of the model | **Patch AutoAugment** is a patch-level automatic data augmentation algorithm that automatically searches for the optimal augmentation policies for the patches of an image. Specifically, PAA allows each patch DA operation to be controlled by an agent and models it as a Multi-Agent Reinforcement Learning (MARL) problem. ... |
Given the following machine learning model name: Entropy Minimized Ensemble of Adapters, provide a description of the model | **Entropy Minimized Ensemble of Adapters**, or **EMEA**, is a method that optimizes the ensemble weights of the pretrained language adapters for each test sentence by minimizing the entropy of its predictions. The intuition behind the method is that a good [adapter](https://paperswithcode.com/method/adapter) weight $\a... |
Given the following machine learning model name: SEER, provide a description of the model | **SEER** is a self-supervised learning approach for training large models on random, uncurated images with no supervision. It trains [RegNet-Y](https://paperswithcode.com/method/regnet-y) architectures with the [SwAV](https://paperswithcode.com/method/swav) method. Several adjustments are made to self-supervised training to m... |
Given the following machine learning model name: DeepMind AlphaStar, provide a description of the model | **AlphaStar** is a reinforcement learning agent for tackling the game of Starcraft II. It learns a policy $\pi\_{\theta}\left(a\_{t}\mid{s\_{t}}, z\right) = P\left[a\_{t}\mid{s\_{t}}, z\right]$ using a neural network for parameters $\theta$ that receives observations $s\_{t} = \left(o\_{1:t}, a\_{1:t-1}\right)$ as inpu... |
Given the following machine learning model name: CANINE, provide a description of the model | **CANINE** is a pre-trained encoder for language understanding that operates directly on character sequences—without explicit tokenization or vocabulary—and a pre-training strategy with soft inductive biases in place of hard token boundaries. To use its finer-grained input effectively and efficiently, Canine combines d... |
Given the following machine learning model name: Bootstrap Your Own Latent, provide a description of the model | **BYOL** (Bootstrap Your Own Latent) is a new approach to self-supervised learning. BYOL’s goal is to learn a representation $y\_{\theta}$ which can then be used for downstream tasks. BYOL uses two neural networks to learn: the online and target networks. The online network is defined by a set of weights $\theta$ and is comprised of th... |
Given the following machine learning model name: CRISS, provide a description of the model | **CRISS**, or **Cross-lingual Retrieval for Iterative Self-Supervised Training**, is a self-supervised learning method for multilingual sequence generation. CRISS is developed based on the finding that the encoder outputs of multilingual denoising autoencoder can be used as language agnostic representation to ... |
Given the following machine learning model name: BAGUA, provide a description of the model | **BAGUA** is a communication framework whose design goal is to provide a system abstraction that is both flexible and modular to support state-of-the-art system relaxation techniques of distributed training. The abstraction goes beyond parameter server and Allreduce paradigms, and provides a collection of MPI-style col... |
Given the following machine learning model name: Spectral Gap Rewiring Layer, provide a description of the model | **TL;DR: GAP-Layer is a GNN layer which is able to rewire a graph in an inductive and parameter-free way, optimizing the spectral gap (minimizing or maximizing the bottleneck size) and learning a differentiable way to compute the Fiedler vector and the Fiedler value of the graph.** **GAP-Layer** is a rewirin... |
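
The sketches below illustrate, in plain NumPy, a few of the formulas quoted in the table rows above. All function and variable names are illustrative assumptions of these sketches, not code from the cited papers. First, the Clipped Double Q-learning row: the target upper-bounds one critic's estimate by taking the minimum of the two.

```python
import numpy as np

def clipped_double_q_target(reward, q1_next, q2_next, gamma=0.99, done=False):
    """Target y = r + gamma * min(Q_theta1(s', a'), Q_theta2(s', a')).

    Taking the elementwise minimum of the two critics' estimates damps the
    overestimation bias of a single learned Q-function.
    """
    return reward + gamma * (1.0 - float(done)) * np.minimum(q1_next, q2_next)

print(clipped_double_q_target(1.0, q1_next=4.2, q2_next=3.7))  # 1.0 + 0.99 * 3.7
```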
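
For the Dot-Product Attention row, the alignment score $f\_{att}\left(\textbf{h}\_{i}, \textbf{s}\_{j}\right) = h\_{i}^{T}s\_{j}$ becomes a single matrix product when computed for all pairs; a minimal sketch with assumed shapes:

```python
import numpy as np

def dot_product_attention_scores(h, s):
    """All-pairs alignment scores f_att(h_i, s_j) = h_i^T s_j.

    h: (n, d) query states, s: (m, d) key states -> (n, m) score matrix.
    """
    return h @ s.T

rng = np.random.default_rng(0)
scores = dot_product_attention_scores(rng.normal(size=(2, 4)),
                                      rng.normal(size=(3, 4)))
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row softmax
print(weights.shape)  # (2, 3), each row sums to 1
```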
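
For the Phase Shuffle row, a hedged sketch of the operation as described: shift each layer's activations along time by a uniform offset in $[-n, n]$. Filling the exposed boundary by reflection padding is an assumption of this sketch.

```python
import numpy as np

def phase_shuffle(activations, n, rng):
    """Randomly shift activations by an offset drawn uniformly from [-n, n].

    activations: (batch, time, channels). Boundary samples exposed by the
    shift are filled by reflection padding; output length equals input length.
    """
    shift = int(rng.integers(-n, n + 1))
    if shift == 0:
        return activations
    pad = abs(shift)
    padded = np.pad(activations, ((0, 0), (pad, pad), (0, 0)), mode="reflect")
    start = pad - shift  # the slice window moves opposite to the shift direction
    return padded[:, start:start + activations.shape[1], :]

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 16, 4))
print(phase_shuffle(x, n=2, rng=rng).shape)  # (2, 16, 4)
```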
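
The PReLU row's two-case formula collapses to a single `where`; here `a` stands for the learned negative-slope coefficient (scalar or per-channel):

```python
import numpy as np

def prelu(y, a):
    """PReLU: f(y_i) = y_i if y_i >= 0, else a_i * y_i."""
    return np.where(y >= 0, y, a * y)

print(prelu(np.array([-2.0, -0.5, 0.0, 1.5]), a=0.25))
# [-0.5   -0.125  0.     1.5  ]
```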
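
For the Cycle Consistency Loss row: with mappings $G : X \rightarrow Y$ and $F : Y \rightarrow X$, the loss penalizes round trips that fail to return to the original sample. A toy sketch with stand-in callables for the learned generators:

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L_cyc = E||F(G(x)) - x||_1 + E||G(F(y)) - y||_1."""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

G = lambda x: x + 0.1   # stand-in for a learned X -> Y generator
F = lambda y: y - 0.1   # stand-in for a learned Y -> X generator
x, y = np.zeros((8, 3)), np.ones((8, 3))
print(cycle_consistency_loss(x, y, G, F))  # 0.0: the toy maps are exact inverses
```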
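
For the ZCA Whitening row, one standard construction (assumed here) is $W = U \operatorname{diag}\left(1/\sqrt{\lambda + \epsilon}\right) U^{T}$ from the eigendecomposition of the data covariance:

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """Whiten rows of X so the result has (approximately) identity covariance."""
    Xc = X - X.mean(axis=0)                      # center the data
    sigma = Xc.T @ Xc / Xc.shape[0]              # covariance matrix
    lam, U = np.linalg.eigh(sigma)               # Sigma = U diag(lam) U^T
    W = U @ np.diag(1.0 / np.sqrt(lam + eps)) @ U.T
    return Xc @ W

rng = np.random.default_rng(0)
Xw = zca_whiten(rng.normal(size=(2000, 4)) @ rng.normal(size=(4, 4)))
print(np.round(np.cov(Xw, rowvar=False), 2))     # close to the identity matrix
```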
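
Finally, the Softmax row's formula, with the usual max-subtraction added for numerical stability (an implementation detail not part of the quoted formula):

```python
import numpy as np

def softmax_probs(x, W):
    """P(y = j | x) = exp(x^T w_j) / sum_k exp(x^T w_k).

    x: (d,) input; W: (d, K) with one weight column w_j per class.
    """
    logits = x @ W
    z = np.exp(logits - logits.max())  # shift-invariant; avoids overflow
    return z / z.sum()

rng = np.random.default_rng(0)
p = softmax_probs(rng.normal(size=5), rng.normal(size=(5, 3)))
print(p, p.sum())  # three probabilities summing to 1.0
```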