prompts | description |
|---|---|
Given the following machine learning model name: Fractal Block, provide a description of the model | A **Fractal Block** is an image model block that utilizes an expansion rule that yields a structural layout of truncated fractals. For the base case where $f\_{1}\left(z\right) = \text{conv}\left(z\right)$ is a convolutional layer, we then have recursive fractals of the form:
$$ f\_{C+1}\left(z\right) = \left[\left(... |
Given the following machine learning model name: DVD-GAN GBlock, provide a description of the model | **DVD-GAN GBlock** is a [residual block](https://paperswithcode.com/method/residual-block) for the generator used in the [DVD-GAN](https://paperswithcode.com/method/dvd-gan) architecture for video generation. |
Given the following machine learning model name: Siamese U-Net, provide a description of the model | Siamese U-Net model with a pre-trained ResNet34 architecture as an encoder for data efficient Change Detection |
Given the following machine learning model name: ReLU6, provide a description of the model | **ReLU6** is a modification of the [rectified linear unit](https://paperswithcode.com/method/relu) where the activation is capped at a maximum value of $6$. This is motivated by increased robustness when used with low-precision computation.
Image Credit: [PyTorch](https://pytorch.org/docs/master/generated/torch.nn.ReLU6.htm... |
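A minimal Python sketch of this clipping, in scalar form (the function name is illustrative):

```python
def relu6(x: float) -> float:
    """ReLU clipped to a maximum activation of 6: min(max(0, x), 6)."""
    return min(max(0.0, x), 6.0)
```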
Given the following machine learning model name: Auditory Cortex ResNet, provide a description of the model | The **Auditory Cortex ResNet**, AUCO ResNet for short, is a deep neural network architecture designed for audio classification and trained end-to-end. It is inspired by the architectural organization of the rat's auditory cortex, and also contains innovations 2 and 3. The network outperforms the s... |
Given the following machine learning model name: Crossmodal Contrastive Learning, provide a description of the model | **CMCL**, or **Crossmodal Contrastive Learning**, is a method for unifying visual and textual representations into the same semantic space based on a large-scale corpus of image collections, text corpus and image-text pairs. The CMCL aligns the visual representations and textual representations, and unifies them into t... |
Given the following machine learning model name: SNIPER, provide a description of the model | **SNIPER** is a multi-scale training approach for instance-level recognition tasks like object detection and instance-level segmentation. Instead of processing all pixels in an image pyramid, SNIPER selectively processes context regions around the ground-truth objects (a.k.a chips). This can help to speed up multi-scal... |
Given the following machine learning model name: Morphence, provide a description of the model | **Morphence** is an approach for adversarial defense that shifts the defense landscape by making a model a moving target against adversarial examples. By regularly moving the decision function of a model, Morphence makes it significantly challenging for repeated or correlated attacks to succeed. Morphence deploys a poo... |
Given the following machine learning model name: EfficientUNet++, provide a description of the model | Decoder architecture inspired by the [UNet++](https://paperswithcode.com/method/unet) structure and the [EfficientNet](https://paperswithcode.com/method/efficientnet) building blocks. Keeping the UNet++ structure, the EfficientUNet++ achieves higher performance and significantly lower computational complexity through t... |
Given the following machine learning model name: U2-Net, provide a description of the model | **U2-Net** is a two-level nested U-structure architecture that is designed for salient object detection (SOD). The architecture allows the network to go deeper, attain high resolution, without significantly increasing the memory and computation cost. This is achieved by a nested U-structure: on the bottom level, with ... |
Given the following machine learning model name: Neural Architecture Search, provide a description of the model | **Neural Architecture Search (NAS)** learns a modular architecture which can be transferred from a small dataset to a large dataset. The method does this by reducing the problem of learning best convolutional architectures to the problem of learning a small convolutional cell. The cell can then be stacked in series to ... |
Given the following machine learning model name: Efficient Recurrent Unit, provide a description of the model | An **Efficient Recurrent Unit (ERU)** extends [LSTM](https://paperswithcode.com/method/mrnn)-based language models by replacing linear transforms for processing the input vector with the [EESP](https://paperswithcode.com/method/eesp) unit inside the [LSTM](https://paperswithcode.com/method/lstm) cell. |
Given the following machine learning model name: Transformer, provide a description of the model | A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on... |
Given the following machine learning model name: Convolutional Hough Matching, provide a description of the model | **Convolutional Hough Matching**, or **CHM**, is a geometric matching algorithm that distributes similarities of candidate matches over a geometric transformation space and evaluates them in a convolutional manner. It is cast into a trainable neural layer with a semi-isotropic high-dimensional kernel, which learns n... |
Given the following machine learning model name: Language-driven Scene Synthesis using Multi-conditional Diffusion Model, provide a description of the model | Our main contribution is the Guiding Points Network, where we
integrate all information from the conditions to generate guiding points. |
Given the following machine learning model name: DV3 Attention Block, provide a description of the model | **DV3 Attention Block** is an attention-based module used in the [Deep Voice 3](https://paperswithcode.com/method/deep-voice-3) architecture. It uses a [dot-product attention](https://paperswithcode.com/method/dot-product-attention) mechanism. A query vector (the hidden states of the decoder) and the per-timestep key v... |
Given the following machine learning model name: Singular Value Decomposition Parameterization, provide a description of the model | |
Given the following machine learning model name: DE-GAN: A Conditional Generative Adversarial Network for Document Enhancement, provide a description of the model | Documents often exhibit various forms of degradation, which make them hard to read and substantially deteriorate the performance of an OCR system. In this paper, we propose an effective end-to-end framework named Document Enhancement Generative Adversarial Networks (DE-GAN) that uses conditional GANs (cGANs) to ... |
Given the following machine learning model name: Layer Normalization, provide a description of the model | Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for ... |
Given the following machine learning model name: Variance-based Feature Importance of Artificial Neural Networks, provide a description of the model | |
Given the following machine learning model name: Goal-Driven Tree-Structured Neural Model, provide a description of the model | |
Given the following machine learning model name: Optimizer Activation Function, provide a description of the model | A new activation function named NIPUNA: $f(x)=\max(g(x), x)$, where $g(x)=\frac{x}{1+e^{-\beta x}}$ |
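A minimal Python sketch of this definition, in scalar form ($\beta$ is shown with an illustrative default of 1; the function name is taken from the description):

```python
import math

def nipuna(x: float, beta: float = 1.0) -> float:
    """NIPUNA: max(g(x), x) with g(x) = x / (1 + exp(-beta * x)), a SiLU-like gate."""
    g = x / (1.0 + math.exp(-beta * x))
    return max(g, x)
```

For positive inputs the identity branch dominates, while negative inputs are damped toward zero by the sigmoid gate.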
Given the following machine learning model name: Random Horizontal Flip, provide a description of the model | **RandomHorizontalFlip** is a type of image data augmentation which horizontally flips a given image with a given probability.
Image Credit: [Apache MXNet](https://mxnet.apache.org/versions/1.5.0/tutorials/gluon/data_augmentation.html) |
Given the following machine learning model name: Topographic VAE, provide a description of the model | **Topographic VAE** is a method for efficiently training deep generative models with topographically organized latent variables. The model learns sets of approximately equivariant features (i.e. "capsules") directly from sequences and achieves higher likelihood on correspondingly transforming test sequences. The combin... |
Given the following machine learning model name: Contrastive Cross-View Mutual Information Maximization, provide a description of the model | **CV-MIM**, or **Contrastive Cross-View Mutual Information Maximization**, is a representation learning method to disentangle pose-dependent as well as view-dependent factors from 2D human poses. The method trains a network using cross-view mutual information maximization, which maximizes mutual information of the same... |
Given the following machine learning model name: ShapeConv, provide a description of the model | **ShapeConv**, or **Shape-aware Convolutional layer**, is a convolutional layer for processing the depth feature in indoor RGB-D semantic segmentation. The depth feature is firstly decomposed into a shape-component and a base-component, next two learnable weights are introduced to cooperate with them independently, and... |
Given the following machine learning model name: Mirror Descent Policy Optimization, provide a description of the model | **Mirror Descent Policy Optimization (MDPO)** is a policy gradient algorithm based on the idea of iteratively solving a trust-region problem that minimizes a sum of two terms: a linearization of the standard RL objective function and a proximity term that restricts two consecutive updates to be close to each other. It ... |
Given the following machine learning model name: Deep Belief Network, provide a description of the model | A **Deep Belief Network (DBN)** is a multi-layer generative graphical model. DBNs have bi-directional connections ([RBM](https://paperswithcode.com/method/restricted-boltzmann-machine)-type connections) on the top layer while the bottom layers only have top-down connections. They are trained using layerwise pre-trainin... |
Given the following machine learning model name: Capsule Network, provide a description of the model | A capsule is a group of neurons whose activation vector performs complex internal computations on its inputs. The length of the activation vector represents the probability that a given feature is present, while the orientation of the vector encodes the state of the detected feature. In ... |
Given the following machine learning model name: CubeRE, provide a description of the model | Our model known as CubeRE first encodes each input sentence using a language model encoder to obtain the contextualized sequence representation. We then capture the interaction between each possible head and tail entity as a pair representation for predicting the entity-relation label scores. To reduce the computationa... |
Given the following machine learning model name: Hierarchical Average Precision training for Pertinent ImagE Retrieval, provide a description of the model | |
Given the following machine learning model name: Replica exchange stochastic gradient Langevin Dynamics, provide a description of the model | reSGLD proposes to simulate a high-temperature particle for exploration and a low-temperature particle for exploitation and allows them to swap simultaneously. Moreover, a correction term is included to avoid biases. |
Given the following machine learning model name: Sarsa, provide a description of the model | **Sarsa** is an on-policy TD control algorithm:
$$Q\left(S\_{t}, A\_{t}\right) \leftarrow Q\left(S\_{t}, A\_{t}\right) + \alpha\left[R_{t+1} + \gamma{Q}\left(S\_{t+1}, A\_{t+1}\right) - Q\left(S\_{t}, A\_{t}\right)\right] $$
This update is done after every transition from a nonterminal state $S\_{t}$. If $S\_{t+1... |
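The tabular update above can be sketched as follows (state/action indices and the hyperparameter values are illustrative):

```python
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """One Sarsa step: Q(S_t, A_t) += alpha * [R_{t+1} + gamma * Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t)]."""
    td_target = r + gamma * Q[s_next][a_next]
    Q[s][a] += alpha * (td_target - Q[s][a])
    return Q
```

Note that the bootstrap term uses the action actually taken in $S\_{t+1}$, which is what makes Sarsa on-policy.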
Given the following machine learning model name: Table Pre-training via Execution, provide a description of the model | TAPEX is a conceptually simple and empirically powerful pre-training approach to empower existing models with table reasoning skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesising executable SQL queries. |
Given the following machine learning model name: Convolutional time-domain audio separation network, provide a description of the model | Combines learned time-frequency representation with a masker architecture based on 1D [dilated convolution](https://paperswithcode.com/method/dilated-convolution). |
Given the following machine learning model name: Enhanced Sequential Inference Model, provide a description of the model | **Enhanced Sequential Inference Model** or **ESIM** is a sequential NLI model proposed in [Enhanced LSTM for Natural Language Inference](https://www.aclweb.org/anthology/P17-1152) paper. |
Given the following machine learning model name: Sym-NCO, provide a description of the model | |
Given the following machine learning model name: Contractive Autoencoder, provide a description of the model | A **Contractive Autoencoder** is an autoencoder that adds a penalty term to the classical reconstruction cost function. This penalty term corresponds to the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input. This penalty term results in a localized space contraction which in tur... |
Given the following machine learning model name: Adaptive Softmax, provide a description of the model | **Adaptive Softmax** is a speedup technique for the computation of probability distributions over words. The adaptive [softmax](https://paperswithcode.com/method/softmax) is inspired by the class-based [hierarchical softmax](https://paperswithcode.com/method/hierarchical-softmax), where the word classes are built to mi... |
Given the following machine learning model name: InfoNCE, provide a description of the model | **InfoNCE**, where NCE stands for Noise-Contrastive Estimation, is a type of contrastive loss function used for [self-supervised learning](https://paperswithcode.com/methods/category/self-supervised-learning).
Given a set $X = ${$x\_{1}, \dots, x\_{N}$} of $N$ random samples containing one positive sample from $p\le... |
Given the following machine learning model name: CT3D, provide a description of the model | **CT3D** is a two-stage 3D object detection framework that leverages a high-quality region proposal network and a Channel-wise [Transformer](https://paperswithcode.com/method/transformer) architecture. The proposed CT3D simultaneously performs proposal-aware embedding and channel-wise context aggregation for the point ... |
Given the following machine learning model name: Dual Path Network, provide a description of the model | A **Dual Path Network (DPN)** is a convolutional neural network which presents a new topology of connection paths internally. The intuition is that [ResNets](https://paperswithcode.com/method/resnet) enable feature re-usage while [DenseNet](https://paperswithcode.com/method/densenet) enables new feature exploration, a... |
Given the following machine learning model name: Single-Shot Multi-Object Tracker, provide a description of the model | **Single-Shot Multi-Object Tracker** or **SMOT**, is a tracking framework that converts any single-shot detector (SSD) model into an online multiple object tracker, which emphasizes simultaneously detecting and tracking of the object paths. Contrary to the existing tracking by detection approaches which suffer from err... |
Given the following machine learning model name: HardELiSH, provide a description of the model | **HardELiSH** is an activation function for neural networks. The HardELiSH is a multiplication of the [HardSigmoid](https://paperswithcode.com/method/hard-sigmoid) and [ELU](https://paperswithcode.com/method/elu) in the negative part and a multiplication of the Linear and the HardSigmoid in the positive
part:
$$f\... |
Given the following machine learning model name: Spatial and Channel SE Blocks, provide a description of the model | To aggregate global spatial information, an SE block applies global pooling to the feature map. However, it ignores pixel-wise spatial information, which is important in dense prediction tasks. Therefore, Roy et al. proposed spatial and channel SE blocks (scSE). Like BAM, spatial SE blocks are used, complementin... |
Given the following machine learning model name: CRF-RNN, provide a description of the model | **CRF-RNN** is a formulation of a [CRF](https://paperswithcode.com/method/crf) as a Recurrent Neural Network. Specifically it formulates mean-field approximate inference for the Conditional Random Fields with Gaussian pairwise potentials as Recurrent Neural Networks. |
Given the following machine learning model name: Two Time-scale Update Rule, provide a description of the model | The **Two Time-scale Update Rule (TTUR)** is an update rule for generative adversarial networks trained with stochastic gradient descent. TTUR has an individual learning rate for both the discriminator and the generator. The main premise is that the discriminator converges to a local minimum when the generator is fixed... |
Given the following machine learning model name: Mish, provide a description of the model | **Mish** is an activation function for neural networks which can be defined as:
$$ f\left(x\right) = x\cdot\tanh{\text{softplus}\left(x\right)}$$
where
$$\text{softplus}\left(x\right) = \ln\left(1+e^{x}\right)$$
(Compare with functionally similar previously proposed activation functions such as the [GELU](h... |
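The two formulas above can be sketched in Python, in scalar form (the function name is illustrative):

```python
import math

def mish(x: float) -> float:
    """Mish: x * tanh(softplus(x)), with softplus(x) = ln(1 + e^x)."""
    return x * math.tanh(math.log1p(math.exp(x)))
```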
Given the following machine learning model name: Video Panoptic Segmentation Network, provide a description of the model | **Video Panoptic Segmentation Network**, or **VPSNet**, is a model for video panoptic segmentation. On top of UPSNet, which is a method for image panoptic segmentation, VPSNet is designed to take an additional frame as the reference to correlate time information at two levels: pixel-level fusion and object-level tracki... |
Given the following machine learning model name: (2+1)D Convolution, provide a description of the model | A **(2+1)D Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) used for action recognition convolutional neural networks, with a spatiotemporal volume. As opposed to applying a [3D Convolution](https://paperswithcode.com/method/3d-convolution) over the entire volume, which can be com... |
Given the following machine learning model name: Adaptive Graph Convolutional Neural Networks, provide a description of the model | AGCN is a novel spectral graph convolution network that feeds on original data of diverse graph structures.
Image credit: [Adaptive Graph Convolutional Neural Networks](https://arxiv.org/pdf/1801.03226.pdf) |
Given the following machine learning model name: Inverse Q-Learning, provide a description of the model | **Inverse Q-Learning (IQ-Learn)** is a simple, stable & data-efficient framework for Imitation Learning (IL) that directly learns *soft Q-functions* from expert data. IQ-Learn enables **non-adversarial** imitation learning, working in both offline and online IL settings. It is performant even with very sparse expert... |
Given the following machine learning model name: EsViT, provide a description of the model | **EsViT** proposes two techniques for developing efficient self-supervised vision transformers for visual representation learning: a multi-stage architecture with sparse self-attention and a new pre-training task of region matching. The multi-stage architecture reduces modeling complexity but with a cost of losing the a... |
Given the following machine learning model name: CentripetalNet, provide a description of the model | **CentripetalNet** is a keypoint-based detector which uses centripetal shift to pair corner keypoints from the same instance. CentripetalNet predicts the position and the centripetal shift of the corner points and matches corners whose shifted results are aligned. |
Given the following machine learning model name: Blue River Controls, provide a description of the model | **Blue River Controls** is a tool that allows users to train and test reinforcement learning algorithms on real-world hardware. It features a simple interface based on OpenAI Gym, that works directly on both simulation and hardware. |
Given the following machine learning model name: ENet Dilated Bottleneck, provide a description of the model | **ENet Dilated Bottleneck** is an image model block used in the [ENet](https://paperswithcode.com/method/enet) semantic segmentation architecture. It is the same as a regular [ENet Bottleneck](https://paperswithcode.com/method/enet-bottleneck) but employs dilated convolutions instead. |
Given the following machine learning model name: GeGLU, provide a description of the model | **GeGLU** is an activation function which is a variant of [GLU](https://paperswithcode.com/method/glu). The definition is as follows:
$$ \text{GeGLU}\left(x, W, V, b, c\right) = \text{GELU}\left(xW + b\right) \otimes \left(xV + c\right) $$ |
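A minimal pure-Python sketch of this definition for a single input vector (names and shapes are illustrative; $\otimes$ is the elementwise product, and GELU is computed exactly via the error function):

```python
import math

def gelu(u: float) -> float:
    # Exact GELU: 0.5 * u * (1 + erf(u / sqrt(2)))
    return 0.5 * u * (1.0 + math.erf(u / math.sqrt(2.0)))

def geglu(x, W, V, b, c):
    """GeGLU for one input vector x: GELU(xW + b) elementwise-times (xV + c)."""
    xW = [sum(x[i] * W[i][j] for i in range(len(x))) + b[j] for j in range(len(b))]
    xV = [sum(x[i] * V[i][j] for i in range(len(x))) + c[j] for j in range(len(c))]
    return [gelu(u) * v for u, v in zip(xW, xV)]
```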
Given the following machine learning model name: Smish, provide a description of the model | Smish is an activation function defined as $f(x)=x\cdot \text{tanh}(\ln(1+\sigma(x)))$ where $\sigma(x)$ denotes the sigmoid function. A parameterized version was also described in the form $f(x)=\alpha x\cdot \text{tanh}(\ln(1+\sigma(\beta x)))$.
Paper: Smish: A Novel Activation Function for Deep Learning Methods
... |
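A minimal Python sketch of the non-parameterized form above, in scalar form (the function name is illustrative):

```python
import math

def smish(x: float) -> float:
    """Smish: x * tanh(ln(1 + sigmoid(x)))."""
    sig = 1.0 / (1.0 + math.exp(-x))
    return x * math.tanh(math.log1p(sig))
```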
Given the following machine learning model name: Leaky ReLU, provide a description of the model | **Leaky Rectified Linear Unit**, or **Leaky ReLU**, is a type of activation function based on a [ReLU](https://paperswithcode.com/method/relu), but it has a small slope for negative values instead of a flat slope. The slope coefficient is determined before training, i.e. it is not learnt during training. This type of a... |
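A minimal Python sketch of this behavior (the slope coefficient is a fixed hyperparameter; the default of 0.01 here is illustrative):

```python
def leaky_relu(x: float, negative_slope: float = 0.01) -> float:
    """Leaky ReLU: identity for x >= 0, a small fixed slope for x < 0."""
    return x if x >= 0 else negative_slope * x
```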
Given the following machine learning model name: modified arcsinh, provide a description of the model | |
Given the following machine learning model name: Spatial Feature Transform, provide a description of the model | **Spatial Feature Transform**, or **SFT**, is a layer that generates affine transformation parameters for spatial-wise feature modulation, and was originally proposed within the context of image super-resolution. A Spatial Feature Transform (SFT) layer learns a mapping function $\mathcal{M}$ that outputs a modulation p... |
Given the following machine learning model name: BRepNet, provide a description of the model | **BRepNet** is a neural network for CAD applications. It is designed to operate directly on B-rep data structures, avoiding the need to approximate the model as meshes or point clouds. BRepNet defines convolutional kernels with respect to oriented coedges in the data structure. In the neighborhood of each coedge, a sma... |
Given the following machine learning model name: RandomRotate, provide a description of the model | **RandomRotate** is a type of image data augmentation where the image is rotated by a random angle. |
Given the following machine learning model name: Non-monotonically Triggered ASGD, provide a description of the model | **NT-ASGD**, or **Non-monotonically Triggered ASGD**, is an averaged stochastic gradient descent technique.
In regular ASGD, we take steps identical to [regular SGD](https://paperswithcode.com/method/sgd) but instead of returning the last iterate as the solution, we return $\frac{1}{\left(K-T+1\right)}\sum^{T}\_{i=... |
Given the following machine learning model name: PocketNet, provide a description of the model | **PocketNet** is a face recognition model family discovered through [neural architecture search](https://paperswithcode.com/methods/category/neural-architecture-search). The training is based on multi-step knowledge distillation. |
Given the following machine learning model name: Temporal Distribution Characterization, provide a description of the model | **Temporal Distribution Characterization**, or **TDC**, is a module used in the [AdaRNN](https://paperswithcode.com/method/adarnn) architecture to characterize the distributional information in a time series.
Based on the principle of maximum entropy, maximizing the utilization of shared knowledge underlying a times... |
Given the following machine learning model name: Generative Adversarial Network, provide a description of the model | A **GAN**, or **Generative Adversarial Network**, is a generative model that simultaneously trains
two models: a generative model $G$ that captures the data distribution, and a discriminative model $D$ that estimates the
probability that a sample came from the training data rather than $G$.
The training procedure ... |
Given the following machine learning model name: Multiplicative Attention, provide a description of the model | **Multiplicative Attention** is an attention mechanism where the alignment score function is calculated as:
$$f\_{att}\left(\mathbf{h}\_{i}, \mathbf{s}\_{j}\right) = \mathbf{h}\_{i}^{T}\mathbf{W}\_{a}\mathbf{s}\_{j}$$
Here $\mathbf{h}$ refers to the hidden states for the encoder/source, and $\mathbf{s}$ is the hidd... |
Given the following machine learning model name: Polynomial Convolution, provide a description of the model | PolyConv learns continuous distributions as the convolutional filters to share the weights across different vertices of graphs or points of point clouds. |
Given the following machine learning model name: Characterizable Invertible 3x3 Convolution, provide a description of the model | Characterizable Invertible $3\times3$ Convolution |
Given the following machine learning model name: AutoAugment, provide a description of the model | **AutoAugment** is an automated approach to find data augmentation policies from data. It formulates the problem of finding the best augmentation policy as a discrete search problem. It consists of two components: a search algorithm and a search space.
At a high level, the search algorithm (implemented as a control... |
Given the following machine learning model name: PonderNet, provide a description of the model | **PonderNet** is an adaptive computation method that learns to adapt the amount of computation based on the complexity of the problem at hand. PonderNet learns end-to-end the number of computational steps to achieve an effective compromise between training prediction accuracy, computational cost and generalization. |
Given the following machine learning model name: EfficientNet, provide a description of the model | **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly scales network width, depth, and ... |
Given the following machine learning model name: Early Learning Regularization, provide a description of the model | |
Given the following machine learning model name: SimCLR, provide a description of the model | **SimCLR** is a framework for contrastive learning of visual representations. It learns representations by maximizing agreement between differently augmented views of the same data example via a contrastive loss in the latent space. It consists of:
- A stochastic data augmentation module that transforms any given da... |
Given the following machine learning model name: GeniePath, provide a description of the model | GeniePath is a scalable approach for learning adaptive receptive fields of neural networks defined on permutation-invariant graph data. In GeniePath, we propose an adaptive path layer consisting of two complementary functions designed for breadth and depth exploration respectively, where the former learns the importance ... |
Given the following machine learning model name: Dialogue-Adaptive Pre-training Objective, provide a description of the model | **Dialogue-Adaptive Pre-training Objective (DAPO)** is a pre-training objective for dialogue adaptation, which is designed to measure qualities of dialogues from multiple important aspects, like Readability, Consistency and Fluency which have already been focused on by general LM pre-training objectives, and those also... |
Given the following machine learning model name: Convolutional Block Attention Module, provide a description of the model | **Convolutional Block Attention Module (CBAM)** is an attention module for convolutional neural networks. Given an intermediate feature map, the module sequentially infers attention maps along two separate dimensions, channel and spatial, then the attention maps are multiplied to the input feature map for adaptive feat... |
Given the following machine learning model name: Fixup Initialization, provide a description of the model | **FixUp Initialization**, or **Fixed-Update Initialization**, is an initialization method that rescales the standard initialization of [residual branches](https://paperswithcode.com/method/residual-block) by adjusting for the network architecture. Fixup aims to enable training very deep [residual networks](https://pap... |
Given the following machine learning model name: Adaptive Span Transformer, provide a description of the model | The **Adaptive Attention Span Transformer** is a Transformer that utilises an improvement to the self-attention layer called [adaptive masking](https://paperswithcode.com/method/adaptive-masking) that allows the model to choose its own context size. This results in a network where each attention layer gathers informati... |
Given the following machine learning model name: PointQuad-Transformer, provide a description of the model | **PQ-Transformer**, or **PointQuad-Transformer**, is a [Transformer](https://paperswithcode.com/method/transformer)-based architecture that predicts 3D objects and layouts simultaneously, using point cloud inputs. Unlike existing methods that either estimate layout keypoints or edges, room layouts are directly paramete... |
Given the following machine learning model name: StoGCN, provide a description of the model | StoGCN is a control-variate-based algorithm that allows sampling an arbitrarily small neighbor size. It presents a new theoretical guarantee for the algorithm to converge to a local optimum of GCN. |
Given the following machine learning model name: Early Stopping, provide a description of the model | **Early Stopping** is a regularization technique for deep neural networks that stops training when parameter updates no longer yield improvements on a validation set. In essence, we store and update the current best parameters during training, and when parameter updates no longer yield an improvement (after a set ... |
Given the following machine learning model name: MultiGrain, provide a description of the model | **MultiGrain** is a type of image model that learns a single embedding for classes, instances and copies. In other words, it is a convolutional neural network that is suitable for both image classification and instance retrieval. We learn MultiGrain by jointly training an image embedding for multiple tasks. The result... |
Given the following machine learning model name: BIMAN, provide a description of the model | **BIMAN**, or **Bot Identification by commit Message, commit Association, and author Name**, is a technique to detect bots that commit code. It is comprised of three methods that consider independent aspects of the commits made by a particular author: 1) Commit Message: Identify if commit messages are being generated f... |
Given the following machine learning model name: Inception-ResNet-v2-C, provide a description of the model | **Inception-ResNet-v2-C** is an image model block for an 8 x 8 grid used in the [Inception-ResNet-v2](https://paperswithcode.com/method/inception-resnet-v2) architecture. It largely follows the idea of Inception modules - and grouped convolutions - but also includes residual connections. |
Given the following machine learning model name: Parallax, provide a description of the model | **Parallax** is a hybrid parallel method for training large neural networks. Parallax is a framework that optimizes data parallel training by utilizing the sparsity of model parameters. Parallax introduces a hybrid approach that combines Parameter Server and AllReduce architectures to optimize the amount of data transf... |
Given the following machine learning model name: Pythia, provide a description of the model | **Pythia** is a suite of decoder-only autoregressive language models all trained on public data seen in the exact same order and ranging in size from 70M to 12B parameters. The model architecture and hyperparameters largely follow GPT-3, with a few notable deviations based on recent advances in best practices for large... |
Given the following machine learning model name: High-Order Consensuses, provide a description of the model | |
Given the following machine learning model name: RPDet, provide a description of the model | **RPDet**, or **RepPoints Detector**, is an anchor-free, two-stage object detection model based on deformable convolutions. [RepPoints](https://paperswithcode.com/method/reppoints) serve as the basic object representation throughout the detection system. Starting from the center points, the first set of RepPoints is ob...
Given the following machine learning model name: PipeDream, provide a description of the model | PipeDream is an asynchronous pipeline parallel strategy for training large neural networks. It adds inter-batch pipelining to intra-batch parallelism to further improve parallel training throughput, helping to better overlap computation with communication and reduce the amount of communication when possible. |
Given the following machine learning model name: E-MBConv, provide a description of the model | |
Given the following machine learning model name: Quasi-Recurrent Neural Network, provide a description of the model | A **QRNN**, or **Quasi-Recurrent Neural Network**, is a type of recurrent neural network that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Due to their increased parallelism, they can be up to 16 times fa... |
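The recurrent pooling step that the QRNN entry above alternates with convolutions can be illustrated for a single channel. This is a hypothetical sketch of fo-pooling: in a real QRNN the candidate and gate sequences (`z`, `f`, `o`) come from convolutions applied in parallel across timesteps, and the pooling below is applied element-wise per channel.

```python
def fo_pool(z, f, o, c0=0.0):
    """QRNN-style fo-pooling for one channel.

    c_t = f_t * c_{t-1} + (1 - f_t) * z_t   (forget-gated running state)
    h_t = o_t * c_t                          (output gate per timestep)
    """
    hs = []
    c = c0
    for z_t, f_t, o_t in zip(z, f, o):
        c = f_t * c + (1.0 - f_t) * z_t
        hs.append(o_t * c)
    return hs
```

Only this cheap element-wise recurrence is sequential; everything else (the convolutions producing `z`, `f`, `o`) runs in parallel across timesteps, which is the source of the speedup the entry mentions.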
Given the following machine learning model name: LV-ViT, provide a description of the model | **LV-ViT** is a type of [vision transformer](https://paperswithcode.com/method/vision-transformer) that uses token labelling as a training objective. Different from the standard training objective of ViTs that computes the classification loss on an additional trainable class token, token labelling takes advantage of a...
Given the following machine learning model name: WenLan, provide a description of the model | **WenLan** proposes a two-tower pre-training model called BriVL within the cross-modal contrastive learning framework. A cross-modal pre-training model is defined based on the image-text retrieval task. The main goal is thus to learn two encoders that can embed image and text samples into the same space for effective image-text r...
Given the following machine learning model name: SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings, provide a description of the model | Monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. Approaches based on the state of the art visio...
Given the following machine learning model name: Multilingual Universal Sentence Encoder, provide a description of the model | |
Given the following machine learning model name: Cutout, provide a description of the model | **Cutout** is an image augmentation and regularization technique that randomly masks out square regions of the input during training, and can be used to improve the robustness and overall performance of convolutional neural networks. The main motivation for cutout comes from the problem of object occlusion, which is common...
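The square-region masking that the Cutout entry above describes can be sketched in a few lines. This is an illustrative sketch on a single-channel image stored as a list of lists; the `mask_size` parameter and the zero fill value are assumptions made here, not taken from the original paper's code.

```python
import random

def cutout(image, mask_size, rng=random):
    """Zero out a random mask_size x mask_size square (clipped at the borders)."""
    h, w = len(image), len(image[0])
    cy, cx = rng.randrange(h), rng.randrange(w)   # random mask centre
    half = mask_size // 2
    out = [row[:] for row in image]               # copy, leave input untouched
    for y in range(max(0, cy - half), min(h, cy + half)):
        for x in range(max(0, cx - half), min(w, cx + half)):
            out[y][x] = 0.0                       # mask out the square region
    return out
```

Clipping the square at the image border means the effective masked area shrinks near the edges, which is the usual behaviour of this augmentation.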
Given the following machine learning model name: context2vec, provide a description of the model | **context2vec** is an unsupervised model for learning generic context embeddings of wide sentential contexts, using a bidirectional [LSTM](https://paperswithcode.com/method/lstm). A large plain-text corpus is used to train a neural model that embeds entire sentential contexts and target words in the same low-dime...
Given the following machine learning model name: PIRL, provide a description of the model | **Pretext-Invariant Representation Learning (PIRL, pronounced as “pearl”)** learns invariant representations based on pretext tasks. PIRL is used with a commonly used pretext task that involves solving [jigsaw](https://paperswithcode.com/method/jigsaw) puzzles. Specifically, PIRL constructs image representations that a... |