| prompts | description |
|---|---|
Given the following machine learning model name: CurricularFace, provide a description of the model | **CurricularFace**, or **Adaptive Curriculum Learning**, is a method for face recognition that embeds the idea of curriculum learning into the loss function to achieve a new training scheme. This training scheme mainly addresses easy samples in the early training stage and hard ones in the later stage. Specifically, Cu... |
Given the following machine learning model name: Phish: A Novel Hyper-Optimizable Activation Function, provide a description of the model | Deep-learning models estimate values using backpropagation. The activation function within hidden layers is a critical component to minimizing loss in deep neural-networks. Rectified Linear (ReLU) has been the dominant activation function for the past decade. Swish and Mish are newer activation functions that have show... |
Given the following machine learning model name: MyGym: Modular Toolkit for Visuomotor Robotic Tasks, provide a description of the model | We introduce myGym, a toolkit suitable for fast prototyping of neural networks in the area of robotic manipulation and navigation. Our toolbox is fully modular, enabling users to train their algorithms on different robots, environments, and tasks. We also include pretrained neural network modules for the real-time visi... |
Given the following machine learning model name: BTmPG, provide a description of the model | **BTmPG**, or **Back-Translation guided multi-round Paraphrase Generation**, is a multi-round paraphrase generation method that leverages back-translation to guide the paraphrase model during training and generates paraphrases in a multi-round process. The model regards paraphrase generation as a monolingual translation tas... |
Given the following machine learning model name: MoCo v2, provide a description of the model | **MoCo v2** is an improved version of the [Momentum Contrast](https://paperswithcode.com/method/moco) self-supervised learning algorithm. Motivated by the findings presented in the [SimCLR](https://paperswithcode.com/method/simclr) paper, authors:
- Replace the 1-layer fully connected layer with a 2-layer MLP head w... |
Given the following machine learning model name: Absolute Position Encodings, provide a description of the model | **Absolute Position Encodings** are a type of position embeddings for [Transformer](https://paperswithcode.com/method/transformer)-based models where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\_{model}$ a... |
Given the following machine learning model name: BatchChannel Normalization, provide a description of the model | **Batch-Channel Normalization**, or **BCN**, uses batch knowledge to prevent channel-normalized models from getting too close to "elimination singularities". Elimination singularities correspond to the points on the training trajectory where neurons become consistently deactivated. They cause degenerate manifolds in th... |
Given the following machine learning model name: Generalized Focal Loss, provide a description of the model | **Generalized Focal Loss (GFL)** is a loss function for object detection that combines Quality [Focal Loss](https://paperswithcode.com/method/focal-loss) and Distribution Focal Loss into a general form. |
Given the following machine learning model name: Matching The Statements, provide a description of the model | |
Given the following machine learning model name: Monte Carlo Dropout, provide a description of the model | |
Given the following machine learning model name: Residual Network, provide a description of the model | **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual blocks](https:/... |
Given the following machine learning model name: Gradient Sparsification, provide a description of the model | **Gradient Sparsification** is a technique for distributed training that sparsifies stochastic gradients to reduce the communication cost, with a minor increase in the number of iterations. The key idea behind our sparsification technique is to drop some coordinates of the stochastic gradient and appropriately amplify th... |
Given the following machine learning model name: Disentangled Attribution Curves, provide a description of the model | **Disentangled Attribution Curves (DAC)** provide interpretations of tree ensemble methods in the form of (multivariate) feature importance curves. For a given variable, or group of variables, [DAC](https://paperswithcode.com/method/dac) plots the importance of that variable or group as its value changes.
The Figure to the... |
Given the following machine learning model name: ALIGN, provide a description of the model | In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushes those of non-matched image-text pair... |
Given the following machine learning model name: Deep Layer Aggregation, provide a description of the model | **DLA**, or **Deep Layer Aggregation**, iteratively and hierarchically merges the feature hierarchy across layers in neural networks to make networks with better accuracy and fewer parameters.
In iterative deep aggregation (IDA), aggregation begins at the shallowest, smallest scale and then iteratively merges deep... |
Given the following machine learning model name: ParamCrop, provide a description of the model | **ParamCrop** is a parametric cubic cropping for video contrastive learning, where cubic cropping refers to cropping a 3D cube
from the input video. The central component of ParamCrop is a differentiable spatio-temporal cropping operation, which enables ParamCrop to be trained simultaneously with the video backbone an... |
Given the following machine learning model name: Exponential Linear Unit, provide a description of the model | The **Exponential Linear Unit** (ELU) is an activation function for neural networks. In contrast to [ReLUs](https://paperswithcode.com/method/relu), ELUs have negative values which allows them to push mean unit activations closer to zero like [batch normalization](https://paperswithcode.com/method/batch-normalization) ... |
Given the following machine learning model name: Semantic Clustering by Adopting Nearest Neighbours, provide a description of the model | SCAN automatically groups images into semantically meaningful clusters when ground-truth annotations are absent. SCAN is a two-step approach where feature learning and clustering are decoupled. First, a self-supervised task is employed to obtain semantically meaningful features. Second, the obtained features are used a... |
Given the following machine learning model name: Beneš Block with Residual Switch Units, provide a description of the model | The **Beneš block** is a computation-efficient alternative to dense attention, enabling the modelling of long-range dependencies in O(n log n) time. In comparison, dense attention which is commonly used in Transformers has O(n^2) complexity.
In music, dependencies occur on several scales, including on a coarse scale... |
Given the following machine learning model name: SentencePiece, provide a description of the model | **SentencePiece** is a subword tokenizer and detokenizer for natural language processing. It performs subword segmentation, supporting the byte-pair-encoding ([BPE](https://paperswithcode.com/method/bpe)) algorithm and unigram language model, and then converts this text into an id sequence, guaranteeing perfect reproducibi... |
Given the following machine learning model name: CycleGAN, provide a description of the model | **CycleGAN**, or **Cycle-Consistent GAN**, is a type of generative adversarial network for unpaired image-to-image translation. For two domains $X$ and $Y$, CycleGAN learns a mapping $G : X \rightarrow Y$ and $F: Y \rightarrow X$. The novelty lies in trying to enforce the intuition that these mappings should be reverse... |
Given the following machine learning model name: CDCC-NET, provide a description of the model | CDCC-NET is a multi-task network that analyzes the detected counter region and predicts 9 outputs: eight float numbers referring to the corner positions (x0/w, y0/h, ... , x3/w, y3/h) and an array containing two float numbers regarding the probability of the counter being legible/operational or illegible/faulty. |
Given the following machine learning model name: Audiovisual SlowFast Network, provide a description of the model | **Audiovisual SlowFast Network**, or **AVSlowFast**, is an architecture for integrated audiovisual perception. AVSlowFast has Slow and Fast visual pathways that are integrated with a Faster Audio pathway to model vision and sound in a unified representation. Audio and visual features are fused at multiple layers, enabl... |
Given the following machine learning model name: CNN Bidirectional LSTM, provide a description of the model | A **CNN BiLSTM** is a hybrid bidirectional [LSTM](https://paperswithcode.com/method/lstm) and CNN architecture. In the original formulation applied to named entity recognition, it learns both character-level and word-level features. The CNN component is used to induce the character-level features. For each word the mod... |
Given the following machine learning model name: Serf, provide a description of the model | **Serf**, or **Log-Softplus ERror activation Function**, is a type of activation function which is self-regularized and nonmonotonic in nature. It belongs to the [Swish](https://paperswithcode.com/method/swish) family of functions. Serf is defined as:
$$f\left(x\right) = x\text{erf}\left(\ln\left(1 + e^{x}\right)\ri... |
Given the following machine learning model name: Decorrelated Batch Normalization, provide a description of the model | **Decorrelated Batch Normalization (DBN)**
is a normalization technique which not only centers and scales activations but also whitens them. ZCA whitening instead of [PCA](https://paperswithcode.com/method/pca) whitening is employed since PCA whitening causes a problem called *stochastic axis swapping*, which is detriment... |
Given the following machine learning model name: Learnable adjacency matrix GCN, provide a description of the model | Graph structure is learnable |
Given the following machine learning model name: VGG Loss, provide a description of the model | **VGG Loss** is a type of content loss introduced in the [Perceptual Losses for Real-Time Style Transfer and Super-Resolution](https://paperswithcode.com/paper/perceptual-losses-for-real-time-style) super-resolution and style transfer framework. It is an alternative to pixel-wise losses; VGG Loss attempts to be closer ... |
Given the following machine learning model name: Sparse Transformer, provide a description of the model | A **Sparse Transformer** is a [Transformer](https://paperswithcode.com/method/transformer) based architecture which utilises sparse factorizations of the attention matrix to reduce time/memory to $O(n \sqrt{n})$. Other changes to the Transformer architecture include: (a) a restructured [residual block](https://paperswi... |
Given the following machine learning model name: Class-Attention in Image Transformers, provide a description of the model | **CaiT**, or **Class-Attention in Image Transformers**, is a type of [vision transformer](https://paperswithcode.com/methods/category/vision-transformer) with several design alterations upon the original [ViT](https://paperswithcode.com/method/vision-transformer). First a new layer scaling approach called [LayerScale](... |
Given the following machine learning model name: XCiT Layer, provide a description of the model | An **XCiT Layer** is the main building block of the [XCiT](https://paperswithcode.com/method/xcit) architecture which uses a cross-covariance attention operator as its principal operation. The XCiT layer consists of three main blocks, each preceded by [LayerNorm](https://paperswithcode.com/method/layer-normalizatio... |
Given the following machine learning model name: Strided Attention, provide a description of the model | **Strided Attention** is a factorized attention pattern that has one head attend to the previous
$l$ locations, and the other head attend to every $l$th location, where $l$ is the stride and chosen to be close to $\sqrt{n}$. It was proposed as part of the [Sparse Transformer](https://paperswithcode.com/method/sparse-t... |
Given the following machine learning model name: Neural Network Compression Framework, provide a description of the model | **Neural Network Compression Framework**, or **NNCF**, is a Python-based framework for neural network compression with fine-tuning. It leverages recent advances of various network compression methods and implements some of them, namely quantization, sparsity, filter pruning and binarization. These methods allow produci... |
Given the following machine learning model name: AutoInt, provide a description of the model | **AutoInt** is a deep tabular learning method that models high-order feature interactions of input features. AutoInt can be applied to both numerical and categorical input features. Specifically, both the numerical and categorical features are mapped into the same low-dimensional space. Afterwards, a multi-head self-at... |
Given the following machine learning model name: YOLOv3, provide a description of the model | **YOLOv3** is a real-time, single-stage object detection model that builds on [YOLOv2](https://paperswithcode.com/method/yolov2) with several improvements. Improvements include the use of a new backbone network, [Darknet-53](https://paperswithcode.com/method/darknet-53) that utilises residual connections, or in the wor... |
Given the following machine learning model name: Bilinear Attention, provide a description of the model | Bi-attention employs the attention-in-attention (AiA) mechanism to capture second-order statistical information: the outer point-wise channel attention vectors are computed from the output of the inner channel attention. |
Given the following machine learning model name: Hierarchical Multi-Task Learning, provide a description of the model | Multi-task learning (MTL) introduces an inductive bias, based on a-priori relations between tasks: the trainable model is compelled to model more general dependencies by using the abovementioned relation as an important data feature. Hierarchical MTL, in which different tasks use different levels of the deep neural net... |
Given the following machine learning model name: Truncation Trick, provide a description of the model | The **Truncation Trick** is a latent sampling procedure for generative adversarial networks, where we sample $z$ from a truncated normal (where values which fall outside a range are resampled to fall inside that range).
The original implementation was in [Megapixel Size Image Creation with GAN](https://paperswithcode... |
Given the following machine learning model name: Laplacian Positional Encodings, provide a description of the model | [Laplacian eigenvectors](https://paperswithcode.com/paper/laplacian-eigenmaps-and-spectral-techniques) represent a natural generalization of the [Transformer](https://paperswithcode.com/method/transformer) positional encodings (PE) for graphs as the eigenvectors of a discrete line (NLP graph) are the cosine and sinusoi... |
Given the following machine learning model name: Unified VLP, provide a description of the model | Unified VLP is a unified encoder-decoder model for general vision-language pre-training. The model uses a shared multi-layer Transformer network for both encoding and decoding. The model is pre-trained on a large amount of image-text pairs using the unsupervised learning objectives of two tasks: bidirectional and sequenc... |
Given the following machine learning model name: Elastic Margin Loss for Deep Face Recognition, provide a description of the model | |
Given the following machine learning model name: modReLU, provide a description of the model | **modReLU** is an activation function that is a modification of the [ReLU](https://paperswithcode.com/method/relu). It is a pointwise nonlinearity, $\sigma\_{modReLU}\left(z\right) : \mathbb{C} \rightarrow \mathbb{C}$, which affects only the absolute value of a complex number, defined as:
$$ \sigma\_{modReLU}\left(z\right) = \left(|z| + b\right... |
Given the following machine learning model name: ARShoe, provide a description of the model | **ARShoe** is a multi-branch network for pose estimation and segmentation tackling the "try-on" problem for augmented reality shoes. Consisting of an encoder and a decoder, the multi-branch network is trained to predict keypoints [heatmap](https://paperswithcode.com/method/heatmap), [PAFs](https://paperswithc... |
Given the following machine learning model name: Bi-Directional Graph Convolutional Network, provide a description of the model | |
Given the following machine learning model name: fastText, provide a description of the model | **fastText** embeddings exploit subword information to construct word embeddings. Representations are learnt of character $n$-grams, and words are represented as the sum of the $n$-gram vectors. This extends the word2vec type models with subword information. This helps the embeddings understand suffixes and prefixes. Once ... |
Given the following machine learning model name: Patch Merger Module, provide a description of the model | PatchMerger is a module for Vision Transformers that decreases the number of tokens/patches passed onto each individual transformer encoder block whilst maintaining performance and reducing compute. PatchMerger linearly transforms an input of shape N patches × D dimensions through a learnable weight matrix of sha... |
Given the following machine learning model name: Virtual Batch Normalization, provide a description of the model | **Virtual Batch Normalization** is a normalization method used for training generative adversarial networks that extends batch normalization. Regular [batch normalization](https://paperswithcode.com/method/batch-normalization) causes the output of a neural network for an input example $\mathbf{x}$ to be highly dependen... |
Given the following machine learning model name: CLIPort, provide a description of the model | CLIPort is a language-conditioned imitation-learning agent that combines the broad semantic understanding (what) of CLIP [1] with the spatial precision (where) of Transporter [2]. |
Given the following machine learning model name: Gradient Normalization, provide a description of the model | **Gradient Normalization** is a normalization method for [Generative Adversarial Networks](https://paperswithcode.com/methods/category/generative-adversarial-networks) to tackle the training instability of generative adversarial networks caused by the sharp gradient space. Unlike existing work such as [gradient penalty... |
Given the following machine learning model name: Dynamic Keypoint Head, provide a description of the model | **Dynamic Keypoint Head** is an output head for pose estimation that is conditioned on each instance (person) and can encode the instance concept in the dynamically generated weights of its filters. It is used in the [FCPose](https://paperswithcode.com/method/fcpose) architecture.
The Figure shows the core id... |
Given the following machine learning model name: Fast AutoAugment, provide a description of the model | **Fast AutoAugment** is an image data augmentation algorithm that finds effective augmentation policies via a search strategy based on density matching, motivated by Bayesian DA. The strategy is to improve the generalization performance of a given network by learning the augmentation policies which treat augmented data... |
Given the following machine learning model name: lda2vec, provide a description of the model | **lda2vec** builds representations over both words and documents by mixing word2vec’s skipgram architecture with Dirichlet-optimized sparse topic mixtures.
The Skipgram Negative-Sampling (SGNS) objective of word2vec is modified to utilize document-wide feature vectors while simultaneously learning continuous docume... |
Given the following machine learning model name: Implicit Q-Learning, provide a description of the model | |
Given the following machine learning model name: XLM, provide a description of the model | **XLM** is a [Transformer](https://paperswithcode.com/method/transformer) based architecture that is pre-trained using one of three language modelling objectives:
1. Causal Language Modeling - models the probability of a word given the previous words in a sentence.
2. Masked Language Modeling - the masked language ... |
Given the following machine learning model name: Random Convolutional Kernel Transform, provide a description of the model | Linear classifier using random convolutional kernels applied to time series. |
Given the following machine learning model name: Fisher-BRC, provide a description of the model | **Fisher-BRC** is an actor critic algorithm for offline reinforcement learning that encourages the learned policy to stay close to the data, namely parameterizing the critic as the $\log$-behavior-policy, which generated the offline dataset, plus a state-action value offset term, which can be learned using a neural net... |
Given the following machine learning model name: WaveVAE, provide a description of the model | **WaveVAE** is a generative audio model that can be used as a vocoder in text-to-speech systems. It is a [VAE](https://paperswithcode.com/method/vae) based model that can be trained from scratch by jointly optimizing the encoder $q\_{\phi}\left(\mathbf{z}|\mathbf{x}, \mathbf{c}\right)$ and decoder $p\_{\theta}\left(\ma... |
Given the following machine learning model name: Perceiver IO, provide a description of the model | Perceiver IO is a general neural network architecture that performs well for structured input modalities and output tasks. Perceiver IO is built to easily integrate and transform arbitrary information for arbitrary tasks. |
Given the following machine learning model name: Forward-Looking Actor, provide a description of the model | **FORK**, or **Forward Looking Actor**, is a type of actor for actor-critic algorithms. In particular, FORK includes a neural network that forecasts the next state given the current state and current action, called the system network; and a neural network that forecasts the
reward given a (state, action) pair, called rewar... |
Given the following machine learning model name: Cross-encoder Reranking, provide a description of the model | Cross-encoder Reranking |
Given the following machine learning model name: Gradient Harmonizing Mechanism R, provide a description of the model | **GHM-R** is a loss function designed to balance the gradient flow for bounding box refinement. The GHM first performs statistics on the number of examples with similar attributes w.r.t their gradient density and then attaches a harmonizing parameter to the gradient of each example according to the density. The modific... |
Given the following machine learning model name: Sparse Convolutions, provide a description of the model | |
Given the following machine learning model name: Res2Net, provide a description of the model | **Res2Net** is an image model that employs a variation on bottleneck residual blocks. The motivation is to be able to represent features at multiple scales. This is achieved through a novel building block for CNNs that constructs hierarchical residual-like connections within one single [residual block](https://paperswi... |
Given the following machine learning model name: Mutual Guidance, provide a description of the model | |
Given the following machine learning model name: Peer-attention, provide a description of the model | **Peer-attention** is a network component which dynamically learns the attention weights using another block or input modality. This is unlike AssembleNet which partially relies on exponential mutations to explore connections. Once the attention weights are found, we can either prune the connections by only leaving the... |
Given the following machine learning model name: Inflated 3D ConvNet Retina Net, provide a description of the model | |
Given the following machine learning model name: Structural Deep Network Embedding, provide a description of the model | |
Given the following machine learning model name: ProxylessNet-CPU, provide a description of the model | **ProxylessNet-CPU** is an image model learnt with the [ProxylessNAS](https://paperswithcode.com/method/proxylessnas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) algorithm that is optimized for CPU devices. It uses inverted residual blocks (MBConvs) from [MobileNetV2](http... |
Given the following machine learning model name: ScaledSoftSign, provide a description of the model | The **ScaledSoftSign** is a modification of the **[SoftSign](https://paperswithcode.com/method/softsign-activation)** activation function that has trainable parameters.
$$ScaledSoftSign(x) = \frac{\alpha x}{\beta + |x|}$$ |
Given the following machine learning model name: SortCut Sinkhorn Attention, provide a description of the model | **SortCut Sinkhorn Attention** is a variant of [Sparse Sinkhorn Attention](https://paperswithcode.com/method/sparse-sinkhorn-attention) where a post-sorting truncation of the input sequence is performed, essentially performing a hard top-k operation on the input sequence blocks within the computational graph. While mos... |
Given the following machine learning model name: Groupwise Point Convolution, provide a description of the model | A **Groupwise Point Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) where we apply a [point convolution](https://paperswithcode.com/method/pointwise-convolution) groupwise (using different sets of convolution filters for different groups).
Image Credit: [Chi-Feng Wang](https://towardsdatascie... |
Given the following machine learning model name: CascadePSP, provide a description of the model | **CascadePSP** is a general segmentation refinement model that refines any given segmentation from low to high resolution. The model takes as input an initial mask that can be an output of any algorithm to provide a rough object location. Then the CascadePSP will output a refined mask. The model is designed in a cascad... |
Given the following machine learning model name: imGHUM, provide a description of the model | **imGHUM** is a generative model of 3D human shape and articulated pose, represented as a signed distance function. The full body is modeled implicitly as a function zero-level-set and without the use of an explicit template mesh. We compute the signed distance $s = S\left(\rho, \alpha\right)$ and the semantics $c = C\... |
Given the following machine learning model name: MatrixNet, provide a description of the model | **MatrixNet** is a scale and aspect ratio aware building block for object detection that seeks to handle objects of different sizes and aspect ratios. It has several matrix layers; each layer handles objects of a specific size and aspect ratio. MatrixNets can be seen as an alternative to [FPNs](https://paperswithcode.com/m... |
Given the following machine learning model name: TabNN, provide a description of the model | TabNN is a universal neural network solution to derive effective NN architectures for tabular data in all kinds of tasks automatically. Specifically, the design of TabNN follows two principles: to explicitly leverage expressive feature combinations and to reduce model complexity. Since GBDT has empirically proven its s... |
Given the following machine learning model name: k-Means Clustering, provide a description of the model | **k-Means Clustering** is a clustering algorithm that divides a training set into $k$ different clusters of examples that are near each other. It works by initializing $k$ different centroids {$\mu\left(1\right),\ldots,\mu\left(k\right)$} to different values, then alternating between two steps until convergence:
(i)... |
Given the following machine learning model name: DeepSIM, provide a description of the model | **DeepSIM** is a generative model for conditional image manipulation based on a single image. The network learns to map between a primitive representation of the image to the image itself. At manipulation time, the generator allows for making complex image changes by modifying the primitive input representation and map... |
Given the following machine learning model name: DROID-SLAM, provide a description of the model | **DROID-SLAM** is a deep learning based SLAM system. It consists of recurrent iterative updates of camera pose and pixelwise depth through a Dense Bundle Adjustment layer. This layer leverages geometric constraints, improves accuracy and robustness, and enables a monocular system to handle stereo or RGB-D input without... |
Given the following machine learning model name: TridentNet, provide a description of the model | **TridentNet** is an object detection architecture that aims to generate scale-specific feature
maps with a uniform representational power. A parallel multi-branch architecture is constructed in which each branch shares the same transformation parameters but with different receptive fields. A scale-aware training sch... |
Given the following machine learning model name: DeepZero, provide a description of the model | |
Given the following machine learning model name: Discriminative Fine-Tuning, provide a description of the model | **Discriminative Fine-Tuning** is a fine-tuning strategy that is used for [ULMFiT](https://paperswithcode.com/method/ulmfit) type models. Instead of using the same learning rate for all layers of the model, discriminative fine-tuning allows us to tune each layer with different learning rates. For context, the regular s... |
Given the following machine learning model name: DPN Block, provide a description of the model | A **Dual Path Network** block is an image model block used in convolutional neural networks. The idea of this module is to enable sharing of common features while maintaining the flexibility to explore new features through dual path architectures. In this sense it combines the benefits of [ResNets](https://paperswithcod... |
Given the following machine learning model name: MoViNet, provide a description of the model | **Mobile Video Network**, or **MoViNet**, is a type of computation and memory efficient video network that can operate on streaming video for online inference. Three techniques are used to improve efficiency while reducing the peak memory usage of 3D CNNs. First, a video network search space is designed and [neural arc... |
Given the following machine learning model name: Stand-Alone Self Attention, provide a description of the model | **Stand-Alone Self Attention** (SASA) replaces all instances of spatial [convolution](https://paperswithcode.com/method/convolution) with a form of self-attention applied to [ResNet](https://paperswithcode.com/method/resnet), producing a fully stand-alone self-attentional model. |
Given the following machine learning model name: Visual Parsing, provide a description of the model | Visual Parsing is a vision and language pretrained model that adopts self-attention for visual feature learning where each visual token is an approximate weighted mixture of all tokens. Thus, visual parsing provides the dependencies of each visual token pair. It helps better learning of visual relation with the langua... |
Given the following machine learning model name: SRU, provide a description of the model | **SRU**, or **Simple Recurrent Unit**, is a recurrent neural unit with a light form of recurrence. SRU exhibits the same level of parallelism as [convolution](https://paperswithcode.com/method/convolution) and [feed-forward nets](https://paperswithcode.com/methods/category/feedforward-networks). This is achieved by bal... |
Given the following machine learning model name: Circular Dilated Convolutional Neural Networks, provide a description of the model | |
Given the following machine learning model name: Paddle Anchor Free Network, provide a description of the model | **PAFNet** is an anchor-free detector for object detection that removes pre-defined anchors and regresses the locations directly, which can achieve higher efficiency. The overall network is composed of a backbone, an up-sampling module, an AGS module, a localization branch and a regression branch. Specifically, ResNet... |
Given the following machine learning model name: Discriminative Adversarial Search, provide a description of the model | **Discriminative Adversarial Search**, or **DAS**, is a sequence decoding approach which aims to alleviate the effects of exposure bias and to optimize on the data distribution itself rather than for external metrics. Inspired by generative adversarial networks (GANs), wherein a discriminator is used to improve the gen... |
Given the following machine learning model name: Group-Aware Neural Network, provide a description of the model | **GAGNN**, or **Group-aware Graph Neural Network**, is a hierarchical model for nationwide city air quality forecasting. The model constructs a city graph and a city group graph to model the spatial and latent dependencies between cities, respectively. GAGNN introduces a differentiable grouping network to discover the la... |
Given the following machine learning model name: CARAFE, provide a description of the model | **Content-Aware ReAssembly of FEatures (CARAFE)** is an operator for feature upsampling in convolutional neural networks. CARAFE has several appealing properties: (1) Large field of view. Unlike previous works (e.g. bilinear interpolation) that only exploit subpixel neighborhood, CARAFE can aggregate contextual informa... |
Given the following machine learning model name: Xavier Initialization, provide a description of the model | **Xavier Initialization**, or **Glorot Initialization**, is an initialization scheme for neural networks. Biases are initialized to 0 and the weights $W\_{ij}$ at each layer are initialized as: $$ W\_{ij} \sim U\left[-\frac{\sqrt{6}}{\sqrt{fan_{in} + fan_{out}}}, \frac{\sqrt{6}}{\sqrt{fan_{in} + fan_{out}}}\right] $... |
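The sampling rule in this row can be sketched directly in NumPy; this is a minimal illustration under the formula above (the function name and the fixed seed are mine, not part of any library API):

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, rng=None):
    # Sample W_ij ~ U[-sqrt(6)/sqrt(fan_in + fan_out), +sqrt(6)/sqrt(fan_in + fan_out)]
    rng = rng or np.random.default_rng(0)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = xavier_uniform(256, 128)   # one layer's weight matrix
b = np.zeros(128)              # biases start at 0, per the scheme
```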
Given the following machine learning model name: CodeT5, provide a description of the model | **CodeT5** is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based model for code understanding and generation based on the [T5 architecture](https://paperswithcode.com/method/t5). It utilizes an identifier-aware pre-training objective that considers the crucial token type information (identi... |
Given the following machine learning model name: Differentiable Hyperparameter Search, provide a description of the model | A method for the differentiable, simultaneous optimization of hyperparameters and neural network architecture. It is also a [Neural Architecture Search](https://paperswithcode.com/method/neural-architecture-search) (NAS) method. |
Given the following machine learning model name: GBST, provide a description of the model | **GBST**, or **Gradient-based Subword Tokenization Module**, is a soft gradient-based subword tokenization module that automatically learns latent subword representations from characters in a data-driven fashion. Concretely, GBST enumerates candidate subword blocks and learns to score them in a position-wise fashion us... |
Given the following machine learning model name: LightGCN, provide a description of the model | **LightGCN** is a type of [graph convolutional neural network](https://paperswithcode.com/method/gcn) (GCN), including only the most essential component in GCN (neighborhood aggregation) for collaborative filtering. Specifically, LightGCN learns user and item embeddings by linearly propagating them on the user-item int... |
Given the following machine learning model name: Inception-B, provide a description of the model | **Inception-B** is an image model block used in the [Inception-v4](https://paperswithcode.com/method/inception-v4) architecture. |
Given the following machine learning model name: Selective Kernel, provide a description of the model | A **Selective Kernel** unit is a bottleneck block consisting of a sequence of 1×1 [convolution](https://paperswithcode.com/method/convolution), SK convolution and 1×1 convolution. It was proposed as part of the [SKNet](https://paperswithcode.com/method/sknet) CNN architecture. In general, all the large kernel convoluti... |
Given the following machine learning model name: Pyramid Vision Transformer v2, provide a description of the model | **Pyramid Vision Transformer v2** (PVTv2) is a type of [Vision Transformer](https://paperswithcode.com/method/vision-transformer) for detection and segmentation tasks. It improves on [PVTv1](https://paperswithcode.com/method/pvt) through several design improvements: (1) overlapping patch embedding, (2) convolutional fe... |
Given the following machine learning model name: Attention with Linear Biases, provide a description of the model | **ALiBi**, or **Attention with Linear Biases**, is a [positioning method](https://paperswithcode.com/methods/category/position-embeddings) that allows [Transformer](https://paperswithcode.com/methods/category/transformers) language models to consume, at inference time, sequences which are longer than the ones they were... |
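The linear bias behind ALiBi can be sketched in NumPy as below; this is an illustrative sketch, not the reference implementation: the function name is mine, and the head slopes follow the geometric sequence from the ALiBi paper for power-of-two head counts.

```python
import numpy as np

def alibi_bias(seq_len, num_heads):
    # Head-specific slopes: 2^(-8/num_heads), 2^(-16/num_heads), ...
    slopes = np.array([2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)])
    pos = np.arange(seq_len)
    dist = pos[None, :] - pos[:, None]   # j - i: 0 on the diagonal, negative for past keys
    causal = np.minimum(dist, 0)         # future positions get no bias (handled by the causal mask)
    # Bias added to attention scores before softmax: more negative for more distant keys
    return slopes[:, None, None] * causal[None, :, :]

bias = alibi_bias(seq_len=4, num_heads=8)   # shape (8, 4, 4)
```

Because the penalty grows linearly with key-query distance, the same biases apply unchanged to sequences longer than those seen in training.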