| prompts | description |
|---|---|
Given the following machine learning model name: O-Net, provide a description of the model | |
Given the following machine learning model name: Non-Local Block, provide a description of the model | A **Non-Local Block** is an image block module used in neural networks that wraps a [non-local operation](https://paperswithcode.com/method/non-local-operation). We can define a non-local block as:
$$ \mathbf{z}\_{i} = W\_{z}\mathbf{y}\_{i} + \mathbf{x}\_{i} $$
where $y\_{i}$ is the output from the non-local oper... |
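The residual form $\mathbf{z}\_{i} = W\_{z}\mathbf{y}\_{i} + \mathbf{x}\_{i}$ can be sketched in numpy. This is a minimal illustration assuming the embedded-Gaussian instantiation of the non-local operation (softmax attention over pairwise dot products); the weight names `W_theta`, `W_phi`, `W_g`, `W_z` follow the paper's notation but the shapes here are arbitrary toy choices.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(x, W_theta, W_phi, W_g, W_z):
    """Embedded-Gaussian non-local block: z_i = W_z y_i + x_i.

    x: (N, d) -- N positions with d-dim features (spatial dims flattened).
    """
    theta, phi, g = x @ W_theta, x @ W_phi, x @ W_g   # pairwise embeddings
    attn = softmax(theta @ phi.T, axis=-1)            # (N, N) affinities
    y = attn @ g                                      # non-local operation
    return y @ W_z + x                                # residual connection

rng = np.random.default_rng(0)
N, d, dk = 16, 8, 4
x = rng.normal(size=(N, d))
W_theta, W_phi, W_g = (rng.normal(size=(d, dk)) for _ in range(3))
W_z = rng.normal(size=(dk, d))
z = non_local_block(x, W_theta, W_phi, W_g, W_z)
```

Because of the residual connection, initializing `W_z` to zero makes the block an identity mapping, which is what lets a non-local block be dropped into a pretrained network without disturbing it.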
Given the following machine learning model name: TransferQA, provide a description of the model | **TransferQA** is a transferable generative QA model, built upon [T5](https://paperswithcode.com/method/t5) that combines extractive QA and multi-choice QA via a text-to-text [transformer](https://paperswithcode.com/method/transformer) framework, and tracks both categorical slots and non-categorical slots in DST. In ad... |
Given the following machine learning model name: Model-Agnostic Meta-Learning, provide a description of the model | **MAML**, or **Model-Agnostic Meta-Learning**, is a model and task-agnostic algorithm for meta-learning that trains a model’s parameters such that a small number of gradient updates will lead to fast learning on a new task.
Consider a model represented by a parametrized function $f\_{\theta}$ with parameters $\theta... |
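The inner/outer loop structure of MAML can be shown on a deliberately tiny example. This is a toy sketch, not the paper's implementation: the "model" is a single scalar parameter, each task is defined by a target value $b$ with loss $(\theta - b)^2$, and the meta-gradient is written out analytically instead of via automatic differentiation.

```python
import numpy as np

# Toy MAML: model output is the parameter theta itself;
# task b has loss L_b(theta) = (theta - b)^2.
def inner_step(theta, b, alpha):
    # One task-specific gradient step: theta' = theta - alpha * dL_b/dtheta
    return theta - alpha * 2.0 * (theta - b)

def meta_gradient(theta, b, alpha):
    # d/dtheta of L_b(theta'), differentiating THROUGH the inner update
    theta_p = inner_step(theta, b, alpha)
    return 2.0 * (theta_p - b) * (1.0 - 2.0 * alpha)

theta, alpha, beta = 5.0, 0.1, 0.1      # init, inner LR, meta LR
tasks = [-1.0, 1.0]                     # two tasks with opposite targets
for _ in range(200):                    # outer (meta) loop
    g = np.mean([meta_gradient(theta, b, alpha) for b in tasks])
    theta -= beta * g                   # meta-update across tasks
```

With these symmetric tasks the meta-objective is minimized at $\theta = 0$: the initialization from which one gradient step makes the largest progress on either task.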
Given the following machine learning model name: Graph sampling based inductive learning method, provide a description of the model | Scalable method to train large scale GNN models via sampling small subgraphs. |
Given the following machine learning model name: Reformer, provide a description of the model | **Reformer** is a [Transformer](https://paperswithcode.com/method/transformer) based architecture that seeks to make efficiency improvements. [Dot-product attention](https://paperswithcode.com/method/dot-product-attention) is replaced by one that uses locality-sensitive hashing, changing its complexity
from O($L^2$) t... |
Given the following machine learning model name: Surface Normal-based Spatial Propagation, provide a description of the model | Inspired by the spatial propagation mechanism utilized in the depth completion task \cite{NLSPN}, we introduce a normal-incorporated non-local disparity propagation module in which we use NDP to generate non-local affinities and offsets for spatial propagation at the disparity level. The motivation is that the sample... |
Given the following machine learning model name: Generalized Mean Pooling, provide a description of the model | **Generalized Mean Pooling (GeM)** computes the generalized mean of each channel in a tensor. Formally:
$$ \textbf{e} = \left[\left(\frac{1}{|\Omega|}\sum\_{u\in{\Omega}}x^{p}\_{cu}\right)^{\frac{1}{p}}\right]\_{c=1,\cdots,C} $$
where $p > 0$ is a parameter. Setting this exponent as $p > 1$ increases the contrast... |
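The GeM formula above translates directly into a few lines of numpy. This is a minimal sketch assuming a single (C, H, W) feature tensor and non-negative activations (as after a ReLU); the clipping constant `eps` is a common implementation detail, not part of the formula.

```python
import numpy as np

def gem_pool(x, p=3.0, eps=1e-6):
    """Generalized mean pooling over spatial positions Omega.

    x: (C, H, W) feature tensor; returns a (C,) descriptor.
    p=1 recovers average pooling; p -> inf approaches max pooling.
    """
    x = np.clip(x, eps, None)   # GeM assumes non-negative inputs
    return (x.reshape(x.shape[0], -1) ** p).mean(axis=1) ** (1.0 / p)

x = np.random.default_rng(0).random((4, 8, 8))
e = gem_pool(x, p=3.0)
```

Setting `p=1` reproduces plain average pooling, while a large exponent makes the pooled value track the per-channel maximum, which is the contrast-increasing effect the text describes.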
Given the following machine learning model name: Watch Your Step, provide a description of the model | |
Given the following machine learning model name: ProxylessNet-Mobile, provide a description of the model | **ProxylessNet-Mobile** is a convolutional neural architecture learnt with the [ProxylessNAS](https://paperswithcode.com/method/proxylessnas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) algorithm that is optimized for mobile devices. It uses inverted residual blocks (MBCon... |
Given the following machine learning model name: MobileNetV1, provide a description of the model | **MobileNet** is a type of convolutional neural network designed for mobile and embedded vision applications. They are based on a streamlined architecture that uses depthwise separable convolutions to build lightweight deep neural networks that can have low latency for mobile and embedded devices. |
Given the following machine learning model name: Recurrent Back Projection Network, provide a description of the model | |
Given the following machine learning model name: Bidirectional GRU, provide a description of the model | A **Bidirectional GRU**, or **BiGRU**, is a sequence processing model that consists of two [GRUs](https://paperswithcode.com/method/gru): one taking the input in a forward direction, and the other in a backward direction. It is a bidirectional recurrent neural network with only the input and forget gates.
Image Sou... |
Given the following machine learning model name: Global-Local Attention, provide a description of the model | **Global-Local Attention** is a type of attention mechanism used in the [ETC](https://paperswithcode.com/method/etc) architecture. ETC receives two separate input sequences: the global input $x^{g} = (x^{g}\_{1}, \dots, x^{g}\_{n\_{g}})$ and the long input $x^{l} = (x^{l}\_{1}, \dots x^{l}\_{n\_{l}})$. Typically, the l... |
Given the following machine learning model name: End-To-End Memory Network, provide a description of the model | An **End-to-End Memory Network** is a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of [Memory Network](https://paperswithcode.com/method/memory-network), but unlike the model in that work, it is trained end-to-end, and hence requires significantly les... |
Given the following machine learning model name: SuperpixelGridCut, SuperpixelGridMean, SuperpixelGridMix, provide a description of the model | Karim Hammoudi, Adnane Cabani, Bouthaina Slika, Halim Benhabiles, Fadi Dornaika and Mahmoud Melkemi. SuperpixelGridCut, SuperpixelGridMean and SuperpixelGridMix Data Augmentation, arXiv:2204.08458, 2022. https://doi.org/10.48550/arxiv.2204.08458 |
Given the following machine learning model name: Separate And Diffuse, provide a description of the model | |
Given the following machine learning model name: Canonical Tensor Decomposition with N3 Regularizer, provide a description of the model | Canonical Tensor Decomposition, trained with N3 regularizer |
Given the following machine learning model name: Canvas Method, provide a description of the model | **Canvas Method** is a method for inference attacks on object detection models. It draws a predicted bounding box distribution on an empty canvas for an attack model input. The canvas is initially set to an image of 300$\times$300 pixels in size, where every pixel has a value of zero and the boxes drawn on the canvas h... |
Given the following machine learning model name: Good Feature Matching, provide a description of the model | **Good Feature Matching** is an active map-to-frame feature matching method. Feature matching effort is tied to submatrix selection, which has combinatorial time complexity and requires choosing a scoring metric. Via simulation, the Max-logDet matrix revealing metric is shown to perform best. |
Given the following machine learning model name: Sparsemax, provide a description of the model | **Sparsemax** is a type of activation/output function similar to the traditional [softmax](https://paperswithcode.com/method/softmax), but able to output sparse probabilities.
$$ \text{sparsemax}\left(\mathbf{z}\right) = \underset{\mathbf{p} \in \Delta^{K-1}}{\arg\min} ||\mathbf{p} - \mathbf{z}||^{2} $$ |
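Since sparsemax is the Euclidean projection onto the probability simplex, it has a closed-form solution via sorting. The sketch below implements the standard projection algorithm (sort, find the support size, threshold); it is an illustration, not the authors' reference code.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex,
    which is exactly sparsemax(z)."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]                  # descending order
    k = np.arange(1, z.size + 1)
    cssv = np.cumsum(z_sorted)
    support = 1.0 + k * z_sorted > cssv          # coordinates in the support
    k_z = k[support][-1]                         # support size
    tau = (cssv[k_z - 1] - 1.0) / k_z            # threshold
    return np.maximum(z - tau, 0.0)

p = sparsemax([2.0, 1.5, 0.1])
```

Unlike softmax, the output assigns exactly zero probability to low-scoring entries: for the input above the third coordinate is zeroed out while the result still sums to one.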
Given the following machine learning model name: Batchboost, provide a description of the model | **Batchboost** is a variation on [MixUp](https://paperswithcode.com/method/mixup) that instead of mixing just two images, mixes many images together. |
Given the following machine learning model name: GCNet, provide a description of the model | A **Global Context Network**, or **GCNet**, utilises global context blocks to model long-range dependencies in images. It is based on the [Non-Local Network](https://paperswithcode.com/method/non-local-block), but it modifies the architecture so less computation is required. Global context blocks are applied to multipl... |
Given the following machine learning model name: DenseNAS, provide a description of the model | **DenseNAS** is a [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method that utilises a densely connected search space. The search space is represented as a dense super network, which is built upon designed routing blocks. In the super network, routing blocks are densely conn... |
Given the following machine learning model name: Laplacian EigenMap, provide a description of the model | |
Given the following machine learning model name: Receptive Field Block, provide a description of the model | **Receptive Field Block (RFB)** is a module for strengthening the deep features learned from lightweight CNN models so that they can contribute to fast and accurate detectors. Specifically, RFB makes use of multi-branch pooling with varying kernels corresponding to RFs of different sizes, applies [dilated convolution](... |
Given the following machine learning model name: Self-critical Sequence Training, provide a description of the model | |
Given the following machine learning model name: Distribution-induced Bidirectional Generative Adversarial Network for Graph Representation Learning, provide a description of the model | DBGAN is a method for graph representation learning. Instead of the widely used normal distribution assumption, the prior distribution of latent representation in DBGAN is estimated in a structure-aware way, which implicitly bridges the graph and feature spaces by prototype learning.
Source: [Distribution-induced Bi... |
Given the following machine learning model name: Greedy Policy Search, provide a description of the model | **Greedy Policy Search** (GPS) is a simple algorithm that learns a policy for test-time data augmentation based on the predictive performance on a validation set. GPS starts with an empty policy and builds it in an iterative fashion. Each step selects a sub-policy that provides the largest improvement in calibrated log... |
Given the following machine learning model name: Dueling Network, provide a description of the model | A **Dueling Network** is a type of Q-Network that has two streams to separately estimate (scalar) state-value and the advantages for each action. Both streams share a common convolutional feature learning module. The two streams are combined via a special aggregating layer to produce an
estimate of the state-action va... |
Given the following machine learning model name: Mogrifier LSTM, provide a description of the model | The **Mogrifier LSTM** is an extension to the [LSTM](https://paperswithcode.com/method/lstm) where the LSTM’s input $\mathbf{x}$ is gated conditioned on the output of the previous step $\mathbf{h}\_{prev}$. Next, the gated input is used in a similar manner to gate the output of the
previous time step. After a couple o... |
Given the following machine learning model name: Problem Agnostic Speech Encoder +, provide a description of the model | **PASE+** is a problem-agnostic speech encoder that combines a convolutional encoder followed by multiple neural networks, called workers, tasked to solve self-supervised problems (i.e., ones that do not require manual annotations as ground truth). An online speech distortion module is employed, that contaminates the i... |
Given the following machine learning model name: 3-dimensional interaction space, provide a description of the model | A **trainable 3D interaction space** aims to capture the associations between the triplet components and helps the model recognize multiple triplets in the same frame.
Source: [Nwoye et al.](https://arxiv.org/pdf/2007.05405v1.pdf)
Image source: [Nwoye et al.](https://arxiv.org/pdf/2007.05405v1.pdf) |
Given the following machine learning model name: DExTra, provide a description of the model | **DExTra**, or **Deep and Light-weight Expand-reduce Transformation**, is a light-weight expand-reduce transformation that enables learning wider representations efficiently.
DExTra maps a $d\_{m}$ dimensional input vector into a high dimensional space (expansion) and then
reduces it down to a $d\_{o}$ dimensional ... |
Given the following machine learning model name: Color Jitter, provide a description of the model | **ColorJitter** is a type of image data augmentation where we randomly change the brightness, contrast and saturation of an image.
Image Credit: [Apache MXNet](https://mxnet.apache.org/versions/1.5.0/tutorials/gluon/data_augmentation.html) |
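The three jitter operations can be sketched directly on a float image array. This is an illustrative approximation of what libraries such as torchvision do; the factor ranges `[1-s, 1+s]` and the grayscale blend for saturation are common conventions, not a specification.

```python
import numpy as np

def color_jitter(img, rng, brightness=0.4, contrast=0.4, saturation=0.4):
    """Randomly jitter brightness, contrast and saturation of an RGB image.

    img: float array in [0, 1] of shape (H, W, 3). Each factor is drawn
    uniformly from [1 - s, 1 + s].
    """
    b, c, s = (rng.uniform(1 - f, 1 + f) for f in (brightness, contrast, saturation))
    img = img * b                                  # brightness: scale all pixels
    img = (img - img.mean()) * c + img.mean()      # contrast: scale around the mean
    gray = img.mean(axis=2, keepdims=True)         # saturation: blend with grayscale
    img = gray + (img - gray) * s
    return np.clip(img, 0.0, 1.0)

rng = np.random.default_rng(0)
out = color_jitter(rng.random((16, 16, 3)), rng)
```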
Given the following machine learning model name: StruBERT: Structure-aware BERT for Table Search and Matching, provide a description of the model | A large amount of information is stored in data tables. Users can search for data tables using a keyword-based query. A table is composed primarily of data values that are organized in rows and columns providing implicit structural information. A table is usually accompanied by secondary information such as the caption... |
Given the following machine learning model name: G-GLN Neuron, provide a description of the model | A **G-GLN Neuron** is a type of neuron used in the [G-GLN](https://paperswithcode.com/method/g-gln) architecture. The key idea is that further representational power can be added to a weighted product of Gaussians via a contextual gating procedure. This is achieved by extending a weighted product of Gaussians mo... |
Given the following machine learning model name: Estimation Statistics, provide a description of the model | Estimation statistics is a data analysis framework that uses a combination of effect sizes, confidence intervals, precision planning, and meta-analysis to plan experiments, analyze data and interpret results. It is distinct from null hypothesis significance testing (NHST), which is considered to be less informative. Th... |
Given the following machine learning model name: Contrastive Multiview Coding, provide a description of the model | **Contrastive Multiview Coding (CMC)** is a self-supervised learning approach, based on [CPC](https://paperswithcode.com/method/contrastive-predictive-coding), that learns representations that capture information shared between multiple sensory views. The core idea is to set an anchor view and the sample positive and ... |
Given the following machine learning model name: Asynchronous Interaction Aggregation, provide a description of the model | **Asynchronous Interaction Aggregation**, or **AIA**, is a network that leverages different interactions to boost action detection. There are two key designs in it: one is the Interaction Aggregation structure (IA) adopting a uniform paradigm to model and integrate multiple types of interaction; the other is the Asynch... |
Given the following machine learning model name: PixelShuffle, provide a description of the model | **PixelShuffle** is an operation used in super-resolution models to implement efficient sub-pixel convolutions with a stride of $1/r$. Specifically it rearranges elements in a tensor of shape $(\*, C \times r^2, H, W)$ to a tensor of shape $(\*, C, H \times r, W \times r)$.
Image Source: [Remote Sensing Single-Image... |
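The $(N, C \times r^2, H, W) \to (N, C, H \times r, W \times r)$ rearrangement is a pure reshape/transpose, sketched below in numpy. The index convention matches the usual sub-pixel convolution layout (as in PyTorch's `nn.PixelShuffle`); treat it as an illustration rather than a reference implementation.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (N, C*r^2, H, W) -> (N, C, H*r, W*r)."""
    n, crr, h, w = x.shape
    c = crr // (r * r)
    x = x.reshape(n, c, r, r, h, w)
    x = x.transpose(0, 1, 4, 2, 5, 3)   # interleave the r x r sub-pixel grid
    return x.reshape(n, c, h * r, w * r)

x = np.arange(1 * 4 * 2 * 2).reshape(1, 4, 2, 2).astype(float)
y = pixel_shuffle(x, 2)
```

Each group of $r^2$ input channels is scattered into an $r \times r$ block of output pixels, which is why a convolution producing $C r^2$ channels followed by this operation behaves like a learned upsampling with stride $1/r$.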
Given the following machine learning model name: Linear Warmup With Linear Decay, provide a description of the model | **Linear Warmup With Linear Decay** is a learning rate schedule in which we increase the learning rate linearly for $n$ updates and then linearly decay afterwards. |
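The schedule is simple enough to state as a closed-form function of the step count. The sketch below assumes a decay to zero at `total_steps`; the parameter names are illustrative, not from any particular library.

```python
def linear_warmup_linear_decay(step, warmup_steps, total_steps, peak_lr):
    """LR rises linearly to peak_lr over warmup_steps, then decays linearly to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

lr_mid = linear_warmup_linear_decay(550, 100, 1000, 0.1)   # midway through decay
```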
Given the following machine learning model name: Adaptive Feature Pooling, provide a description of the model | **Adaptive Feature Pooling** pools features from all levels for each proposal in object detection and fuses them for the following prediction. Each proposal is mapped to the different feature levels. Following the idea of [Mask R-CNN](https://paperswithcode.com/method/mask-r-cnn), [RoIAlign](https://pa... |
Given the following machine learning model name: Region Proposal Network, provide a description of the model | A **Region Proposal Network**, or **RPN**, is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals. RPN and algorithms like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) can b... |
Given the following machine learning model name: Compressive Transformer, provide a description of the model | The **Compressive Transformer** is an extension to the [Transformer](https://paperswithcode.com/method/transformer) which maps past hidden activations (memories) to a smaller set of compressed representations (compressed memories). The Compressive Transformer uses the same attention mechanism over its set of memories a... |
Given the following machine learning model name: Social-STGCNN, provide a description of the model | **Social-STGCNN** is a method for human trajectory prediction. Pedestrian trajectories are not only influenced by the pedestrian itself but also by interaction with surrounding objects. |
Given the following machine learning model name: PolarNet, provide a description of the model | **PolarNet** is an improved grid representation for online, single-scan LiDAR point clouds. Instead of using common spherical or bird's-eye-view projection, the polar bird's-eye-view representation balances the points across grid cells in a polar coordinate system, indirectly aligning a segmentation network's attention... |
Given the following machine learning model name: double-stage parameter tuning, provide a description of the model | Parameter tuning method for neural network models with adaptive activation functions. |
Given the following machine learning model name: FixMatch, provide a description of the model | FixMatch is an algorithm that first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented ver... |
Given the following machine learning model name: Categorical Modularity, provide a description of the model | A novel low-resource intrinsic metric to evaluate word
embedding quality based on graph modularity. |
Given the following machine learning model name: GPT-Neo, provide a description of the model | An implementation of model & data parallel [GPT3-like](https://paperswithcode.com/method/gpt-3) models using the [mesh-tensorflow](https://github.com/tensorflow/mesh) library.
Source: [EleutherAI/GPT-Neo](https://github.com/EleutherAI/gpt-neo) |
Given the following machine learning model name: Pose-Appearance Disentangling, provide a description of the model | A method to disentangle pose from other factors in a scene. |
Given the following machine learning model name: TorchBeast, provide a description of the model | **TorchBeast** is a platform for reinforcement learning (RL) research in PyTorch. It implements a version of the popular [IMPALA](https://paperswithcode.com/method/impala) algorithm for fast, asynchronous, parallel training of RL agents. |
Given the following machine learning model name: Mixture of Softmaxes, provide a description of the model | **Mixture of Softmaxes** performs $K$ different softmaxes and mixes them. The motivation is that the traditional [softmax](https://paperswithcode.com/method/softmax) suffers from a softmax bottleneck, i.e. the expressiveness of the conditional probability we can model is constrained by the combination of a dot product ... |
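The mixture can be sketched for a single context vector. This is an illustrative shape-level sketch, assuming $K$ projected contexts with a tanh nonlinearity and a shared output embedding, roughly following the MoS formulation; all dimensions and weight names are toy choices.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def mixture_of_softmaxes(h, W_pi, W_h, W_out):
    """MoS distribution: sum_k pi_k(h) * softmax(h_k W_out).

    h: (d,) context vector. W_pi: (d, K) mixture logits, W_h: (d, K*d)
    projections to K component contexts, W_out: (d, V) output embedding.
    """
    d = h.shape[0]
    pi = softmax(h @ W_pi)                 # (K,) mixture weights
    hk = np.tanh(h @ W_h).reshape(-1, d)   # (K, d) component contexts
    comp = softmax(hk @ W_out, axis=-1)    # (K, V) component softmaxes
    return pi @ comp                       # (V,) mixed distribution

rng = np.random.default_rng(0)
d, K, V = 8, 3, 20
p = mixture_of_softmaxes(rng.normal(size=d), rng.normal(size=(d, K)),
                         rng.normal(size=(d, K * d)), rng.normal(size=(d, V)))
```

Because the final distribution is a convex combination of $K$ softmaxes rather than a single one, its log cannot be written as one dot product plus a constant, which is how MoS escapes the rank constraint of the softmax bottleneck.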
Given the following machine learning model name: Swish, provide a description of the model | **Swish** is an activation function, $f(x) = x \cdot \text{sigmoid}(\beta x)$, where $\beta$ is a learnable parameter. Nearly all implementations do not use the learnable parameter $\beta$, in which case the activation function is $x\sigma(x)$ ("Swish-1").
The function $x\sigma(x)$ is exactly the [SiLU](https://papersw... |
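The definition is a one-liner in numpy; `beta=1.0` gives the Swish-1 / SiLU case described above.

```python
import numpy as np

def swish(x, beta=1.0):
    """Swish activation: x * sigmoid(beta * x). beta=1 gives Swish-1 (SiLU)."""
    return x / (1.0 + np.exp(-beta * x))
```

For large positive inputs Swish is nearly the identity, and for large negative inputs it approaches zero, so it behaves like a smooth, non-monotonic variant of ReLU.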
Given the following machine learning model name: Cosine Linear Unit, provide a description of the model | The **Cosine Linear Unit**, or **CosLU**, is a type of activation function that has trainable parameters and uses the cosine function.
$$CosLU(x) = (x + \alpha \cos(\beta x))\sigma(x)$$ |
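The formula above translates directly into code; in practice $\alpha$ and $\beta$ are trainable, but the sketch below fixes them as plain arguments.

```python
import numpy as np

def coslu(x, alpha=1.0, beta=1.0):
    """CosLU(x) = (x + alpha*cos(beta*x)) * sigmoid(x);
    alpha and beta are trainable parameters in the original formulation."""
    return (x + alpha * np.cos(beta * x)) / (1.0 + np.exp(-x))
```

At the origin the sigmoid gate is 1/2, so `coslu(0)` equals `alpha / 2`; for large positive inputs the gate saturates and the function oscillates around the identity.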
Given the following machine learning model name: MACEst, provide a description of the model | **Model Agnostic Confidence Estimator**, or **MACEst**, is a model-agnostic confidence estimator. Using a set of nearest neighbours, the algorithm differs from other methods by estimating confidence independently as a local quantity which explicitly accounts for both aleatoric and epistemic uncertainty. This approach d... |
Given the following machine learning model name: SpecGAN, provide a description of the model | **SpecGAN** is a generative adversarial network method for spectrogram-based, frequency-domain audio generation. The problem is suited for GANs designed for image generation. The model can be approximately inverted.
To process audio into suitable spectrograms, the authors perform the short-time Fourier transform wi... |
Given the following machine learning model name: InfoGAN, provide a description of the model | **InfoGAN** is a type of generative adversarial network that modifies the [GAN](https://paperswithcode.com/method/gan) objective to
encourage it to learn interpretable and meaningful representations. This is done by maximizing the
mutual information between a fixed small subset of the GAN’s noise variables and the ob... |
Given the following machine learning model name: Two-Way Dense Layer, provide a description of the model | **Two-Way Dense Layer** is an image model block used in the [PeleeNet](https://paperswithcode.com/method/peleenet) architectures. Motivated by [GoogLeNet](https://paperswithcode.com/method/googlenet), the 2-way dense layer is used to get different scales of receptive fields. One way of the layer uses a 3x3 kernel size.... |
Given the following machine learning model name: AlterNet, provide a description of the model | |
Given the following machine learning model name: OpenPose, provide a description of the model | |
Given the following machine learning model name: Voxel RoI Pooling, provide a description of the model | **Voxel RoI Pooling** is an RoI feature extractor that extracts RoI features directly from voxel features for further refinement. It starts by dividing a region proposal into $G \times G \times G$ regular sub-voxels. The center point is taken as the grid point of the corresponding sub-voxel. Since 3D feature volumes are e... |
Given the following machine learning model name: TridentNet Block, provide a description of the model | A **TridentNet Block** is a feature extractor used in object detection models. Instead of feeding in multi-scale inputs like the image pyramid, in a [TridentNet](https://paperswithcode.com/method/tridentnet) block we adapt the backbone network for different scales. These blocks create multiple scale-specific feature ma... |
Given the following machine learning model name: Mask Scoring R-CNN, provide a description of the model | **Mask Scoring R-CNN** is a Mask RCNN with MaskIoU Head, which takes the instance feature and the predicted mask together as input, and predicts the IoU between input mask and ground truth mask. |
Given the following machine learning model name: Atrous Spatial Pyramid Pooling, provide a description of the model | **Atrous Spatial Pyramid Pooling (ASPP)** is a semantic segmentation module for resampling a given feature layer at multiple rates prior to [convolution](https://paperswithcode.com/method/convolution). This amounts to probing the original image with multiple filters that have complementary effective fields of view, thu... |
Given the following machine learning model name: Neo-fuzzy-neuron, provide a description of the model | **Neo-fuzzy-neuron** is a type of artificial neural network that combines the characteristics of both fuzzy logic and neural networks. It uses a fuzzy inference system to model non-linear relationships between inputs and outputs, and a feedforward neural network to learn the parameters of the fuzzy system. The combinat... |
Given the following machine learning model name: VocGAN, provide a description of the model | |
Given the following machine learning model name: PCA Whitening, provide a description of the model | **PCA Whitening** is a processing step for image based data that makes input less redundant. Adjacent pixel or feature values can be highly correlated, and whitening through the use of [PCA](https://paperswithcode.com/method/pca) reduces this degree of correlation.
Image Source: [Wikipedia](https://en.wikipedia.org/... |
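The whitening step is a small eigendecomposition exercise, sketched below in numpy: center the data, diagonalize the covariance, and rescale each principal component to unit variance. The small `eps` term guarding the division is a common implementation detail, not part of the definition.

```python
import numpy as np

def pca_whiten(X, eps=1e-8):
    """PCA-whiten rows of X (n_samples, n_features): decorrelate features
    and scale each principal component to unit variance."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / X.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)    # cov = V diag(eigvals) V^T
    return (Xc @ eigvecs) / np.sqrt(eigvals + eps)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 3))   # correlated features
Z = pca_whiten(X)
```

After whitening, the empirical covariance of `Z` is (up to `eps`) the identity matrix: the correlations between adjacent features the text mentions have been removed.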
Given the following machine learning model name: CSPResNeXt, provide a description of the model | **CSPResNeXt** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNeXt](https://paperswithcode.com/method/resnext). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a split and m... |
Given the following machine learning model name: Filter Response Normalization, provide a description of the model | **Filter Response Normalization (FRN)** is a type of normalization that combines normalization and an activation function, which can be used as a replacement for other normalizations and activations. It operates on each activation channel of each batch element independently, eliminating the dependency on other batch el... |
Given the following machine learning model name: Graph Attention Network v2, provide a description of the model | The __GATv2__ operator from the [“How Attentive are Graph Attention Networks?”](https://arxiv.org/abs/2105.14491) paper, which fixes the static attention problem of the standard [GAT](https://paperswithcode.com/method/gat) layer: since the linear layers in the standard GAT are applied right after each other, the rankin... |
Given the following machine learning model name: Make-A-Scene, provide a description of the model | Make-A-Scene is a text-to-image method that (i) enables a simple control mechanism complementary to text in the form of a scene, (ii) introduces elements that improve the tokenization process by employing domain-specific knowledge over key image regions (faces and salient objects), and (iii) adapts classifier-free guid... |
Given the following machine learning model name: energy-based model, provide a description of the model | |
Given the following machine learning model name: Distance to Modelled Embedding, provide a description of the model | **DIME**, or **Distance to Modelled Embedding**, is a method for detecting out-of-distribution examples during prediction time. Given a trained neural network, the training data drawn from some high-dimensional distribution in data space $X$ is transformed into the model’s intermediate feature vector space $\mathbb{R}^... |
Given the following machine learning model name: Experience Replay, provide a description of the model | **Experience Replay** is a replay memory technique used in reinforcement learning where we store the agent’s experiences at each time-step, $e\_{t} = \left(s\_{t}, a\_{t}, r\_{t}, s\_{t+1}\right)$ in a data-set $D = e\_{1}, \cdots, e\_{N}$ , pooled over many episodes into a replay memory. We then usually sample the mem... |
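The data structure described above is a bounded buffer with uniform random sampling. The sketch below is a minimal version using a `deque`; real implementations add batching of tensors, prioritization, etc.

```python
import random
from collections import deque

class ReplayMemory:
    """Fixed-capacity replay memory of (s, a, r, s_next) transitions."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)   # oldest experiences evicted first

    def push(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlation
        # between consecutive transitions.
        return random.sample(self.buffer, batch_size)

memory = ReplayMemory(capacity=100)
for t in range(150):            # pushing past capacity drops the oldest entries
    memory.push(t, t % 4, 1.0, t + 1)
batch = memory.sample(32)
```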
Given the following machine learning model name: Bayesian Reward Extrapolation, provide a description of the model | **Bayesian Reward Extrapolation** is a Bayesian reward learning algorithm that scales to high-dimensional imitation learning problems by pre-training a low-dimensional feature encoding via self-supervised tasks and then leveraging preferences over demonstrations to perform fast Bayesian inference. |
Given the following machine learning model name: simple Copy-Paste, provide a description of the model | |
Given the following machine learning model name: Wide&Deep, provide a description of the model | **Wide&Deep** jointly trains wide linear models and deep neural networks to combine the benefits of memorization and generalization for real-world recommender systems. In summary, the wide component is a generalized linear model. The deep component is a feed-forward neural network. The deep and wide components are comb... |
Given the following machine learning model name: SCARLET, provide a description of the model | **SCARLET** is a type of convolutional neural architecture learnt by the [SCARLET-NAS](https://paperswithcode.com/method/scarlet-nas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method. The three variants are SCARLET-A, SCARLET-B and SCARLET-C. The basic building block is ... |
Given the following machine learning model name: Deep Boltzmann Machine, provide a description of the model | A **Deep Boltzmann Machine (DBM)** is a three-layer generative model. It is similar to a [Deep Belief Network](https://paperswithcode.com/method/deep-belief-network), but instead allows bidirectional connections in the bottom layers. Its energy function is as an extension of the energy function of the RBM:
$$ E\lef... |
Given the following machine learning model name: Depthwise Separable Convolution, provide a description of the model | While [standard convolution](https://paperswithcode.com/method/convolution) performs the channelwise and spatial-wise computation in one step, **Depthwise Separable Convolution** splits the computation into two steps: [depthwise convolution](https://paperswithcode.com/method/depthwise-convolution) applies a single con... |
Given the following machine learning model name: Probabilistic Continuously Indexed Domain Adaptation, provide a description of the model | **Probabilistic Continuously Indexed Domain Adaptation** (**PCIDA**) enjoys better theoretical guarantees to match both the mean and variance of the distribution $p(u|z)$. PCIDA can be extended to match higher-order moments. |
Given the following machine learning model name: Blended Diffusion, provide a description of the model | Blended Diffusion enables a zero-shot local text-guided image editing of natural images.
Given an input image $x$, an input mask $m$ and a target guiding text $t$, the method changes the masked area within the image to match the guiding text such that the unmasked area is left unchanged. |
Given the following machine learning model name: Orthogonal Regularization, provide a description of the model | **Orthogonal Regularization** is a regularization technique for convolutional neural networks, introduced with generative modelling as the task in mind. Orthogonality is argued to be a desirable quality in ConvNet filters, partially because multiplication by an orthogonal matrix leaves the norm of the original matrix u... |
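A common way to encourage orthogonality is to penalize the deviation of the filter Gram matrix from the identity. The sketch below uses the squared Frobenius norm $\|WW^{\top} - I\|_F^2$ as the penalty; this is one standard formulation, assumed here for illustration rather than taken from the original paper.

```python
import numpy as np

def orthogonal_penalty(W):
    """||W W^T - I||_F^2 for a filter matrix W of shape (filters, fan_in);
    zero exactly when the rows of W are orthonormal."""
    k = W.shape[0]
    d = W @ W.T - np.eye(k)
    return float(np.sum(d * d))

# An orthogonal matrix (from a QR decomposition) incurs ~zero penalty.
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(8, 8)))
```

The penalty is added to the task loss with a small coefficient, nudging filters toward orthonormal rows without hard-constraining them.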
Given the following machine learning model name: Auxiliary Classifier, provide a description of the model | **Auxiliary Classifiers** are type of architectural component that seek to improve the convergence of very deep networks. They are classifier heads we attach to layers before the end of the network. The motivation is to push useful gradients to the lower layers to make them immediately useful and improve the convergenc... |
Given the following machine learning model name: FoveaBox, provide a description of the model | **FoveaBox** is anchor-free framework for object detection. Instead of using predefined anchors to enumerate possible locations, scales and aspect ratios for the search of the objects, FoveaBox directly learns the object existing possibility and the bounding box coordinates without anchor reference. This is achieved by... |
Given the following machine learning model name: Contour Proposal Network, provide a description of the model | The Contour Proposal Network (CPN) detects possibly overlapping objects in an image while simultaneously fitting pixel-precise closed object contours. The CPN can incorporate state of the art object detection architectures as backbone networks into a fast single-stage instance segmentation model that can be trained end... |
Given the following machine learning model name: Part Affinity Fields, provide a description of the model | |
Given the following machine learning model name: Adam, provide a description of the model | **Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD with Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationar... |
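The combination of momentum and scaling mentioned in this row can be sketched as a single NumPy update step (a minimal sketch of the standard Adam formulas with bias correction; the function and variable names are illustrative, not from the dataset):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
    # m: momentum-style first-moment estimate; v: squared-gradient scaling term.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)  # bias correction for zero-initialized moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy use: minimize f(x) = x^2 starting from x = 5.
theta, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
```

The division by $\sqrt{\hat{v}}$ is the scaling part: parameters with consistently large gradients take proportionally smaller steps.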
Given the following machine learning model name: Path Planning and Motion Control, provide a description of the model | **Path Planning and Motion Control**, or **PPMC RL**, is a training algorithm that teaches path planning and motion control to robots using reinforcement learning in a simulated environment. The focus is on promoting generalization where there are environmental uncertainties such as rough environments like lunar servic... |
Given the following machine learning model name: Joint Learning Architecture, provide a description of the model | **JLA**, or **Joint Learning Architecture**, is an approach for multiple object tracking and trajectory forecasting. It jointly trains a tracking and trajectory forecasting model, and the trajectory forecasts are used for short-term motion estimates in lieu of linear motion prediction methods such as the Kalman filter.... |
Given the following machine learning model name: ComiRec, provide a description of the model | **ComiRec** is a multi-interest framework for sequential recommendation. The multi-interest module captures multiple interests from user behavior sequences, which can be exploited for retrieving candidate items from the large-scale item pool. These items are then fed into an aggregation module to obtain the overall rec... |
Given the following machine learning model name: FLAVR, provide a description of the model | **FLAVR** is an architecture for video frame interpolation. It uses 3D space-time convolutions to enable end-to-end learning and inference for video frame interpolation. Overall, it consists of a [U-Net](https://paperswithcode.com/method/u-net) style architecture with 3D space-time convolutions and
deconvolutions (yel... |
Given the following machine learning model name: Weight excitation, provide a description of the model | A novel built-in attention mechanism that is complementary to prior attention mechanisms (e.g. squeeze-and-excitation, transformers), which are external rather than built-in (please read the paper for more details). |
Given the following machine learning model name: Graph Path Feature Learning, provide a description of the model | **Graph Path Feature Learning** is a probabilistic rule learner optimized to mine instantiated first-order logic rules from knowledge graphs. Instantiated rules contain constants extracted from KGs. Compared to abstract rules that contain no constants, instantiated rules are capable of explaining and expressing concept... |
Given the following machine learning model name: RealFormer, provide a description of the model | **RealFormer** is a type of [Transformer](https://paperswithcode.com/methods/category/transformers) based on the idea of [residual](https://paperswithcode.com/method/residual-connection) attention. It adds skip edges to the backbone [Transformer](https://paperswithcode.com/method/transformer) to create multiple direct ... |
Given the following machine learning model name: EdgeBoxes, provide a description of the model | **EdgeBoxes** is an approach for generating object bounding box proposals directly from edges. Similar to segments, edges provide a simplified but informative representation of an image. In fact, line drawings of an image can accurately convey the high-level information contained in an image
using only a small fractio... |
Given the following machine learning model name: Human Robot Interaction Pipeline, provide a description of the model | The pipeline we propose consists of three parts: 1) recognizing the interaction type; 2) detecting the object that the interaction is targeting; and 3) learning incrementally the models from data recorded by the robot sensors. Our main contributions lie in the target object detection, guided by the recognized interacti... |
Given the following machine learning model name: AdaGrad, provide a description of the model | **AdaGrad** is a stochastic optimization method that adapts the learning rate to the parameters. It performs smaller updates for parameters associated with frequently occurring features, and larger updates for parameters associated with infrequently occurring features. In its update rule, Adagrad modifies the general l... |
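The per-parameter adaptation in this row can be sketched as follows (a minimal NumPy sketch of the standard AdaGrad update; the function and variable names are illustrative, not from the dataset):

```python
import numpy as np

def adagrad_step(theta, grad, accum, lr=0.5, eps=1e-8):
    # accum holds the running sum of squared gradients per parameter, so
    # frequently-updated parameters get smaller effective learning rates.
    accum = accum + grad ** 2
    theta = theta - lr * grad / (np.sqrt(accum) + eps)
    return theta, accum

# Toy use: minimize f(x) = x^2 starting from x = 5.
theta, accum = np.array([5.0]), np.zeros(1)
for _ in range(500):
    theta, accum = adagrad_step(theta, 2 * theta, accum)
```

Because `accum` only grows, the effective learning rate shrinks monotonically over training, which is the property RMSProp and Adam later relaxed with exponential averaging.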