"""
PyTorch implementation of CapsNet in Sabour, Hinton et al.'s paper,
"Dynamic Routing Between Capsules". NIPS 2017.
https://arxiv.org/abs/1710.09829

Author: Cedric Chee
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

import utils


class CapsuleLayer(nn.Module):
    """
    The core implementation of the idea of capsules.
    """
    def __init__(self, in_unit, in_channel, num_unit, unit_size, use_routing,
                 num_routing, cuda_enabled):
        super(CapsuleLayer, self).__init__()

        self.in_unit = in_unit
        self.in_channel = in_channel
        self.num_unit = num_unit
        self.use_routing = use_routing
        self.num_routing = num_routing
        self.cuda_enabled = cuda_enabled

        if self.use_routing:
            """
            Based on the paper, DigitCaps (a capsule layer whose inputs are
            themselves capsule outputs) uses a routing algorithm that relies
            on this weight matrix, W_ij.
            """
            # weight shape:
            # [1 x primary_unit_size x num_classes x output_unit_size x num_primary_unit]
            # == [1 x 1152 x 10 x 16 x 8]
            self.weight = nn.Parameter(torch.randn(1, in_channel, num_unit, unit_size, in_unit))
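            # Each W_ij maps an 8D capsule output u_i from the layer below to
            # a 16D "prediction vector" u_hat_j|i = W_ij u_i; see routing().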
        else:
            """
            According to the CapsNet architecture section in the paper,
            routing happens only between two consecutive capsule layers
            (e.g. PrimaryCapsules and DigitCaps); no routing is used between
            Conv1 and PrimaryCapsules. This means PrimaryCapsules is composed
            of several convolutional units.
            """
            # Define num_unit (8) convolutional units
            # (32 output channels, 9x9 kernel, stride 2 each).
            self.conv_units = nn.ModuleList([
                nn.Conv2d(self.in_channel, 32, 9, 2) for _ in range(self.num_unit)
            ])
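            # For the paper's MNIST setup, each unit maps the 20x20x256 Conv1
            # feature map to 32x6x6, so flattening yields 32 * 6 * 6 = 1152
            # capsules, each an 8D vector across the num_unit dimension.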

    def forward(self, x):
        if self.use_routing:
            # Currently used by DigitCaps layer.
            return self.routing(x)
        else:
            # Currently used by PrimaryCaps layer.
            return self.no_routing(x)
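
    # A minimal reconstruction sketch of the PrimaryCaps forward pass,
    # assuming utils.squash(t, dim) applies the paper's Eq. 1 squashing
    # non-linearity along dimension `dim`.
    def no_routing(self, x):
        """
        Forward pass without routing, used by PrimaryCaps.

        :param x: Conv1 feature map, e.g. shape [128, 256, 20, 20]
        :return: squashed capsule outputs of shape [128, 8, 1152]
        """
        # Apply each convolutional unit. Each output: [128, 32, 6, 6].
        unit = [self.conv_units[i](x) for i in range(self.num_unit)]

        # Stack along a new unit dimension: [128, 8, 32, 6, 6].
        unit = torch.stack(unit, dim=1)

        # Flatten the 32 grids of 6x6 into 1152 capsules: [128, 8, 1152].
        batch_size = x.size(0)
        unit = unit.view(batch_size, self.num_unit, -1)

        # Squash each 8D capsule vector (dim 1 holds the 8 components).
        return utils.squash(unit, dim=1)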

    def routing(self, x):
        """
        Routing algorithm for capsule.

        :param x: input tensor of shape [128, 8, 1152]
        :return: vector output of capsule j
        """
        batch_size = x.size(0)

        x = x.transpose(1, 2)  # dim 1 and dim 2 are swapped; output shape: [128, 1152, 8]

        # Stack and add a dimension to the tensor.
        # stack ops output shape: [128, 1152, 10, 8]
        # unsqueeze ops output shape: [128, 1152, 10, 8, 1]
        x = torch.stack([x] * self.num_unit, dim=2).unsqueeze(4)

        # Convert the single weight to a batch weight.
        # [1 x 1152 x 10 x 16 x 8] to: [128, 1152, 10, 16, 8]
        batch_weight = torch.cat([self.weight] * batch_size, dim=0)

        # u_hat holds the "prediction vectors" from the capsules in the layer below.
        # Transform inputs by the weight matrix.
        # Matrix product of 2 tensors with shape: [128, 1152, 10, 16, 8] x [128, 1152, 10, 8, 1]
        # u_hat shape: [128, 1152, 10, 16, 1]
        u_hat = torch.matmul(batch_weight, x)

        # All the routing logits (b_ij in the paper) are initialized to zero.
        # self.in_channel = primary_unit_size = 32 * 6 * 6 = 1152
        # self.num_unit = num_classes = 10
        # b_ij shape: [1, 1152, 10, 1]
        b_ij = Variable(torch.zeros(1, self.in_channel, self.num_unit, 1))
        if self.cuda_enabled:
            b_ij = b_ij.cuda()

        # As reported in the "Capsules on MNIST" section of the paper,
        # the MNIST results use a CapsNet with 3 routing iterations.
        num_iterations = self.num_routing

        # Routing loop, following Procedure 1 of the paper. The body below is
        # a reconstruction sketch; it again assumes utils.squash implements
        # Eq. 1 along the given dimension.
        for iteration in range(num_iterations):
            # Coupling coefficients: c_ij = softmax(b_ij) over output capsules.
            c_ij = F.softmax(b_ij, dim=2)

            # Broadcast to the batch: [128, 1152, 10, 1, 1]
            c_ij = torch.cat([c_ij] * batch_size, dim=0).unsqueeze(4)

            # Weighted sum over input capsules: s_j shape [128, 1, 10, 16, 1]
            s_j = (c_ij * u_hat).sum(dim=1, keepdim=True)

            # Squash along the 16D output capsule dimension: v_j = squash(s_j).
            v_j = utils.squash(s_j, dim=3)

            # Agreement a_ij = u_hat . v_j updates the routing logits,
            # averaged over the batch since b_ij is shared across examples.
            v_j_tiled = torch.cat([v_j] * self.in_channel, dim=1)
            u_vj = torch.matmul(u_hat.transpose(3, 4), v_j_tiled).squeeze(4).mean(dim=0, keepdim=True)
            b_ij = b_ij + u_vj

        # Final DigitCaps output: shape [128, 10, 16, 1].
        return v_j.squeeze(1)
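

# A minimal usage sketch, assuming the conventional CapsNet-on-MNIST sizes
# (these constructor arguments are illustrative, not taken from the original
# training script): PrimaryCaps consumes the 256-channel Conv1 feature map,
# and DigitCaps routes 1152 8D capsules into 10 16D capsules.
if __name__ == '__main__':
    primary = CapsuleLayer(in_unit=0, in_channel=256, num_unit=8, unit_size=8,
                           use_routing=False, num_routing=3, cuda_enabled=False)
    digits = CapsuleLayer(in_unit=8, in_channel=32 * 6 * 6, num_unit=10,
                          unit_size=16, use_routing=True, num_routing=3,
                          cuda_enabled=False)

    feature_map = torch.randn(2, 256, 20, 20)  # stand-in for a Conv1 output
    u = primary(feature_map)                   # -> [2, 8, 1152]
    v = digits(u)                              # -> [2, 10, 16, 1]
    print(u.size(), v.size())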